Replies: 2 comments
@rohanbabbar04 a few comments from the latest version of DistributedArray:
Maybe worth wrapping it into a method or auxiliary function.
This discussion is meant to discuss the design choices of the `DistributedArray` class. All in all, I think it looks good and its design seems nice and clean. Also, you handle both of the scenarios we are interested in (a global array divided into parts, and a global array replicated to all ranks).
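The two scenarios mentioned above (a global array split into per-rank parts vs. one replicated to all ranks) can be sketched without MPI. This is a hypothetical illustration, not the actual pylops-mpi API: `local_split` and the partition codes `"S"`/`"B"` are names invented here for clarity.

```python
import numpy as np

def local_split(global_shape, nranks, rank, partition="S"):
    """Return the slice of the global array owned by `rank` (illustrative only).

    partition="S" -> global array divided into (near-)equal parts per rank
    partition="B" -> global array replicated (broadcast) to all ranks
    """
    n = global_shape[0]
    if partition == "B":
        # replicated: every rank holds the full array
        return slice(0, n)
    # partitioned: split the first axis as evenly as possible,
    # giving the first (n % nranks) ranks one extra element
    counts = [n // nranks + (1 if r < n % nranks else 0) for r in range(nranks)]
    start = sum(counts[:rank])
    return slice(start, start + counts[rank])

x = np.arange(10)
# partitioned view: with 3 ranks the sizes are 4, 3, 3, so rank 1 owns 4..6
print(x[local_split(x.shape, 3, 1, partition="S")])  # [4 5 6]
# replicated view: every rank owns the full array
print(x[local_split(x.shape, 3, 1, partition="B")])  # [0 1 2 3 4 5 6 7 8 9]
```

In the real class the slice would select the portion of the global array stored locally in each MPI rank; the point is only that the two partition modes differ solely in which index range a rank considers "its own".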
Here are a few comments regarding the object itself (mostly stylistic):

- Why subclass `np.ndarray`? I assume pretty much all methods will not work as they are unless they are overwritten with distributed logic? From what I understand, maybe this allows you not to reimplement operations like add and subtract, as you work with only the NumPy arrays in each rank?
- I would default `dtype` to `np.float64` to be consistent with what we do in PyLops. I would also always put it last in the order of inputs.
- I would keep `rank` as a member of the `DistributedArray` class, especially for debugging purposes, so that `base_comm.Get_rank()` is not called directly inside the `to_dist` method.
- `type_part` is a bit of a strange name; I guess you could just call it `partition`, as it took me a while to understand that "part" was probably an abbreviation for partition.
- Regarding Scatter (`S`) in `type_part`: for me, scatter means that one rank (say rank 0) has an array and scatters it to all other ranks. Here we are instead creating an array that is naturally distributed and filling each portion in each rank directly. Maybe others have an idea for a better name.
- `SumArray = DistributedArray(self.global_shape, dtype=self.dtype)` does not set `type_part`. Maybe we should also include some logic so that an error is raised if the two arrays have different `type_part`?

A couple of more general comments: