[petsc-dev] Generality of VecScatter

Mark F. Adams mark.adams at columbia.edu
Thu Nov 24 09:34:49 CST 2011


On Nov 23, 2011, at 7:57 PM, Jed Brown wrote:

> On Wed, Nov 23, 2011 at 18:48, Dmitry Karpeev <karpeev at gmail.com> wrote:
> > I wonder if we can express all useful communication with a more restrictive interface similar to a local-to-global map (but mapping between any vectors) where one side (typically called "local", but doesn't need to actually be a local Vec) has at most one edge.
> That would disallow accumulating a MAX "locally".
> 
> I don't know what you mean. Say we have a scatter from X (sometimes "local") to Y (sometimes "global"). One way to describe my communication graph is as a single-valued function on X (containing either the index into Y or NULL). I'm asking whether the single-valued constraint is too restrictive for some important use case.
> 
> The primitives I can readily provide are
> 
> broadcast from Y to X
> reduce values on X into Y (min, max, sum, replace, etc)
> 
> gather values on X into arrays for each point in Y (in some ordering, rank ordering if you like)

This is awful. You would at least need a way of getting the processors it came from; the system would have to have this info, right?

Also, I think you are getting too general here. I'm not sure why anyone would want to do this, but I'm sure someone would, and they can just write directly to MPI-3.

> scatter values in arrays for each point in Y to the point in X
> 
> These latter two involve a data structure that is as big as the number of sharing processes (because that is explicitly being requested), but are still implemented where the owner of that point in Y has no knowledge of which processes it is communicating with. With the capability of hardware RMA, this can actually be more efficient than if the two-sided graph was available (and it's less code and fewer data structures).
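
As a concrete illustration of the single-valued graph being discussed, here is a minimal sketch (the names RemotePoint, SingleValuedGraph, and GraphBcast are hypothetical, not the actual VecScatter or PETSc API). It assumes each local point ("leaf") stores at most one remote location ("root") as an owner rank plus an offset into the owner's array, and that an MPI-3 window has already been created over the owner's root array with disp_unit equal to sizeof(double):

  #include <mpi.h>

  typedef struct {
    int rank;    /* owning process of the root, or -1 if this leaf has no edge */
    int index;   /* offset of the root in the owner's root array               */
  } RemotePoint;

  typedef struct {
    int          nleaves;  /* number of local (leaf) points                 */
    RemotePoint *remote;   /* remote location for each leaf, length nleaves */
  } SingleValuedGraph;

  /* Broadcast from Y to X: pull each referenced root value into its leaf. */
  static void GraphBcast(const SingleValuedGraph *g, MPI_Win win, double *leafdata)
  {
    MPI_Win_fence(0, win);
    for (int i = 0; i < g->nleaves; i++) {
      if (g->remote[i].rank < 0) continue;  /* unconnected leaf */
      MPI_Get(&leafdata[i], 1, MPI_DOUBLE, g->remote[i].rank,
              (MPI_Aint)g->remote[i].index, 1, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);
  }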
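
The reduce direction (X into Y) can be sketched the same way; MPI_MAX, MPI_MIN, or MPI_REPLACE could stand in for MPI_SUM. This is one way to read the point above that the owner of a root never needs a list of the ranks referencing it:

  /* Reduce from X into Y: accumulate each leaf's value onto its root.
   * Only the origin side names a target rank; the owner issues no calls
   * that depend on which processes reference its roots. */
  static void GraphReduce(const SingleValuedGraph *g, MPI_Win win, const double *leafdata)
  {
    MPI_Win_fence(0, win);
    for (int i = 0; i < g->nleaves; i++) {
      if (g->remote[i].rank < 0) continue;
      MPI_Accumulate(&leafdata[i], 1, MPI_DOUBLE, g->remote[i].rank,
                     (MPI_Aint)g->remote[i].index, 1, MPI_DOUBLE, MPI_SUM, win);
    }
    MPI_Win_fence(0, win);
  }

Fence synchronization is used here only to keep the sketch short; a real implementation would presumably use passive-target or more selective synchronization.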
