[petsc-dev] Generality of VecScatter

Jed Brown jedbrown at mcs.anl.gov
Fri Nov 25 12:15:34 CST 2011


On Fri, Nov 25, 2011 at 12:00, Mark F. Adams <mark.adams at columbia.edu> wrote:

> With my model, the owner never needs to be informed that some procs are
> ghosting another level, nor be aware what those ghosted nodes are. It may
> place some data in an array for the "pointwise broadcast" of connectivity,
> but it doesn't need semantic knowledge that that information will be used to
> increase the ghosting. Similarly, any process can stop ghosting a point
> without informing the owner in any way.
>
>
> I don't see how you can do that; you must have a different data model than
> mine.  We may need a whiteboard for this, but if I want to get an extra
> layer of ghosts, I need to have the remote process tell me what they are.
> I have a distributed graph, so I need to be told who my new ghosts are.
>

Suppose for simplicity that the remote process stores its connectivity as
a directed graph. In the local data structure, each vertex has an offset
into a connectivity array that lists the other vertices that it is
connected to. This is like CSR storage without the weights. We will
communicate this CSR storage in-place, without packing and without
knowledge of how many remote processes accessed it.

In order to ghost the original points, the ghosters needed to know (owner
rank, index). (This is my "native" representation for ghosting.) That means
that I can fetch offset and row length directly from the owner's "row
starts" array. With that, I can fetch the rows ("column indices") directly
from the owner's storage. Underneath my (thin) API, we'll just be using
MPI_Get() for these things.

The overall semantics are collective, in that the owner needs to provide a
send buffer and call MPI_Win_fence(), but it only provides one send buffer
(no packing), and each process gets what it needs out of that buffer (by
creating a suitable MPI_Datatype for MPI_Get()). The owner does not know
how many procs accessed the data or what they accessed.


> I was trying to avoid specifying an algorithm, but complete repartitioning
> is inherently complex.  I was thinking of a diffusive kind of thing with
> nearest neighbors.
>

Sure, we can do that sort of thing. There are a variety of ways to "claim"
an interface vertex; reducing with MAXLOC is one way.