[petsc-dev] Generality of VecScatter

Mark F. Adams mark.adams at columbia.edu
Fri Nov 25 13:07:50 CST 2011


On Nov 25, 2011, at 1:15 PM, Jed Brown wrote:

> On Fri, Nov 25, 2011 at 12:00, Mark F. Adams <mark.adams at columbia.edu> wrote:
>> With my model, the owner never needs to be informed that some procs are ghosting another level, nor be aware of what those ghosted nodes are. It may place some data in an array for the "pointwise broadcast" of connectivity, but it doesn't need semantic knowledge that that information will be used to increase the ghosting. Similarly, any process can stop ghosting a point without informing the owner in any way.
> 
> I don't see how you can do that; you must have a different data model than I do.  We may need a whiteboard for this, but if I want to get an extra layer of ghosts, I need to have the remote process tell me what they are.  I have a distributed graph, so I need to be told who my new ghosts are.
>  
> Suppose for simplicity that the remote process stores its connectivity as a directed graph. In the local data structure, each vertex has an offset into a connectivity array that lists the other vertices it is connected to. This is like CSR storage without the weights. We will communicate this CSR storage in place, without packing and without knowledge of how many remote processes accessed it.
> 
> In order to ghost the original points, the ghosters needed to know (owner rank, index). (This is my "native" representation for ghosting.) That means that I can fetch offset and row length directly from the owner's "row starts" array. With that, I can fetch the rows ("column indices") directly from the owner's storage. Underneath my (thin) API, we'll just be using MPI_Get() for these things.
> 

OK, that's a good start.  You do need two communication steps: 1) get the offsets, 2) get the data (right?).  You avoid packing the send data, but you need to unpack the receive data, converting the raw graph data into the reduced ghost data.  Not simple, but writing from one side only, with only one-to-one communication, is attractive.
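To make the two steps concrete, here is a minimal sketch (not PETSc code, and not the proposed API) of the one-sided exchange under the assumptions above: the owner exposes its CSR "row starts" and "column indices" arrays through MPI windows, and each ghoster already knows the (owner rank, owner-local index) of its current ghosts. All names here (fetch_ghost_rows, rowstart, cols, ghost_owner, ghost_idx) are hypothetical.

/* Illustrative sketch only: one-sided fetch of CSR-style connectivity
 * with MPI_Get.  Each rank exposes its own rowstart/cols arrays and
 * fetches the rows of its ghosts from their owners. */
#include <mpi.h>
#include <stdlib.h>

void fetch_ghost_rows(MPI_Comm comm,
                      int *rowstart, int nlocal,   /* my CSR offsets (nlocal+1 entries) */
                      int *cols, int ncols,        /* my column-index array             */
                      int nghost,
                      const int *ghost_owner,      /* owner rank of each ghost          */
                      const int *ghost_idx)        /* owner-local index of each ghost   */
{
  MPI_Win win_off, win_cols;
  MPI_Win_create(rowstart, (MPI_Aint)(nlocal + 1) * sizeof(int), sizeof(int),
                 MPI_INFO_NULL, comm, &win_off);
  MPI_Win_create(cols, (MPI_Aint)ncols * sizeof(int), sizeof(int),
                 MPI_INFO_NULL, comm, &win_cols);

  /* Step 1: fetch the (start, end) offsets of each ghost row from its owner. */
  int (*off)[2] = malloc(nghost * sizeof *off);
  MPI_Win_fence(0, win_off);
  for (int i = 0; i < nghost; i++)
    MPI_Get(off[i], 2, MPI_INT, ghost_owner[i], ghost_idx[i], 2, MPI_INT, win_off);
  MPI_Win_fence(0, win_off);

  /* Step 2: fetch the column indices (the rows themselves) into one buffer. */
  int total = 0, *displ = malloc((nghost + 1) * sizeof(int));
  for (int i = 0; i < nghost; i++) { displ[i] = total; total += off[i][1] - off[i][0]; }
  displ[nghost] = total;
  int *ghost_cols = malloc(total * sizeof(int));
  MPI_Win_fence(0, win_cols);
  for (int i = 0; i < nghost; i++)
    MPI_Get(ghost_cols + displ[i], displ[i + 1] - displ[i], MPI_INT,
            ghost_owner[i], off[i][0], displ[i + 1] - displ[i], MPI_INT, win_cols);
  MPI_Win_fence(0, win_cols);

  /* ghost_cols[displ[i] .. displ[i+1]) now holds ghost i's connectivity in the
   * owner's local numbering; the "unpacking" is converting these to global or
   * (owner rank, index) form.  The per-ghost MPI_Get calls could also be merged
   * into one call per owner with a derived datatype (e.g. MPI_Type_indexed). */
  free(ghost_cols); free(displ); free(off);
  MPI_Win_free(&win_off);
  MPI_Win_free(&win_cols);
}

Everyone calls this collectively (MPI_Win_create and MPI_Win_fence are collective), but the owner never learns which ranks read which rows, which is the point being made below.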

So keep going.  AMR is good, but I'm not sure I'm up to specifying a complete unstructured AMR algorithm at the moment.  AMG coarse-grid process aggregation is another example....

Or perhaps a parallel maximal independent set algorithm, preferably my algorithm, which is implemented in GAMG and documented in an old paper.

Mark

> The overall semantics are collective, in that the owner needs to provide a send buffer and call MPI_Win_fence(), but it only provides one send buffer (no packing), and each process gets what it needs out of that buffer (by creating a suitable MPI_Datatype for MPI_Get()). The owner does not know how many procs accessed the data or what they accessed.
>  
> I was trying to avoid specifying an algorithm, but complete repartitioning is inherently complex.  I was thinking of a diffusive kind of thing with nearest neighbors.  
> 
> Sure, we can do that sort of thing. There are a variety of ways to "claim" an interface vertex, reducing with MAXLOC is one way.
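For the "claim an interface vertex" step, a minimal sketch of the MAXLOC variant, assuming comm contains only the processes sharing the vertex and priority is whatever tie-breaking key one wants (a random number, a load measure, etc.); claim_interface_vertex is a made-up name, not an existing routine:

#include <mpi.h>

/* Every sharer contributes a priority; MPI_MAXLOC picks the largest value
 * together with the rank that supplied it, so all sharers agree on the
 * winner without the owner keeping any list of who is ghosting what. */
int claim_interface_vertex(MPI_Comm comm, double priority)
{
  int rank;
  MPI_Comm_rank(comm, &rank);
  struct { double val; int rank; } in = { priority, rank }, out;
  MPI_Allreduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MAXLOC, comm);
  return out.rank;  /* rank that now owns the vertex */
}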
