<div class="gmail_quote">On Fri, Nov 25, 2011 at 12:00, Mark F. Adams <span dir="ltr"><<a href="mailto:mark.adams@columbia.edu">mark.adams@columbia.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div><div class="im"><blockquote type="cite"><div class="gmail_quote"><div>With my model, the owner never needs to be informed that some procs are ghosting another level, nor be aware of what those ghosted nodes are. It may place some data in an array for the "pointwise broadcast" of connectivity, but it doesn't need semantic knowledge that that information will be used to increase the ghosting. Similarly, any process can stop ghosting a point without informing the owner in any way.</div>
</div></blockquote><div><br></div></div><div>I don't see how you can do that; you must have a different data model than mine. We may need a whiteboard for this, but if I want to get an extra layer of ghosts I need to have the remote process tell me what they are. I have a distributed graph, so I need to be told who my new ghosts are.</div>
</div></blockquote><div> </div><div>Suppose for simplicity that the remote process stores its connectivity as a directed graph. In the local data structure, each vertex has an offset into a connectivity array that lists the other vertices it is connected to. This is like CSR storage without the weights. We will communicate this CSR storage in place, without packing and without knowledge of how many remote processes accessed it.</div>
<div><br></div><div>In order to ghost the original points, the ghosters needed to know (owner rank, index). (This is my "native" representation for ghosting.) That means I can fetch the offset and row length directly from the owner's "row starts" array. With those, I can fetch the rows ("column indices") directly from the owner's storage. Underneath my (thin) API, we'll just be using MPI_Get() for these things.</div>
<div><br></div><div>The overall semantics are collective, in that the owner needs to provide a send buffer and call MPI_Win_fence(), but it only provides one send buffer (no packing), and each process gets what it needs out of that buffer (by creating a suitable MPI_Datatype for MPI_Get()). The owner does not know how many procs accessed the data or what they accessed.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div><div class="im"></div></div><div>I was trying to avoid specifying an algorithm, but complete repartitioning is inherently complex. I was thinking of a diffusive kind of thing with nearest neighbors. </div>
</blockquote></div><br><div>Sure, we can do that sort of thing. There are a variety of ways to "claim" an interface vertex; reducing with MAXLOC is one way.</div>