On Thu, Nov 24, 2011 at 5:29 PM, Jed Brown <span dir="ltr"><<a href="mailto:jedbrown@mcs.anl.gov">jedbrown@mcs.anl.gov</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="gmail_quote"><div class="im">On Thu, Nov 24, 2011 at 17:09, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>One key operation which has not yet been discussed is the "push forward" of a mapping as Dmitry put it. Here is a scenario:</div><div>We understand a matching of mesh points between processes.</div></blockquote>
<div><br></div></div><div>Can I assume that ghosting processes know (owner rank, offset) for each point?</div></div></blockquote><div><br></div><div>This is not my current model, but it is a possible one. I do not "ghost" the layout, only</div>
<div>the points. I cannot think of a reason you could not distribute this information, but it would</div><div>not remove the need for a "fuse" function for the data which gets communicated; that is,</div><div>you cannot directly insert what you get from other processes.</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div class="gmail_quote"><div> </div><div class="im"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
In order to construct a ghost communication (VecScatter), I</div>
<div>need to compose the mapping between mesh points and the mapping of mesh points to data.</div></blockquote><div><br></div></div><div>Can the mapping of mesh points to data be known through an array (of length num_owned_points) local_offset or, in the case of non-uniform data size, as the array (local_offset, size)?</div>
</div></blockquote><div><br></div><div>Yes.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div class="gmail_quote"><div>In that case, the fact that remote processes have (owner rank, offset) means that I can broadcast (local_offset, size) with purely local setup (this is the first primitive which can be implemented using MPI_Get()).</div>
</div></blockquote><div><br></div><div>Okay, I would really like this coded up. We can do a 1-D mesh of Lagrangian elements just to show me what is going on.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="gmail_quote"><div>Now the ghosting procs know (owner rank, offset, size) for each point, so again, with purely local setup, the scatter is defined (the forward scatter is implemented with MPI_Get(), the uses MPI_Accumulate() which does a specified reduction). Note that the actual communication has no user-visible packing because it uses an MPI_Datatype.</div>
</div></blockquote><div><br></div><div>I am still not sure how much this buys you since you had to communicate that offset info somehow.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="gmail_quote"><div>Alternatively, given (owner rank, offset, size), we can literally call VecScatterCreate() after just an MPI_Scan(), which is logarithmic, and local setup. but VecScatterCreate() does lots of unnecessary setup to build the two-way representation.</div>
<div><br></div><div><br></div><div>Redistributing a mesh after partitioning is slightly more demanding. First, senders are enumerated using a fetch-and-add initiated by the sending process which has the side-effect of counting the number of nodes that will be in the new partition and informing the senders of the offsets at which to deposit those nodes. Then we broadcast the (rank, offset) of each node on the sender to the receiver. Then we send connectivity using a non-uniform broadcast. Now, before moving data, we can reorder locally, inform the sender of the new ordering, and then move all the data.</div>
</div></blockquote><div><br></div><div>Great. This is exactly what I hate about this crap. It always seems specially coded for the problem. I think we can use exactly the</div><div>primitive above to do your non-uniform broadcast step. Mesh topology is just another function over the mesh points.</div>
<div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div class="gmail_quote"><div>I think these are all simple loops and single calls to the communication primitives.</div>
</div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener<br>