Is O(commSize) storage considered unscalable in this model?

Dmitry.

On Thu, Nov 24, 2011 at 5:54 PM, Jed Brown <jedbrown@mcs.anl.gov> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div class="gmail_quote"><div class="im">On Thu, Nov 24, 2011 at 17:45, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>This not model my current model, but it is a possible one. I do not "ghost" the layout, only</div>
<div>the points.</div></blockquote><div><br></div></div><div>I wasn't asking that. How do you represent a ghosted point? I thought you do it by (rank, index). I have everything I need if the ghoster has (owner rank, index), or equivalently (modulo MPI_Scan), the global index of the point it is ghosting.</div>
<div class="im">
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div></div><div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="gmail_quote"><div>In that case, the fact that remote processes have (owner rank, offset) means that I can broadcast (local_offset, size) with purely local setup (this is the first primitive which can be implemented using MPI_Get()).</div>
</div></blockquote><div><br></div></div><div>Okay, I would really like this coded up. We can do a 1-D mesh of Lagrangian elements just to show me what is going on.</div></blockquote><div><br></div></div><div>Sure, it's simple.</div>
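A minimal sketch of that MPI_Get() primitive for the requested 1-D case,
assuming a periodic P1 mesh where each rank ghosts the first vertex of the
next rank; the mesh, the field values, and all variable names here are
illustrative, not an existing API. The point it demonstrates is that the
ghoster needs only (owner rank, offset), the owner never learns who reads
from it, and the epoch is passive-target, hence non-synchronizing:

/* Sketch only: ghost update with purely local setup.  Each rank owns
 * `nowned` vertex values and ghosts owned[0] of the next rank, known to
 * it only as (owner rank, offset). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  int     rank, size, nowned = 4, owner;
  double  owned[4], ghost = -1.0;
  MPI_Win win;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);
  for (int i = 0; i < nowned; i++) owned[i] = rank + 0.1*i; /* fake field */

  /* Collectively expose the owned values; no ghoster lists are exchanged. */
  MPI_Win_create(owned, nowned*sizeof(double), sizeof(double),
                 MPI_INFO_NULL, MPI_COMM_WORLD, &win);

  owner = (rank + 1) % size;  /* periodic 1-D mesh */
  /* Passive-target epoch: the owner does not participate explicitly. */
  MPI_Win_lock(MPI_LOCK_SHARED, owner, 0, win);
  MPI_Get(&ghost, 1, MPI_DOUBLE, owner, 0 /* local_offset */, 1, MPI_DOUBLE, win);
  MPI_Win_unlock(owner, win);

  printf("[%d] ghost of (%d, offset 0) = %g\n", rank, owner, ghost);
  MPI_Win_free(&win);
  MPI_Finalize();
  return 0;
}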
<div class="im">
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>I am still not sure how much this buys you since you had to communicate that offset info somehow.</div>
</blockquote><div><br></div></div><div>It buys you that the next operation is non-synchronizing, has no user-visible packing, and does not have memory scalability issues if every process needs data from one point.</div><div class="im">
>>> Alternatively, given (owner rank, offset, size), we can literally call
>>> VecScatterCreate() after just an MPI_Scan(), which is logarithmic, and
>>> local setup. But VecScatterCreate() does lots of unnecessary setup to
>>> build the two-way representation.
>>>
>>> Redistributing a mesh after partitioning is slightly more demanding.
>>> First, senders are enumerated using a fetch-and-add initiated by the
>>> sending process, which has the side effect of counting the number of
>>> nodes that will be in the new partition and informing the senders of
>>> the offsets at which to deposit those nodes. Then we broadcast the
>>> (rank, offset) of each node on the sender to the receiver. Then we
>>> send connectivity using a non-uniform broadcast. Now, before moving
>>> data, we can reorder locally, inform the sender of the new ordering,
>>> and then move all the data.
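To illustrate the quoted MPI_Scan()-then-VecScatterCreate() route, here is
a hedged sketch (the PETSc calls are real, but the ring ghosting pattern is
invented for the example and error checking is omitted). Each rank ghosts
the first entry of the next rank; because MPI_Scan() is an inclusive prefix
sum, a rank's scan result is exactly the start of the next rank's range, so
turning (owner rank, offset) into a global index is purely local arithmetic:

/* Sketch: local-setup scatter creation; the mesh pattern is made up. */
#include <petscvec.h>

int main(int argc, char **argv)
{
  PetscInt    nowned = 4, end, gidx;
  PetscMPIInt rank, size;
  Vec         global, ghost;
  IS          from, to;
  VecScatter  scatter;

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  MPI_Comm_size(PETSC_COMM_WORLD, &size);

  /* Inclusive prefix sum: `end` is one past my range, i.e. the start of
   * rank+1's range, so the ghost's global index needs no further messages. */
  MPI_Scan(&nowned, &end, 1, MPIU_INT, MPI_SUM, PETSC_COMM_WORLD);
  gidx = (rank + 1 < size) ? end : 0; /* periodic: last rank wraps to (0, offset 0) */

  VecCreateMPI(PETSC_COMM_WORLD, nowned, PETSC_DETERMINE, &global);
  VecSet(global, (PetscScalar)rank);
  VecCreateSeq(PETSC_COMM_SELF, 1, &ghost);

  ISCreateGeneral(PETSC_COMM_WORLD, 1, &gidx, PETSC_COPY_VALUES, &from);
  ISCreateStride(PETSC_COMM_SELF, 1, 0, 1, &to);
  VecScatterCreate(global, from, ghost, to, &scatter);
  VecScatterBegin(scatter, global, ghost, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(scatter, global, ghost, INSERT_VALUES, SCATTER_FORWARD);

  VecScatterDestroy(&scatter);
  ISDestroy(&from); ISDestroy(&to);
  VecDestroy(&global); VecDestroy(&ghost);
  PetscFinalize();
  return 0;
}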
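And a sketch of just the fetch-and-add enumeration step from the quoted
redistribution recipe. This is written with MPI-3's MPI_Fetch_and_op, which
postdates this thread, and the send counts and destinations are made up;
the point is only the mechanism: each sender atomically adds its node count
to a counter on the new owner and gets back the offset at which to deposit,
while the counter itself ends up holding the size of the new partition.

/* Sketch: sender-initiated fetch-and-add; the partition is pretend data. */
#include <petscsys.h>

int main(int argc, char **argv)
{
  PetscMPIInt rank, size, newowner;
  PetscInt    incoming = 0, offset, nsend;
  MPI_Win     win;

  PetscInitialize(&argc, &argv, NULL, NULL);
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  MPI_Comm_size(PETSC_COMM_WORLD, &size);

  nsend    = rank + 1;           /* pretend we send rank+1 nodes ...   */
  newowner = (rank + 1) % size;  /* ... all destined for the next rank */

  /* Each rank exposes one counter for nodes arriving in its new partition. */
  MPI_Win_create(&incoming, sizeof(PetscInt), sizeof(PetscInt),
                 MPI_INFO_NULL, PETSC_COMM_WORLD, &win);
  MPI_Win_fence(0, win);
  /* Atomic fetch-and-add: returns my deposit offset on the receiver and,
   * as a side effect, accumulates the receiver's total node count. */
  MPI_Fetch_and_op(&nsend, &offset, MPIU_INT, newowner, 0, MPI_SUM, win);
  MPI_Win_fence(0, win);

  PetscPrintf(PETSC_COMM_SELF, "[%d] new partition size %D; deposit offset %D on rank %d\n",
              rank, incoming, offset, newowner);
  MPI_Win_free(&win);
  PetscFinalize();
  return 0;
}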
>> Great. This is exactly what I hate about this crap. It always seems
>> specially coded for the problem. I think we can use exactly the primitive
>> above to do your non-uniform broadcast step. Mesh topology is just
>> another function over the mesh points.
> Yeah, there is still a question of what to store (e.g. FV and DG can
> throw away vertices and edges, lowest-order FE can throw away faces and
> edges), but that is orthogonal to communication, which I agree is just a
> (usually variable-sized) function on the mesh points.