I talked to Rob and Tom today. Rob suggested building subcomms for gathers followed by broadcasts, an approach I think will work fine. I'll implement it and report back if there are still performance issues.
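
Concretely, I'm picturing something like the untested sketch below. It assumes P is divisible by S, that the subcomms are formed round-robin (color = rank % S), and that the chunk owned by global rank r becomes slot r % S of piece r / S in the redundant layout, so each "stratum" (the S ranks sharing a local index) collectively owns exactly the chunks of its piece.

#include <mpi.h>
#include <stdlib.h>

/* Untested sketch: every rank contributes one chunk; afterwards each rank holds
   the piece of the redundant layout that its position in its subcomm requires. */
void redundant_allgather(double *chunk, int chunklen, double **piece,
                         int *piecelen, int S, MPI_Comm *subcomm)
{
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  /* Subcomm on which the redundant coarse solve will later run. */
  MPI_Comm_split(MPI_COMM_WORLD, rank % S, rank, subcomm);

  /* Stratum: the S ranks with the same local index across subcomms; they hold
     exactly the chunks that make up this stratum's piece. */
  MPI_Comm stratum;
  MPI_Comm_split(MPI_COMM_WORLD, rank / S, rank, &stratum);

  int ssize;
  MPI_Comm_size(stratum, &ssize);
  int *lens = malloc(ssize * sizeof(int));
  int *offs = malloc(ssize * sizeof(int));
  MPI_Allgather(&chunklen, 1, MPI_INT, lens, 1, MPI_INT, stratum);
  *piecelen = 0;
  for (int i = 0; i < ssize; i++) { offs[i] = *piecelen; *piecelen += lens[i]; }
  *piece = malloc(*piecelen * sizeof(double));

  /* Gather the stratum's chunks to its leader, then broadcast the assembled
     piece back along the stratum. */
  MPI_Gatherv(chunk, chunklen, MPI_DOUBLE, *piece, lens, offs, MPI_DOUBLE, 0, stratum);
  MPI_Bcast(*piece, *piecelen, MPI_DOUBLE, 0, stratum);

  free(lens); free(offs);
  MPI_Comm_free(&stratum);
}

(The Gatherv/Bcast pair is equivalent to a single Allgatherv on the stratum comm; which is faster presumably depends on the implementation.)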

On Mon, Sep 10, 2012 at 1:55 PM, Dave Goodell <goodell@mcs.anl.gov> wrote:
> Point-to-point (#1) is probably your best bet if you need memory scalability. I suspect that #2 would be faster under many existing implementations, but even faster algorithms should exist for a direct solution. I'm not sure where #3 would fall relative to the first two.
>
> You might also be able to express this as a neighborhood collective operation with a graph topology (haven't thought hard about this yet), although those are obviously unoptimized right now. We could look at this as a motivating example for internal MPI optimization techniques.
>
> Tom's DIY library may also implement this pattern in some way. And if it doesn't, you could see if it's a pattern he feels like supporting: http://www.mcs.anl.gov/~tpeterka/software.html
>
> -Dave
>
> On Sep 10, 2012, at 8:24 AM CDT, Jed Brown wrote:
>
>> Given an original communicator of size P and S subcomms, each of size P/S, I effectively want to do an "Allgather" with the result distributed over each of the S subcomms. Some options include:
>>
>> 1. point-to-point
>> 2. Allgather, then each process keeps the part that is most relevant to it (limited by memory)
>> 3. creating subcomms for each stratum in the S subcomms, Gather to the leader, then Bcast along strata
>>
>> Are there better ways to do this? The simpler "Gather to subcomm" operation is also useful.
>>
>> Background: this redundant redistribution is one approach to relieving coarse-grid bottlenecks in multigrid. Making it faster would definitely have a tangible impact on solver performance.
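
For comparison, option 1 (point-to-point) under the same layout assumptions as my sketch above, with equal chunk sizes n for brevity, would be roughly this (also untested):

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Untested sketch of option 1: nonblocking point-to-point within each stratum,
   exchanging chunks directly on the parent communicator. */
void redundant_allgather_p2p(double *chunk, int n, double *piece /* length S*n */, int S)
{
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  int me   = rank % S;   /* my slot within the stratum and the piece */
  int base = rank - me;  /* first rank of my stratum                 */

  MPI_Request *reqs = malloc(2 * (S - 1) * sizeof(MPI_Request));
  int nreq = 0;
  for (int k = 0; k < S; k++) {
    if (k == me) continue;
    /* Receive neighbor k's chunk directly into its slot of the piece. */
    MPI_Irecv(piece + (size_t)k * n, n, MPI_DOUBLE, base + k, 0,
              MPI_COMM_WORLD, &reqs[nreq++]);
    MPI_Isend(chunk, n, MPI_DOUBLE, base + k, 0, MPI_COMM_WORLD, &reqs[nreq++]);
  }
  memcpy(piece + (size_t)me * n, chunk, n * sizeof(double)); /* my own slot */
  MPI_Waitall(nreq, reqs, MPI_STATUSES_IGNORE);
  free(reqs);
}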
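
And the neighborhood-collective route Dave mentions could look roughly like the following, using MPI-3's MPI_Dist_graph_create_adjacent and MPI_Neighbor_allgatherv with the strata as neighborhoods (again untested, equal chunk sizes assumed). Whether an implementation would do anything smarter than point-to-point here is exactly the open question Dave raises.

#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Untested sketch: neighborhood allgather over a distributed graph topology
   whose neighborhoods are the strata (excluding self; my own chunk is copied). */
void redundant_allgather_neighborhood(double *chunk, int n,
                                      double *piece /* length S*n */, int S)
{
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  int me   = rank % S;   /* my slot within the stratum and the piece */
  int base = rank - me;  /* first rank of my stratum                 */

  /* Neighbors: the other S-1 members of my stratum, with displacements that
     drop each received chunk directly into its slot of the piece. */
  int *nbrs   = malloc((S - 1) * sizeof(int));
  int *counts = malloc((S - 1) * sizeof(int));
  int *displs = malloc((S - 1) * sizeof(int));
  for (int k = 0, i = 0; k < S; k++) {
    if (k == me) continue;
    nbrs[i] = base + k; counts[i] = n; displs[i] = k * n; i++;
  }

  MPI_Comm gcomm;
  MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD,
                                 S - 1, nbrs, MPI_UNWEIGHTED,  /* sources      */
                                 S - 1, nbrs, MPI_UNWEIGHTED,  /* destinations */
                                 MPI_INFO_NULL, 0, &gcomm);

  MPI_Neighbor_allgatherv(chunk, n, MPI_DOUBLE, piece, counts, displs,
                          MPI_DOUBLE, gcomm);
  memcpy(piece + (size_t)me * n, chunk, n * sizeof(double)); /* my own slot */

  MPI_Comm_free(&gcomm);
  free(nbrs); free(counts); free(displs);
}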