[mpich-discuss] Best way to do "Allgather to subcommunicator"?

Jed Brown jedbrown at mcs.anl.gov
Mon Sep 10 23:42:39 CDT 2012


I talked to Rob and Tom today. Rob suggested building subcommunicators and
doing gathers followed by broadcasts, which I think will work fine. I'll
implement it and report back if I still see performance issues.

On Mon, Sep 10, 2012 at 1:55 PM, Dave Goodell <goodell at mcs.anl.gov> wrote:

> Point-to-point (#1) is probably your best bet if you need memory
> scalability.  I suspect that #2 would be faster under many existing
> implementations, but even faster algorithms should exist for a direct
> solution.  I'm not sure where #3 would fall relative to the first two.
>
> You might also be able to express this as a neighborhood collective
> operation with a graph topology (haven't thought hard about this yet),
> although those are obviously unoptimized right now.  We could look at this
> as a motivating example for internal MPI optimization techniques.
>
> Tom's DIY library may also implement this pattern in some way.  And if it
> doesn't, you could see if it's a pattern he feels like supporting:
> http://www.mcs.anl.gov/~tpeterka/software.html
>
> -Dave
>
> On Sep 10, 2012, at 8:24 AM CDT, Jed Brown wrote:
>
> > Given an original communicator of size P and S subcomms each of size
> P/S, I effectively want to do an "Allgather" with the result distributed
> over each of the S subcomms. Some options include
> >
> > 1. point-to-point
> > 2. Allgather, then each process keeps the part that is most relevant to
> them (limited by memory)
> > 3. Creating subcomms for each stratum in the S subcomms, Gather to
> leader, then Bcast along strata
> >
> > Are there better ways to do this? The simpler "Gather to subcomm"
> operation is also useful.
> >
> > Background: this redundant redistribution is one approach to relieving
> coarse grid bottlenecks in multigrid. Making it faster would definitely
> have a tangible impact on solver performance.
> > _______________________________________________
> > mpich-discuss mailing list     mpich-discuss at mcs.anl.gov
> > To manage subscription options or unsubscribe:
> > https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>

