[petsc-dev] MPIX_Iallreduce()
Jed Brown
jedbrown at mcs.anl.gov
Tue Mar 20 00:06:15 CDT 2012
On Sun, Mar 18, 2012 at 11:55, Barry Smith <bsmith at mcs.anl.gov> wrote:
> At a glance, adding all these new complications to PETSc to chase an
> impossible overlap of communication and computation sounds fine :-)
Blue Gene/Q has a dedicated thread to drive asynchronous communication. I've
added this; the call to PetscCommSplitReductionBegin() is entirely optional
(it does not alter program semantics), but it allows asynchronous progress
to be made.
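Here is a minimal sketch of the intended usage with the split-phase dot
product; the vector setup and the overlapped "work" are placeholder
assumptions of mine, not part of the changeset:

#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec            x, y;
  PetscScalar    dot;
  MPI_Comm       comm;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 100, &x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &y);CHKERRQ(ierr);
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);
  ierr = VecSet(y, 2.0);CHKERRQ(ierr);

  ierr = VecDotBegin(x, y, &dot);CHKERRQ(ierr);            /* local part of the dot product */
  ierr = PetscObjectGetComm((PetscObject)x, &comm);CHKERRQ(ierr);
  ierr = PetscCommSplitReductionBegin(comm);CHKERRQ(ierr); /* optional: start the reduction now */
  /* ... unrelated local computation here, overlapped with the reduction ... */
  ierr = VecDotEnd(x, y, &dot);CHKERRQ(ierr);              /* complete the reduction */

  ierr = PetscPrintf(PETSC_COMM_WORLD, "dot = %g\n", (double)PetscRealPart(dot));CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

Omitting the PetscCommSplitReductionBegin() line gives the same answer; it
only changes when the MPI reduction is allowed to start.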
On conventional systems, there are two choices for driving asynchronous
progress:
1. Set the environment variable MPICH_ASYNC_PROGRESS=1. "Setting that
environment variable will cause a cheesy form of background progress
wherein the library will spawn an additional background thread per MPI
process. You'll have to play around with things, but I'd recommend cutting
your number of processes per node in half to avoid nemesis oversubscription
badness." -- Dave Goodell
2. Make nontrivial calls into the MPI stack. This could specifically mean
polling a request, but it could also just be your usual communication; I
suspect that a standard MatMult() and PCApply() will be enough to drive a
significant amount of progress on the reduction (see the sketch after this
list).
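To make option 2 concrete, here is a sketch (my illustration, not from the
changeset) that polls an outstanding nonblocking reduction from inside a
compute loop; MPI-3 spells the call MPI_Iallreduce(), while MPICH2's
pre-standard extension spelled it MPIX_Iallreduce():

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
  double      local = 1.0, sum = 0.0;
  int         flag  = 0;
  MPI_Request req;

  MPI_Init(&argc, &argv);
  /* Start the reduction; it can now run in the background. */
  MPI_Iallreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);
  while (!flag) {
    /* ... a chunk of local computation (e.g. part of a MatMult) ... */
    MPI_Test(&req, &flag, MPI_STATUS_IGNORE); /* nontrivial MPI call: drives progress */
  }
  printf("sum = %f\n", sum);
  MPI_Finalize();
  return 0;
}

For option 1, the same program would be run unchanged with
MPICH_ASYNC_PROGRESS=1 set in the environment, letting the extra helper
thread make progress instead of the explicit MPI_Test() polling.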
http://petsc.cs.iit.edu/petsc/petsc-dev/rev/d2d98894cb5c