non collective mpi_allreduce

Mehdi Bostandoost mbostandoust at yahoo.com
Tue Jul 10 22:21:01 CDT 2007


Hi Aron,
  Thanks for your comments.
  I found MPI_IALLREDUCE (http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.pe.doc/pe_linux42/am107l0016.html),
  but it appears to be specific to IBM's MPI implementation. It would be great to have the same thing in MPICH.
   
  

Aron Ahmadia <aja2111 at columbia.edu> wrote:
The term you are looking for is non-blocking; a non-collective reduce
is almost an oxymoron.

And no, non-blocking reduces are not anywhere in the MPI Standard,
maybe one of these days.

Your best bet is to write an implementation yourself using MPI_ISEND
and a tree structure which takes advantage of your network topology.

This isn't exactly a question for PETSc, though; you might have better
luck on the mpich-users mailing list. You could also look in the
literature to see how people are implementing global asynchronous
codes cleverly. Let me know what you find (feel free to respond to
aja2111 at columbia.edu), I'm interested in this sort of work as well.

~A

On 7/10/07, Mehdi Bostandoost wrote:
> Hi
> I am working on optimizing my code. One of the functions I use in my
> code is MPI_ALLREDUCE. I want to overlap this communication with the
> computation part of my code. Is there any way to work around it?
> (It would be great to have something like MPI_IALLREDUCE/MPI_WAIT.)
>
> Mehdi
>
>
>
>
>



       


More information about the petsc-users mailing list