[MPICH] Implementation of MPI_Alltoallw
Rajeev Thakur
thakur at mcs.anl.gov
Tue Sep 27 15:55:46 CDT 2005
Sudarshan,
Posting a bunch of nonblocking sends/recvs allows the system to
schedule the communication, which may or may not be better than a
higher-level algorithm explicitly scheduling the communication. There is
a lot of room for experimentation, though.
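To illustrate the two approaches being contrasted here, below is a small sketch (not MPICH source code) comparing them as communication schedules: "naive" posts every nonblocking operation at once and lets the MPI library order them, while an explicit pairwise schedule has rank r exchange with rank (r + i) mod p at step i so that no two ranks target the same partner in the same step. The function names and the particular pairwise pattern are illustrative assumptions, not the actual MPICH algorithm.

```python
# Sketch only: two ways to order an all-to-all exchange among p ranks.
# These functions return, for a given rank, a list of steps, where each
# step is the list of partner ranks contacted in that step.

def naive_schedule(rank, p):
    """Post everything at once: a single step containing every partner;
    the MPI library's progress engine decides the actual ordering."""
    return [list(range(p))]

def pairwise_schedule(rank, p):
    """Explicit schedule: one partner per step. At step i, rank r talks
    to (r + i) % p, so each step is a permutation of the ranks and no
    two ranks contend for the same partner simultaneously."""
    return [[(rank + i) % p] for i in range(p)]
```

With real MPI calls, each step of the pairwise schedule would correspond to an MPI_Isend/MPI_Irecv pair (or MPI_Sendrecv) with that step's partner, waited on before moving to the next step.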
Rajeev
> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of
> Sudarshan Raghunathan
> Sent: Monday, September 26, 2005 6:11 PM
> To: Rajeev Thakur
> Cc: mpich-discuss at mcs.anl.gov
> Subject: Re: [MPICH] Implementation of MPI_Alltoallw
>
> Rajeev,
>
> Thank you for your response. I was under the impression that
> non-blocking sends and receives are most useful when one is
> overlapping communication and computation steps. This does not seem to
> be the case in the MPICH implementation of MPI_Alltoallw, since there
> is no computation per se - all the information is known a priori.
>
> However, I would guess that if one were using custom data types (like
> vectors or subarrays), then the creation of the data types could
> perhaps be overlapped with the communication steps (of course, one
> could then not call MPI_Alltoallw directly, but just post the isends
> and irecvs). I think that gives me something to experiment with :-)
>
> Regards,
> Sudarshan
>
>