[mpich-discuss] Why do predefined MPI_Ops function elementwise in MPI_Accumulate, but not in MPI-1 routines?
Jed Brown
jedbrown at mcs.anl.gov
Mon Apr 23 19:53:30 CDT 2012
The standard contains the following statement about MPI_Accumulate, which
can only use predefined operations.
*Each datatype argument must be a predefined datatype or a derived
datatype, where all basic components are of the same predefined datatype.
Both datatype arguments must be constructed from the same predefined
datatype.*
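For concreteness, here is a minimal sketch of the usage the restriction permits (my own illustration; the datatype, function name, and window setup are made up, and window creation is omitted):

#include <mpi.h>

/* A derived datatype whose basic components are all MPI_DOUBLE, so the
 * restriction quoted above is satisfied and the predefined MPI_SUM
 * applies elementwise to the doubles. */
void accumulate_block(double *local, MPI_Win win, int target)
{
    MPI_Datatype block;
    MPI_Type_contiguous(4, MPI_DOUBLE, &block);
    MPI_Type_commit(&block);

    MPI_Win_lock(MPI_LOCK_SHARED, target, 0, win);
    MPI_Accumulate(local, 1, block, target, 0, 1, block, MPI_SUM, win);
    MPI_Win_unlock(target, win);

    MPI_Type_free(&block);
}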
Under this restriction, MPI_SUM can be applied to such derived datatypes in
MPI_Accumulate, but the same combination is invalid with MPI_Allreduce and
similar collectives. This leads to an ugly interface in which we
1. MUST create a new MPI_Op to use when calling MPI_Allreduce with this
datatype, yet
2. must NOT use that MPI_Op with MPI_Accumulate, which accepts only the
predefined operation (see the sketch after this list).
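With MPI_Allreduce, the very same datatype forces a user-defined op. A sketch of what we are obliged to write (again with illustrative names; the hard-coded 4 matches the contiguous datatype from the previous sketch):

#include <mpi.h>

/* User function that sums doubles elementwise, duplicating what MPI_SUM
 * already does in MPI_Accumulate for this datatype. */
static void sum_doubles(void *in, void *inout, int *len, MPI_Datatype *dtype)
{
    double *a = (double *)in, *b = (double *)inout;
    for (int i = 0; i < 4 * (*len); i++) b[i] += a[i];
}

void allreduce_block(double *local, double *global, MPI_Comm comm)
{
    MPI_Datatype block;
    MPI_Type_contiguous(4, MPI_DOUBLE, &block);
    MPI_Type_commit(&block);

    MPI_Op sum_op;
    MPI_Op_create(sum_doubles, 1 /* commutative */, &sum_op);

    /* MPI_Allreduce(local, global, 1, block, MPI_SUM, comm) would be
     * erroneous here, even though MPI_Accumulate accepts MPI_SUM with
     * the same datatype. */
    MPI_Allreduce(local, global, 1, block, sum_op, comm);

    MPI_Op_free(&sum_op);
    MPI_Type_free(&block);
}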
This makes for awkward translation if we have an API that may be implemented
either in terms of one-sided operations or in terms of collective reductions
(e.g. if we do communication graph analysis and choose an implementation based
on sparsity or the network), as the sketch below illustrates.
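That is, the same logical "sum" must map to a different MPI_Op depending on which path is chosen at run time (a hypothetical helper, building on the sketches above):

/* Hypothetical: select the op for a logical elementwise sum.  The
 * one-sided path must use the predefined MPI_SUM; the collective path
 * must use the user-defined op created above. */
static MPI_Op sum_op_for_path(int use_onesided, MPI_Op user_sum_op)
{
    return use_onesided ? MPI_SUM : user_sum_op;
}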
Is there any way to circumvent this annoyance?
What was the rationale for not having MPI_SUM work with MPI_Allreduce and
company when the conditions specified for MPI_Accumulate are met?