[mpich-discuss] Why do predefined MPI_Ops function elementwise in MPI_Accumulate, but not in MPI-1 routines?

Jeff Hammond jhammond at alcf.anl.gov
Mon Apr 23 20:40:36 CDT 2012


Well that's just silly.  It would seem that everyone who attends the
MPI Forum regularly is a nincompoop :-)

The standard should be amended since clearly one can implement
MPI_Reduce(MPI_SUM, DerivedDatatype) using
MPI_Win_fence+MPI_Accumulate(MPI_SUM, DerivedDatatype)+MPI_Win_fence.
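
For what it's worth, a minimal sketch of that emulation (assuming dtype
obeys the MPI_Accumulate restriction and recvbuf is zeroed at the root
beforehand; error checking omitted, helper name made up):

#include <mpi.h>

/* Emulate MPI_Reduce(MPI_SUM) on a derived datatype with fence +
 * accumulate.  Only the root exposes its receive buffer; recvbuf at
 * the root must be zero-initialized before the call. */
static void reduce_sum_via_acc(void *sendbuf, void *recvbuf,
                               MPI_Datatype dtype, int root, MPI_Comm comm)
{
    MPI_Aint lb, extent;
    MPI_Win win;
    int rank;

    MPI_Comm_rank(comm, &rank);
    MPI_Type_get_extent(dtype, &lb, &extent);

    /* Non-root ranks expose an empty window. */
    MPI_Win_create(rank == root ? recvbuf : NULL,
                   rank == root ? extent : 0,
                   1, MPI_INFO_NULL, comm, &win);

    MPI_Win_fence(0, win);
    MPI_Accumulate(sendbuf, 1, dtype, root, 0, 1, dtype, MPI_SUM, win);
    MPI_Win_fence(0, win);   /* every rank's contribution is now summed */

    MPI_Win_free(&win);
}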

Jeff

On Mon, Apr 23, 2012 at 8:36 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> MPI_Accumulate allows you to do MPI_SUM on a derived datatype. MPI_Reduce
> does not. It requires you to create your own reduction operation for
> derived datatypes. MPI_Accumulate, on the other hand, does not allow
> user-defined reduction operations at all.
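>
> For the collective side, the user-defined op is roughly this (a sketch
> assuming the derived type is contiguous doubles with no gaps; names are
> made up):
>
> /* Element-wise sum for a derived type built from doubles.  "len" counts
>  * datatype elements; MPI_Type_size recovers the flat double count, which
>  * is only valid because the type is assumed to have no gaps. */
> static void sum_doubles(void *in, void *inout, int *len, MPI_Datatype *dt)
> {
>     int bytes, i, n;
>     MPI_Type_size(*dt, &bytes);
>     n = (bytes / (int)sizeof(double)) * (*len);
>     for (i = 0; i < n; i++)
>         ((double *)inout)[i] += ((double *)in)[i];
> }
>
> /* Usage: */
> MPI_Op sum_op;
> MPI_Op_create(sum_doubles, 1 /* commutative */, &sum_op);
> MPI_Reduce(sendbuf, recvbuf, 1, dtype, sum_op, 0, comm);
> MPI_Op_free(&sum_op);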
>
> Rajeev
>
>
>
> On Apr 23, 2012, at 8:29 PM, Jeff Hammond wrote:
>
>> I was under the impression that the use of MPI_Accumulate was a
>> proper subset of MPI_Reduce, etc.  What part of the standard leads you
>> to believe that restrictions pertaining to MPI_Reduce, etc. are
>> stricter than for MPI_Accumulate?
>>
>> Jeff
>>
>> On Mon, Apr 23, 2012 at 7:53 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>>> The standard contains the following statement about MPI_Accumulate, which
>>> can only use predefined operations.
>>>
>>> Each datatype argument must be a predefined datatype or a derived datatype,
>>> where all basic components are of the same predefined datatype. Both
>>> datatype arguments must be constructed from the same predefined datatype.
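>>>
>>> For instance, a type like the following satisfies the restriction, since
>>> every basic component is the same predefined type (a made-up example, not
>>> text from the standard; nrows/ncols are arbitrary):
>>>
>>> /* A strided "column" of doubles: every basic component is MPI_DOUBLE. */
>>> MPI_Datatype column;
>>> MPI_Type_vector(nrows, 1, ncols, MPI_DOUBLE, &column);
>>> MPI_Type_commit(&column);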
>>>
>>> Under this restriction, MPI_SUM can be applied to derived datatypes with
>>> MPI_Accumulate, but the same combination cannot be used with MPI_Allreduce
>>> and similar routines. This causes an ugly interface in which we
>>>
>>> 1. MUST create a new MPI_Op when calling MPI_Allreduce with this
>>> datatype.
>>>
>>> 2. Must NOT pass a user-defined MPI_Op when calling MPI_Accumulate.
>>>
>>> This causes awkward translation if we have an API that may be implemented
>>> either in terms of one-sided operations or in terms of collective
>>> reductions (e.g. if we do communication graph analysis and choose an
>>> implementation based on sparsity or the network), as sketched below.
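>>>
>>> Concretely, the wrapper ends up with something like this (a rough sketch;
>>> use_one_sided, win, target, and sum_fn are hypothetical):
>>>
>>> if (use_one_sided) {
>>>     /* One-sided path: MUST use the predefined MPI_SUM. */
>>>     MPI_Win_fence(0, win);
>>>     MPI_Accumulate(buf, 1, dtype, target, 0, 1, dtype, MPI_SUM, win);
>>>     MPI_Win_fence(0, win);
>>> } else {
>>>     /* Collective path: MUST create a user-defined op for the same type. */
>>>     MPI_Op user_sum;
>>>     MPI_Op_create(sum_fn, 1, &user_sum);
>>>     MPI_Allreduce(MPI_IN_PLACE, buf, 1, dtype, user_sum, comm);
>>>     MPI_Op_free(&user_sum);
>>> }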
>>>
>>> Is there any way to circumvent this annoyance?
>>>
>>> What was the rationale for not having MPI_SUM work with MPI_Allreduce and
>>> company when the same conditions as specified for MPI_Accumulate are met?
>>>



-- 
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond (in-progress)
https://wiki.alcf.anl.gov/old/index.php/User:Jhammond (deprecated)
https://wiki-old.alcf.anl.gov/index.php/User:Jhammond (deprecated)

