[mpich-discuss] Why do predefined MPI_Ops function elementwise in MPI_Accumulate, but not in MPI-1 routines?

Jim Dinan dinan at mcs.anl.gov
Tue Apr 24 14:48:36 CDT 2012


Jeff,

My understanding is that those are communication "functions".  Calling 
them can result in a collective operation -- Barrier(MPI_COMM_WORLD) -- 
or a local operation -- Barrier(MPI_COMM_NULL).
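
For instance, a rough (untested) sketch of the distinction, using
MPI_COMM_SELF as a concrete "local" case:

#include <mpi.h>

void barrier_examples(void)
{
    MPI_Barrier(MPI_COMM_WORLD); /* collective: synchronizes every rank */
    MPI_Barrier(MPI_COMM_SELF);  /* involves only the calling process   */
}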

  ~Jim.

On 4/24/12 1:29 PM, Jeff Hammond wrote:
> Yes, I saw "No MPI communication function may be called inside the
> user function," but found this definition ambiguous and therefore
> decided to ignore it.  What does this statement actually mean?  There
> is no list of MPI operations that qualify as communication functions
> and those that do not.  The standard describes operations as collective,
> local, etc.
>
> Is MPI_Win_lock a communication function?  If yes, what does it
> communicate?  It is only supposed to define the start of an epoch, not
> actually move data.  If no, why can it block on remote activity (for
> an exclusive lock)?
>
> Is MPI_Barrier(MPI_COMM_SELF) a communication function?
> MPI_Reduce_local?  MPI_Cancel?
>
> Maybe someone can point me to the table in the MPI standard where all
> 500+ functions are categorized as communication ones or not.
>
> Jeff
>
> On Mon, Apr 23, 2012 at 9:02 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> On Mon, Apr 23, 2012 at 20:54, Jeff Hammond <jhammond at alcf.anl.gov> wrote:
>>>
>>> If you implement your user-defined reduction using MPI_Accumulate
>>> locally, does that not solve the problem?  In this case, whatever you
>>> can do for MPI_Accumulate is valid for MPI_Reduce, right?
>>>
>>> my_reduction(..)
>>> {
>>> MPI_Win_create(.., MPI_COMM_SELF, &win);
>>> MPI_Accumulate(.., rank=me, .., win);
>>> MPI_Win_free(&win);
>>> }
>>
>>
>> "No MPI communication function may be called inside the user function."
>>
>> I think that means that these are not allowed, even when performed on
>> MPI_COMM_SELF.
>>
>> Note that it's not the "hard" implementation that I'm complaining about,
>> it's the need to translate between two different MPI_Ops that do exactly the
>> same thing, but one that cannot be used with collectives and one that cannot
>> be used with one-sided.
>>
>>
>
>
>
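
(For concreteness: a fleshed-out, untested version of the self-window
sketch quoted above might look like the code below.  Whether these calls
are permitted inside a user-defined op is exactly what's being debated
above; the function signature, element type, and MPI_SUM are placeholders,
not the real MPI_User_function prototype.)

#include <mpi.h>

/* Untested sketch only: apply a predefined op (MPI_SUM) elementwise to a
 * local buffer through a window on MPI_COMM_SELF. */
void my_reduction(double *invec, double *inoutvec, int count)
{
    MPI_Win win;

    /* Expose the destination buffer through a self-only window. */
    MPI_Win_create(inoutvec, count * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_SELF, &win);

    /* In MPI_COMM_SELF the calling process is rank 0. */
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
    MPI_Accumulate(invec, count, MPI_DOUBLE,
                   0, 0, count, MPI_DOUBLE, MPI_SUM, win);
    MPI_Win_unlock(0, win);

    MPI_Win_free(&win);
}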

