[mpich-discuss] Why do predefined MPI_Ops function elementwise in MPI_Accumulate, but not in MPI-1 routines?
Jeff Hammond
jhammond at alcf.anl.gov
Tue Apr 24 14:55:13 CDT 2012
Yeah, in most cases, I can figure out what the answer _should_ be, but
that doesn't mean the MPI Forum will agree with me 100% of the time.
What I'm really saying here is that it is ambiguous and therefore
unhelpful that the standard makes a strong prescription of use for an
insufficiently specific list of functions.
Jeff
On Tue, Apr 24, 2012 at 2:48 PM, Jim Dinan <dinan at mcs.anl.gov> wrote:
> Jeff,
>
> My understanding is that those are communication "functions". Calling them
> can result in a collective operation -- Barrier(MPI_COMM_WORLD) -- or a
> local operation -- Barrier(MPI_COMM_NULL).
>
> ~Jim.
>
>
> On 4/24/12 1:29 PM, Jeff Hammond wrote:
>>
>> Yes, I saw "No MPI communication function may be called inside the
>> user function," but found this definition ambiguous and therefore
>> decided to ignore it. What does this statement actually mean? There
>> is no list of MPI operations that qualify as communication functions
>> and those that do not. The standard describes operations as collective,
>> local, etc.
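>>
>> For concreteness, the sentence in question is the restriction on the
>> user function passed to MPI_Op_create.  A minimal sketch of where it
>> applies -- the function names and the use of doubles are only
>> illustrative, not anything the standard prescribes:
>>
>> #include <mpi.h>
>>
>> /* User combining function given to MPI_Op_create.  "No MPI
>>    communication function may be called inside the user function"
>>    refers to calls made from inside a function like this one. */
>> static void my_sum(void *in, void *inout, int *len, MPI_Datatype *type)
>> {
>>     double *a = (double *)in, *b = (double *)inout;
>>     for (int i = 0; i < *len; i++) b[i] += a[i];
>> }
>>
>> void reduce_with_user_op(double *sendbuf, double *recvbuf, int count,
>>                          MPI_Comm comm)
>> {
>>     MPI_Op op;
>>     MPI_Op_create(my_sum, /* commutative = */ 1, &op);
>>     MPI_Reduce(sendbuf, recvbuf, count, MPI_DOUBLE, op, 0, comm);
>>     MPI_Op_free(&op);
>> }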
>>
>> Is MPI_Win_lock a communication function? If yes, what does it
>> communicate? It is only supposed to define the start of an epoch, not
>> actually move data. If no, why can it block on remote activity (for
>> an exclusive lock)?
>>
>> Is MPI_Barrier(MPI_COMM_SELF) a communication function?
>> MPI_Reduce_local? MPI_Cancel?
>>
>> Maybe someone can point me to the table in the MPI standard where all
>> 500+ functions are categorized as communication ones or not.
>>
>> Jeff
>>
>> On Mon, Apr 23, 2012 at 9:02 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>>>
>>> On Mon, Apr 23, 2012 at 20:54, Jeff Hammond <jhammond at alcf.anl.gov>
>>> wrote:
>>>>
>>>>
>>>> If you implement your user-defined reduction using MPI_Accumulate
>>>> locally, does that not solve the problem? In this case, whatever you
>>>> can do for MPI_Accumulate is valid for MPI_Reduce, right?
>>>>
>>>> my_reduction(..)
>>>> {
>>>> MPI_Win_create(MPI_COMM_SELF,..);
>>>> MPI_Accumulate(rank=me,..);
>>>> MPI_Win_free(MPI_COMM_SELF,..);
>>>> }
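>>>>
>>>> Spelled out a bit more -- still only a sketch, with MPI_SUM standing
>>>> in for whichever predefined op is wanted, no error checking, and with
>>>> the lock/unlock access epoch that the pseudo-code above glosses over:
>>>>
>>>> void my_reduction(void *invec, void *inoutvec, int *len,
>>>>                   MPI_Datatype *type)
>>>> {
>>>>     MPI_Win win;
>>>>     int typesize;
>>>>     MPI_Type_size(*type, &typesize);
>>>>     /* Expose the accumulation buffer only to this process. */
>>>>     MPI_Win_create(inoutvec, (MPI_Aint)(*len) * typesize, typesize,
>>>>                    MPI_INFO_NULL, MPI_COMM_SELF, &win);
>>>>     MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
>>>>     /* inoutvec[i] op= invec[i], applied elementwise by the library. */
>>>>     MPI_Accumulate(invec, *len, *type, 0, 0, *len, *type, MPI_SUM, win);
>>>>     MPI_Win_unlock(0, win);
>>>>     MPI_Win_free(&win);
>>>> }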
>>>
>>>
>>>
>>> "No MPI communication function may be called inside the user function."
>>>
>>> I think that means that these are not allowed, even when performed on
>>> MPI_COMM_SELF.
>>>
>>> Note that it's not the "hard" implementation that I'm complaining about,
>>> it's the need to translate between two different MPI_Ops that do exactly
>>> the same thing, but one that cannot be used with collectives and one that
>>> cannot be used with one-sided.
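>>>
>>> A compact sketch of that duplication -- the 3-double datatype and the
>>> names are illustrative, and the one-sided call assumes an access epoch
>>> is already open on the window:
>>>
>>> /* User function that adds blocks of 3 doubles; needed only for the
>>>    collective path. */
>>> static void sum3_fn(void *in, void *inout, int *len, MPI_Datatype *type)
>>> {
>>>     double *a = (double *)in, *b = (double *)inout;
>>>     for (int i = 0; i < 3 * (*len); i++) b[i] += a[i];
>>> }
>>>
>>> void both_paths(double local[3], double result[3], int target,
>>>                 MPI_Win win, MPI_Comm comm)
>>> {
>>>     MPI_Datatype vec3;
>>>     MPI_Type_contiguous(3, MPI_DOUBLE, &vec3);
>>>     MPI_Type_commit(&vec3);
>>>
>>>     /* One-sided: the predefined MPI_SUM applies elementwise to vec3. */
>>>     MPI_Accumulate(local, 1, vec3, target, 0, 1, vec3, MPI_SUM, win);
>>>
>>>     /* Collective: MPI_SUM is not defined for vec3, so the identical
>>>        operation has to be re-expressed as a user-defined MPI_Op. */
>>>     MPI_Op sum3;
>>>     MPI_Op_create(sum3_fn, 1, &sum3);
>>>     MPI_Reduce(local, result, 1, vec3, sum3, 0, comm);
>>>     MPI_Op_free(&sum3);
>>>     MPI_Type_free(&vec3);
>>> }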
>>>
>>>
>>
>>
>>
--
Jeff Hammond
Argonne Leadership Computing Facility
University of Chicago Computation Institute
jhammond at alcf.anl.gov / (630) 252-5381
http://www.linkedin.com/in/jeffhammond
https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond (in-progress)
https://wiki.alcf.anl.gov/old/index.php/User:Jhammond (deprecated)
https://wiki-old.alcf.anl.gov/index.php/User:Jhammond (deprecated)