[mpich-discuss] MPI_Put completed
Pavan Balaji
balaji at mcs.anl.gov
Fri Jul 6 12:35:19 CDT 2012
Or you can use MPI_Send/Recv for the notification.
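
A minimal sketch of that put-then-notify pattern as a complete program
(ranks, buffer sizes, and names are illustrative; run with at least two
processes):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        double winbuf[4] = {0};          /* target-side window memory */
        int rank;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Win_create(winbuf, 4 * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        if (rank == 0) {                 /* origin */
            double data[4] = {1.0, 2.0, 3.0, 4.0};
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
            MPI_Put(data, 4, MPI_DOUBLE, 1, 0, 4, MPI_DOUBLE, win);
            MPI_Win_unlock(1, win);      /* put is complete at the target */
            /* zero-byte message tells the target the data has arrived */
            MPI_Send(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {          /* target */
            MPI_Recv(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            /* lock our own window so the local read is well defined */
            MPI_Win_lock(MPI_LOCK_SHARED, 1, 0, win);
            printf("target sees %g %g %g %g\n",
                   winbuf[0], winbuf[1], winbuf[2], winbuf[3]);
            MPI_Win_unlock(1, win);
        }

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }
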
-- Pavan
On 07/06/2012 12:28 PM, Jeff Hammond wrote:
> Yep, that's how I would do it :-)
>
> Jeff
>
> On Fri, Jul 6, 2012 at 12:27 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
>> The accumulate would have to be done in a separate lock-unlock epoch; otherwise it could complete before the put.
>>
>> Rajeev
>>
>> On Jul 6, 2012, at 12:23 PM, Jeff Hammond wrote:
>>
>>> In that case, I'd use MPI_Accumulate to increment a counter at the
>>> target and have the target poll on it.
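>>>
>>> A minimal sketch of that pattern, with the put and the accumulate in
>>> separate epochs per Rajeev's point above (names are illustrative, and
>>> the int counter "flag" is assumed to be exposed through its own window
>>> "flagwin", which keeps the displacement arithmetic simple):
>>>
>>>     /* origin (rank 0): data epoch first, then the flag increment */
>>>     int one = 1;
>>>     MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win);
>>>     MPI_Put(data, n, MPI_DOUBLE, 1, 0, n, MPI_DOUBLE, win);
>>>     MPI_Win_unlock(1, win);             /* put completes here */
>>>     MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, flagwin);
>>>     MPI_Accumulate(&one, 1, MPI_INT, 1, 0, 1, MPI_INT, MPI_SUM, flagwin);
>>>     MPI_Win_unlock(1, flagwin);
>>>
>>>     /* target (rank 1): poll the counter, reading it inside its own
>>>        lock-unlock epochs on itself so the read is well defined */
>>>     int done = 0;
>>>     while (!done) {
>>>         MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, flagwin);
>>>         done = flag;                    /* flag is the local window memory */
>>>         MPI_Win_unlock(1, flagwin);
>>>     }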
>>>
>>> Jeff
>>>
>>> On Fri, Jul 6, 2012 at 12:16 PM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
>>>> But the target doesn't know that the MPI_Win_unlock has completed on the origin and that the data is ready to be accessed. You need some other way to indicate that.
>>>>
>>>> In the other two synchronization methods (fence and post-start-complete-wait), the target calls a synchronization function and hence knows when the data transfer is complete.
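>>>>
>>>> With fence, for instance (a minimal sketch; every process in the
>>>> window's group makes the same calls):
>>>>
>>>>     MPI_Win_fence(0, win);
>>>>     if (rank == 0)   /* origin */
>>>>         MPI_Put(data, n, MPI_DOUBLE, 1, 0, n, MPI_DOUBLE, win);
>>>>     MPI_Win_fence(0, win);
>>>>     /* after the second fence returns, the target (rank 1) knows the
>>>>        put has completed and may read its window memory directly */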
>>>>
>>>> Rajeev
>>>>
>>>>
>>>>
>>>> On Jul 6, 2012, at 12:09 PM, Jeff Hammond wrote:
>>>>
>>>>> All MPI RMA synchronization operations provide end-to-end completion,
>>>>> rather than the local completion you might find in ARMCI or SHMEM. If
>>>>> you do MPI_Put and then, e.g., MPI_Win_unlock, the data is in place and
>>>>> globally visible.
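>>>>>
>>>>> For example (a minimal sketch of the origin side; target, buf, and n
>>>>> are illustrative names):
>>>>>
>>>>>     MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target, 0, win);
>>>>>     MPI_Put(buf, n, MPI_DOUBLE, target, 0, n, MPI_DOUBLE, win);
>>>>>     MPI_Win_unlock(target, win);
>>>>>     /* once MPI_Win_unlock returns, the data is in the target window
>>>>>        and globally visible -- though only the origin knows it yet */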
>>>>>
>>>>> jeff
>>>>>
>>>>> On Fri, Jul 6, 2012 at 12:05 PM, Jie Chen <jiechen at mcs.anl.gov> wrote:
>>>>>> For one-sided operations such as MPI_Put (based on the lock/unlock mechanism), is there a way for the target process to tell whether all the data from the origin process has been written into the target's window? In other words, how does the target process know when the put operation has completed?
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Jie Chen
>>>>>> Mathematics and Computer Science Division
>>>>>> Argonne National Laboratory
>>>>>> Address: 9700 S Cass Ave, Bldg 240, Lemont, IL 60439
>>>>>> Phone: (630) 252-3313
>>>>>> Email: jiechen at mcs.anl.gov
>>>>>> Homepage: http://www.mcs.anl.gov/~jiechen
>>>
>>>
>>>
>>> --
>>> Jeff Hammond
>>> Argonne Leadership Computing Facility
>>> University of Chicago Computation Institute
>>> jhammond at alcf.anl.gov / (630) 252-5381
>>> http://www.linkedin.com/in/jeffhammond
>>> https://wiki.alcf.anl.gov/parts/index.php/User:Jhammond
>>
>
>
>
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji