[mpich-discuss] Message ordering with mpich

James Dinan dinan at mcs.anl.gov
Tue Mar 8 22:18:16 CST 2011


Hi James,

Apologies for the off-the-cuff response.  Yes, this is still a partial 
ordering, since concurrent operations will not be ordered, but it is the 
strictest ordering you can get from MPI (i.e. it is FIFO across tags and 
communicators).  Using synchronous send essentially switches off any 
buffering that MPI might do to let regular send operations complete out 
of order.  I was also suggesting blocking receives with no wildcards, so 
in the example you gave, P3 would order P0's and P1's messages based on 
the order of its Recv calls.
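
For concreteness, here is a rough sketch of what I mean (the ranks, tag, 
and printfs are purely illustrative, and it assumes you run with at 
least four processes, e.g. mpiexec -n 4):

/* Sketch: P3 fixes the order of P0's and P1's messages purely by the
 * order of its blocking receives.  MPI_Ssend does not complete until
 * the matching receive has started, so neither sender can run ahead
 * into library buffering. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0 || rank == 1) {
        value = rank;
        MPI_Ssend(&value, 1, MPI_INT, 3, 0, MPI_COMM_WORLD);
    } else if (rank == 3) {
        /* Explicit sources, no MPI_ANY_SOURCE: P0's message is always
         * processed before P1's, no matter who sent first. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("got %d from P0\n", value);
        MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("got %d from P1\n", value);
    }

    MPI_Finalize();
    return 0;
}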

If you want to implement logical clocks, you could write send/recv 
wrappers that piggyback timestamps on the messages the application 
sends.  MPI datatypes are helpful for this: you can create a datatype 
that describes the timestamp header plus the payload, rather than 
packing everything into a buffer of MPI_BYTEs.  My recollection of the 
stronger orderings is a bit foggy, but I'm happy to discuss whether 
anything else in MPI might be helpful in your work.
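
A rough sketch of what such a wrapper could look like (the names 
ts_send and ts_recv, the global counter, and the use of a long for the 
timestamp are all made up for illustration; this is not a standard 
interface):

/* Piggyback a Lamport timestamp on each message by describing
 * "header + payload" with one derived datatype instead of copying
 * everything into a staging buffer. */
#include <mpi.h>

static long lamport_clock = 0;        /* one logical clock per process */

int ts_send(const void *buf, int count, MPI_Datatype dtype,
            int dest, int tag, MPI_Comm comm)
{
    MPI_Datatype msg_type;
    int          blens[2] = { 1, count };
    MPI_Datatype types[2] = { MPI_LONG, dtype };
    MPI_Aint     displs[2];

    lamport_clock++;                  /* tick before sending */

    /* Absolute addresses plus MPI_BOTTOM let the header and the payload
     * live in separate buffers but travel in a single message. */
    MPI_Get_address(&lamport_clock, &displs[0]);
    MPI_Get_address((void *) buf, &displs[1]);
    MPI_Type_create_struct(2, blens, displs, types, &msg_type);
    MPI_Type_commit(&msg_type);

    int err = MPI_Ssend(MPI_BOTTOM, 1, msg_type, dest, tag, comm);

    MPI_Type_free(&msg_type);
    return err;
}

int ts_recv(void *buf, int count, MPI_Datatype dtype,
            int src, int tag, MPI_Comm comm, long *sender_clock)
{
    MPI_Datatype msg_type;
    int          blens[2] = { 1, count };
    MPI_Datatype types[2] = { MPI_LONG, dtype };
    MPI_Aint     displs[2];

    MPI_Get_address(sender_clock, &displs[0]);
    MPI_Get_address(buf, &displs[1]);
    MPI_Type_create_struct(2, blens, displs, types, &msg_type);
    MPI_Type_commit(&msg_type);

    int err = MPI_Recv(MPI_BOTTOM, 1, msg_type, src, tag, comm,
                       MPI_STATUS_IGNORE);
    MPI_Type_free(&msg_type);

    /* Lamport rule: jump the local clock past the sender's timestamp. */
    if (*sender_clock > lamport_clock)
        lamport_clock = *sender_clock;
    lamport_clock++;
    return err;
}

Both sides have to agree on where the timestamp sits, of course, and a 
real implementation would also need to cover the nonblocking and 
collective calls if the application uses them.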

Best,
  ~Jim.

On 3/8/11 5:55 PM, James Edmondson wrote:
> Does synchronous MPI_Ssend/MPI_Recv really accomplish this? I would
> think that a custom solution would be required for a global ordering,
> one that actually maintains Lamport clocks and breaks ties with
> process identifiers, or something similar - like a sequencer, which is
> less efficient because it requires a bottleneck akin to a token
> authority for the next entity in the global total order.
>
> Similarly, causal ordering could be supported with vectorized data
> types to keep track of the causal ordering (current receipt clocks
> from P0, P1, etc.). If you have some links to where the standard
> supports global ordering via MPI_Ssend/MPI_Recv or how this might be
> accomplished without a custom solution, would you mind sharing some
> links? Or are you saying that MPI_Ssend ensures FIFO channels between
> each entity? Because the latter doesn't imply any type of total
> ordering (e.g. if P0 sent a message before P1 sent a message, then P3
> receives and processes P0's message before P1's.)
>
> Thanks,
> James Edmondson
> Vanderbilt University ISIS
>
>
> On Tue, Mar 8, 2011 at 4:35 PM, James Dinan <dinan at mcs.anl.gov> wrote:
>> I think you should be able to achieve this with synchronous
>> MPI_Ssend/MPI_Recv.
>>
>>   ~Jim.
>>
>> On 3/8/11 8:56 AM, h banki wrote:
>>>
>>> Hi,
>>>
>>> I want to send messages with total ordering between processes. How can I
>>> do this? Is there any special code?
>>>
>>>
>>>
>>> --- On Tue, 3/8/11, Darius Buntinas <buntinas at mcs.anl.gov> wrote:
>>>
>>>
>>>     From: Darius Buntinas <buntinas at mcs.anl.gov>
>>>     Subject: Re: [mpich-discuss] Message ordering with mpich
>>>     To: mpich-discuss at mcs.anl.gov
>>>     Date: Tuesday, March 8, 2011, 8:43 AM
>>>
>>>
>>>     In MPI, all messages between two processes sent on the same
>>>     communicator are FIFO ordered.
>>>
>>>     -d
>>>
>>>     On Mar 8, 2011, at 8:30 AM, h banki wrote:
>>>
>>>      >  Hello,
>>>      >
>>>      >  I want to know whether there is any way to implement message
>>>      >  ordering (FIFO, Total, Causal) with mpich.
>>>      >  Is there any document about this issue?
>>>      >
>>>      >  Regards,
>>>      >  Banki
>>>      >
>>>      >  _______________________________________________
>>>      >  mpich-discuss mailing list
>>>      >  mpich-discuss at mcs.anl.gov
>>>      >  https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>>>
>>>     _______________________________________________
>>>     mpich-discuss mailing list
>>>     mpich-discuss at mcs.anl.gov
>>>     https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> mpich-discuss mailing list
>>> mpich-discuss at mcs.anl.gov
>>> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>>
>> _______________________________________________
>> mpich-discuss mailing list
>> mpich-discuss at mcs.anl.gov
>> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>>


