[mpich-discuss] Message ordering with mpich

James Edmondson jedmondson at gmail.com
Wed Mar 9 01:18:01 CST 2011


Hi Jim,

I was merely trying to clarify the usage of Ssend as it applies to
causal and total ordering, rather than FIFO. Your statement got me
curious because a friend of mine and I wrote a mock implementation of
the MPI 1.0 specification while attending one of Ralph Butler's
graduate courses years ago. I didn't remember anything in the 1.0 spec,
and I haven't heard of anything in the 2.0 spec, that would allow
MPI_Ssend/MPI_Recv to accomplish total ordering. Then again, I am not
entirely sure that the original poster has formulated his problem
thoroughly enough for anyone to give him specific advice.

Thanks for responding and clarifying nonetheless and sorry for adding
to the general confusion.

@Banki, could you tell us a bit more about what you are specifically
trying to implement? As Pavan mentioned, if you know the total order
that should be expected at each process, you can kind of coax total
order out of your MPI program via matched receives. If, on the other
hand, you expect nondeterministic ordering (your program logic does
asynchronous receives with wildcards) and you need to establish, after
the fact, the total order of the messages that were sent, that may
require a more customized solution. My gut reaction is that you
probably don't need something quite that elaborate, though. A bit more
detail about your problem would be appreciated, if you still
want/need help.
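For concreteness, here is a minimal sketch of the matched-receives idea
(the ranks, tag, and payload below are made up for illustration, not
taken from your code):

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch: rank 2 imposes a fixed processing order on messages from
 * ranks 0 and 1 by posting matched (non-wildcard) blocking receives. */
int main(int argc, char **argv) {
    int rank, msg;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0 || rank == 1) {
        msg = rank;
        /* Ssend completes only once the matching receive has started,
         * so a sender cannot run arbitrarily far ahead of the receiver. */
        MPI_Ssend(&msg, 1, MPI_INT, 2, 0, MPI_COMM_WORLD);
    } else if (rank == 2) {
        /* The order is fixed by the order of these receives: rank 0's
         * message is always processed before rank 1's, regardless of
         * which one arrives first. No MPI_ANY_SOURCE anywhere. */
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("got %d from rank 0\n", msg);
        MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("got %d from rank 1\n", msg);
    }
    MPI_Finalize();
    return 0;
}
```

Run it under e.g. `mpiexec -n 3 ./a.out`. The point is that the receive
order, not the arrival order, determines the processing order.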

Cheers,
James



On Tue, Mar 8, 2011 at 10:18 PM, James Dinan <dinan at mcs.anl.gov> wrote:
> Hi James,
>
> Apologies for the off-the-cuff response.  Yes, this is still a partial
> ordering since concurrent operations will not be ordered, but it is the
> strictest ordering you can get from MPI (i.e. it is FIFO across tags and
> communicators).  Using synchronous send essentially switches off any
> buffering that MPI might do to allow regular send operations to proceed out
> of order.  I was also suggesting to use blocking receive (and assuming no
> wildcards), so in the example you gave, P3 would order P0 and P1's messages
> based on the order of its Recv operations.
>
> If you want to implement clocks, you could write send/recv wrappers that
> piggyback timestamps on messages that the application sends.  MPI datatypes
> are helpful for this: you can create a datatype that holds the timestamp
> header plus the payload, rather than sending raw MPI_BYTEs.  My recollection of
> stronger orderings is a bit foggy.  I'm happy to discuss if anything else in
> MPI might be helpful in your work.
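A sketch of what such a piggybacking wrapper might look like (the
struct, clock variable, and payload size below are hypothetical, not
from any existing code):

```c
#include <mpi.h>
#include <stddef.h>

/* Sketch: a send wrapper that piggybacks a logical-timestamp header on
 * the application payload via a struct datatype. */
typedef struct {
    long   ts;        /* logical timestamp header */
    double data[4];   /* application payload */
} stamped_msg;

static long my_clock = 0;   /* this process's logical clock */

void send_stamped(stamped_msg *m, int dest, int tag, MPI_Comm comm) {
    MPI_Datatype stamped_type;
    int          blocklens[2] = {1, 4};
    MPI_Aint     displs[2];
    MPI_Datatype types[2]     = {MPI_LONG, MPI_DOUBLE};

    /* Describe the struct layout so header and payload travel together. */
    displs[0] = offsetof(stamped_msg, ts);
    displs[1] = offsetof(stamped_msg, data);
    MPI_Type_create_struct(2, blocklens, displs, types, &stamped_type);
    MPI_Type_commit(&stamped_type);

    m->ts = ++my_clock;   /* stamp the message before sending */
    MPI_Ssend(m, 1, stamped_type, dest, tag, comm);

    MPI_Type_free(&stamped_type);
}
```

In real code you would create and commit the datatype once and reuse
it; the receive side uses the same datatype and merges the received
timestamp into its local clock.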
>
> Best,
>  ~Jim.
>
> On 3/8/11 5:55 PM, James Edmondson wrote:
>>
>> Does synchronous MPI_Ssend/MPI_Recv really accomplish this? I would
>> think that a custom solution would be required for a global ordering,
>> one that actually maintains lamport clocks and breaks ties with
>> process identifiers, or something similar - like a sequencer, which is
>> less efficient because it requires a bottleneck akin to a token
>> authority for the next entity in the global total order.
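The clock-plus-process-id tie-breaking mentioned here is easy to sketch
in plain C (the names are illustrative and independent of MPI):

```c
/* Sketch of Lamport clock maintenance with process-id tie-breaking.
 * A (timestamp, pid) pair gives a total order over events. */
typedef struct { unsigned long ts; int pid; } stamp;

/* Advance the local clock for a send event; the result stamps the message. */
unsigned long clock_send(unsigned long *clock) {
    return ++(*clock);
}

/* Merge a received timestamp into the local clock on delivery. */
void clock_recv(unsigned long *clock, unsigned long ts) {
    if (ts > *clock) *clock = ts;
    ++(*clock);
}

/* Total order: compare timestamps, break ties with the process id. */
int stamp_before(stamp a, stamp b) {
    if (a.ts != b.ts) return a.ts < b.ts;
    return a.pid < b.pid;
}
```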
>>
>> Similarly, causal ordering could be supported with vector clocks that
>> keep track of the current receipt counts from P0, P1, etc., carried
>> in the message data. If you have some links to where the standard
>> supports global ordering via MPI_Ssend/MPI_Recv or how this might be
>> accomplished without a custom solution, would you mind sharing some
>> links? Or are you saying that MPI_Ssend ensures FIFO channels between
>> each entity? Because the latter doesn't imply any kind of total
>> ordering (e.g. that if P0 sent a message before P1 did, then P3
>> receives and processes P0's message before P1's).
>>
>> Thanks,
>> James Edmondson
>> Vanderbilt University ISIS
>>
>>
>> On Tue, Mar 8, 2011 at 4:35 PM, James Dinan<dinan at mcs.anl.gov>  wrote:
>>>
>>> I think you should be able to achieve this with synchronous
>>> MPI_Ssend/MPI_Recv.
>>>
>>>  ~Jim.
>>>
>>> On 3/8/11 8:56 AM, h banki wrote:
>>>>
>>>> Hi,
>>>>
>>>> I want to send messages in total ordering between processes. How can I
>>>> do this? Is there any special code?
>>>>
>>>>
>>>>
>>>> --- On Tue, 3/8/11, Darius Buntinas <buntinas at mcs.anl.gov> wrote:
>>>>
>>>>
>>>>    From: Darius Buntinas<buntinas at mcs.anl.gov>
>>>>    Subject: Re: [mpich-discuss] Message ordering with mpich
>>>>    To: mpich-discuss at mcs.anl.gov
>>>>    Date: Tuesday, March 8, 2011, 8:43 AM
>>>>
>>>>
>>>>    In MPI, all messages between two processes sent on the same
>>>>    communicator are FIFO ordered.
>>>>
>>>>    -d
>>>>
>>>>    On Mar 8, 2011, at 8:30 AM, h banki wrote:
>>>>
>>>>     >  Hello,
>>>>     >
>>>>     >  I want to know whether there is any way to implement message
>>>>    ordering (FIFO, Total, Causal) with mpich?
>>>>     >  Is there any document about this issue?
>>>>     >
>>>>     >  Regards,
>>>>     >  Banki
>>>>     >
>>>>     >  _______________________________________________
>>>>     >  mpich-discuss mailing list
>>>>     >
>>>>  mpich-discuss at mcs.anl.gov
>>>>     >  https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>
>

