[mpich-discuss] Another question on Bcast - Send - Recv

Hiatt, Dave M dave.m.hiatt at citi.com
Thu Aug 27 13:32:49 CDT 2009


Thanks, just wondering.  I'll solve the dilemma by having a final "close down" broadcast and waiting for that.  Then everything is sure to be idle on all nodes and in all threads.
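
A minimal sketch of that close-down pattern (the WORK/SHUTDOWN codes and the fixed work count are illustrative assumptions, not anything MPI prescribes); note that, as Rajeev points out in the quoted thread, the broadcast is received by calling Bcast on every rank, never by Recv:

#include <mpi.h>
#include <cstdio>

static const int WORK     = 1;   // hypothetical "more work follows" code
static const int SHUTDOWN = 2;   // hypothetical "close down" code

int main(int argc, char* argv[])
{
    MPI::Init(argc, argv);
    const int rank = MPI::COMM_WORLD.Get_rank();
    const int root = 0;

    // Every rank, root included, calls Bcast for every broadcast, so the
    // final SHUTDOWN broadcast is the last collective in flight.  Once the
    // loop has exited on all ranks, everything is idle and Finalize is safe.
    int remaining = 3;           // only meaningful on the root
    while (true) {
        int code = (rank == root) ? (remaining > 0 ? WORK : SHUTDOWN) : 0;
        MPI::COMM_WORLD.Bcast(&code, 1, MPI::INT, root);
        if (code == SHUTDOWN)
            break;               // all ranks have seen the close-down broadcast
        if (rank == root)
            --remaining;
        std::printf("rank %d handled a WORK broadcast\n", rank);
    }

    MPI::Finalize();
    return 0;
}

Build with mpicxx and run with mpiexec -n <N>; in the threaded design discussed below, this loop would live on the Bcast monitor thread, and the main thread would join that thread before calling MPI::Finalize.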

-----Original Message-----
From: mpich-discuss-bounces at mcs.anl.gov
[mailto:mpich-discuss-bounces at mcs.anl.gov]On Behalf Of Dorian Krause
Sent: Thursday, August 27, 2009 1:20 PM
To: mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] Another question on Bcast - Send - Recv


Hi Dave,

Just to be sure: you have (let's say) two threads per MPI process, and
thread 0 posts an MPI_Bcast. Now, thread 1 decides to terminate the
program by calling MPI_Finalize before the collective broadcast
completes. Is this correct?

The MPI standard says that "any non-blocking communications are
(locally) complete", but it doesn't say anything about collectives. My
feeling is that the paragraph on MPI_Finalize has not been updated to
cover threaded MPI ...

Bottom line: I don't have a clue ... Sorry.

Dorian

Hiatt, Dave M wrote:
> If, say, my Bcast monitor thread is waiting on a receive, and based on other conditions I determine that no more messages will be broadcast, will MPI::Finalize clean up that outstanding Bcast?  If not, what is the accepted way of terminating it: terminate the thread?  Or do I have to create a semantic to Bcast out that says "End"?  What's typically done; is there a "best practice"?
>
> -----Original Message-----
> From: mpich-discuss-bounces at mcs.anl.gov
> [mailto:mpich-discuss-bounces at mcs.anl.gov]On Behalf Of Dorian Krause
> Sent: Wednesday, August 26, 2009 4:50 PM
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] A bit of clarification on Bcast - Send -
> Recv
>
>
> Hiatt, Dave M wrote:
>
>> Thanks, yes, I took it that way initially, but then I guess I read it too closely.
>>
>> So if Bcasts and Recvs are intermingled, is MPICH2 completely thread safe, so that I can have both a Bcast and a Recv outstanding on two different threads for the same process (rank)?  Or do I need to make communicators for the Recvs that are different from the ones for the Bcasts?  What's the best approach?
>>
>>
>
> The MPI standard guarantees that point-to-point communication and collective
> calls do not interfere (p. 131). With a standard-conforming
> implementation you can use the same communicator (see the sketch after the
> quoted thread below).
>
>
>> -----Original Message-----
>> From: mpich-discuss-bounces at mcs.anl.gov
>> [mailto:mpich-discuss-bounces at mcs.anl.gov]On Behalf Of Rajeev Thakur
>> Sent: Wednesday, August 26, 2009 4:34 PM
>> To: mpich-discuss at mcs.anl.gov
>> Subject: Re: [mpich-discuss] A bit of clarification on Bcast - Send -
>> Recv
>>
>>
>> No, you cannot match a broadcast with a receive. Note that the text you
>> mention in Using MPI comes under the section "Common Errors and
>> Misunderstandings", one of which is "Matching MPI_Bcast with MPI_Recv."
>>
>> Rajeev
>>
>>
>>
>>> -----Original Message-----
>>> From: mpich-discuss-bounces at mcs.anl.gov
>>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Hiatt, Dave M
>>> Sent: Wednesday, August 26, 2009 4:19 PM
>>> To: mpich-discuss at mcs.anl.gov
>>> Subject: [mpich-discuss] A bit of clarification on Bcast - Send - Recv
>>>
>>> First, I apologize for "if you node"; that should have read "if a
>>> node".  Sorry, I think a lot faster than I can type (thank goodness
>>> for that).
>>>
>>> My question revolves around a statement on p. 66 of Using
>>> MPI, which led me to believe that I could use MPI::Recv to
>>> receive MPI::Bcast messages: "An MPI::Recv does not have to
>>> check whether the message it has just received is part of a
>>> broadcast ...".  But in every attempt I have made to use
>>> MPI::Recv, it never responds to an MPI::Bcast.  I have assumed
>>> that I am making some bonehead mistake, but am now totally
>>> frustrated that I have failed to get this to work.
>>> MPI::Bcast on the receiving node works fine as a receiver,
>>> but MPI::Recv does not for broadcast messages.  I am on 1.0.7
>>> of MPICH2, by the way.
>>>
>>> Thanks again
>>> Dave
>>>
>>>
>>> "Premature optimization is the root of all evil" - Donald Knuth
>>> Dave Hiatt
>>> Manager, Market Risk Systems Integration
>>> CitiMortgage, Inc.
>>> 1000 Technology Dr.
>>> Third Floor East, M.S. 55
>>> O'Fallon, MO 63368-2240
>>>
>>> Phone:  636-261-1408
>>> Mobile: 314-452-9165
>>> FAX:    636-261-1312
>>> Email:     Dave.M.Hiatt at citigroup.com
>>>
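
A footnote on the thread-safety point above (a Bcast and a Recv outstanding on two different threads of the same process, over the same communicator): a minimal sketch, assuming the library actually grants MPI_THREAD_MULTIPLE; the tag value and the pthread scaffolding are illustrative, not taken from anything in the thread above.

#include <mpi.h>
#include <pthread.h>
#include <cstdio>

static const int TAG = 99;       // hypothetical tag for the pt-2-pt traffic

// One thread blocks in the collective: every rank calls Bcast.
void* bcast_thread(void*)
{
    int value = (MPI::COMM_WORLD.Get_rank() == 0) ? 42 : 0;
    MPI::COMM_WORLD.Bcast(&value, 1, MPI::INT, 0);
    std::printf("rank %d: got broadcast value %d\n",
                MPI::COMM_WORLD.Get_rank(), value);
    return 0;
}

int main(int argc, char* argv[])
{
    int provided = MPI::Init_thread(argc, argv, MPI::THREAD_MULTIPLE);
    if (provided < MPI::THREAD_MULTIPLE) {
        std::printf("MPI_THREAD_MULTIPLE not provided\n");
        MPI::Finalize();
        return 1;
    }

    const int rank = MPI::COMM_WORLD.Get_rank();
    const int size = MPI::COMM_WORLD.Get_size();

    // The collective runs on its own thread ...
    pthread_t t;
    pthread_create(&t, 0, bcast_thread, 0);

    // ... while this thread does point-to-point traffic on the *same*
    // communicator; the standard guarantees the two do not interfere.
    if (size > 1) {
        if (rank == 0) {
            int msg = 7;
            MPI::COMM_WORLD.Send(&msg, 1, MPI::INT, 1, TAG);
        } else if (rank == 1) {
            int msg = 0;
            MPI::COMM_WORLD.Recv(&msg, 1, MPI::INT, 0, TAG);
            std::printf("rank 1: received %d via Recv\n", msg);
        }
    }

    pthread_join(t, 0);          // both operations complete before Finalize
    MPI::Finalize();
    return 0;
}

Build with something like mpicxx -pthread; the Recv here can only ever match the Send, while the broadcast is completed by the Bcast call that every rank makes on its own thread.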


