[MPICH] MPI_Finalize preconditions

Rob Ross rross at mcs.anl.gov
Thu Jan 26 17:38:05 CST 2006


Hi Angel,

You need to do a wait on each request corresponding to an isend so that 
those sends have completed locally.  As discussed in MPI-1 section 3.7.3, 
this doesn't guarantee that the receiver has received the data, but it is 
enough to allow the local process to call MPI_Finalize.

Barriers have no impact on point-to-point communication completion, so 
calling the barrier doesn't really help the situation.

Rob

Angel Tsankov wrote:
> OK, I think I need to clarify my question, too. So, let's assume that 
> the process has received all incoming messages. It has, however, 
> initiated at least one send operation via a call to MPI_Isend or 
> MPI_Send_init, without explicitly completing any of 
> these send operations. Assuming that the program is written correctly, 
> i.e. those send operations have matching receive operations (possibly in 
> other processes), is it safe to call MPI_Finalize right after a call to 
> MPI_Barrier( MPI_COMM_WORLD ) has returned?
> 
> It would also help if Rob explained what he means by "all 
> sends have completed".
> 
>> Actually I should clarify that.  If the process has received all 
>> incoming messages *and* all sends have completed, then the process is 
>> ready to call finalize.
>>
>> Rob
>>
>> Rob Ross wrote:
>>> Hi Angel,
>>>
>>> If a given process has received all incoming messages, then that 
>>> process is ready to call finalize.  You don't have to perform the 
>>> barrier operation; it's just extra overhead.
>>>
>>> Regards,
>>>
>>> Rob
>>>
>>> Angel Tsankov wrote:
>>>> The MPI standard says:
>>>> "The user must ensure that all pending communications involving a 
>>>> process completes before the process calls MPI_FINALIZE."
>>>>
>>>> If each nonblocking receive call has a matching wait call, does a 
>>>> call to MPI_Barrier( MPI_COMM_WORLD ) in all processes (and after 
>>>> the wait calls) ensure that all communications involving the 
>>>> processes have completed?
>>>
>>
>>
> 



