[MPICH] MPI_Finalize preconditions

Rajeev Thakur thakur at mcs.anl.gov
Thu Jan 26 17:46:59 CST 2006


If you have called MPI_Isend, you need to call MPI_Wait before calling
Finalize.
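
For example, a minimal sketch of that pattern (a hypothetical
two-rank program, not from the original post), where the Isend is
completed with MPI_Wait before MPI_Finalize:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value = 42;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* nonblocking send; must be completed before MPI_Finalize */
            MPI_Isend(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Finalize();   /* safe: no request is still pending */
        return 0;
    }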

Rajeev  

> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov 
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Angel Tsankov
> Sent: Thursday, January 26, 2006 5:03 PM
> To: ML: MPICH-discuss post
> Subject: Re: [MPICH] MPI_Finalize preconditions
> 
> OK, I think I need to clarify my question, too. So, let's assume that 
> the process has received all incoming messages. It has, however, 
> initiated at least one send operation via a call to MPI_Isend or 
> MPI_Send_init, and it has not explicitly completed any of these send 
> operations. Assuming that the program is written correctly, i.e. 
> those send operations have matching receive operations (possibly in 
> other processes), is it safe to call MPI_Finalize right after a call 
> to MPI_Barrier( MPI_COMM_WORLD ) has returned?
> 
> It would also be a good idea if Rob explained what he means by "all 
> sends have completed".
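
A minimal sketch of the scenario described above (a hypothetical
two-rank program, not part of the original thread): the persistent
send created with MPI_Send_init is completed with MPI_Wait and
released with MPI_Request_free before MPI_Finalize; the barrier by
itself does not complete the request.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, value = 7;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* create and start a persistent send request */
            MPI_Send_init(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &req);
            MPI_Start(&req);
            MPI_Wait(&req, MPI_STATUS_IGNORE);  /* completes this use */
            MPI_Request_free(&req);             /* release the request */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Barrier(MPI_COMM_WORLD);  /* does not complete any request */
        MPI_Finalize();
        return 0;
    }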
> 
> > Actually I should clarify that.  If the process has received all 
> > incoming messages *and* all sends have completed, then the process 
> > is ready to call finalize.
> >
> > Rob
> >
> > Rob Ross wrote:
> >> Hi Angel,
> >>
> >> If a given process has received all incoming messages, then that 
> >> process is ready to call finalize.  You don't have to perform the 
> >> barrier operation; it's just extra overhead.
> >>
> >> Regards,
> >>
> >> Rob
> >>
> >> Angel Tsankov wrote:
> >>> The MPI standard says:
> >>> "The user must ensure that all pending communications involving a 
> >>> process completes before the process calls MPI_FINALIZE."
> >>>
> >>> If each nonblocking receive call has a matching wait call, does a 
> >>> call to MPI_Barrier( MPI_COMM_WORLD ) in all processes (and after 
> >>> the wait calls) ensure that all communications involving the 
> >>> processes have completed?



