[mpich-discuss] MPI_Wait status array and non-blocking sends
Rajeev Thakur
thakur at mcs.anl.gov
Tue Aug 2 12:33:15 CDT 2011
I didn't read your mail carefully the first time. The status is relevant only for the receive. For a send, the only thing you can do with the returned status is call MPI_Test_cancelled on it. See page 54 of the MPI 2.2 standard.
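
For example, a minimal sketch of the only legal use of a send status (the variable names just follow your test program below):

    LOGICAL :: cancelled
    CALL MPI_ISEND(buf, 1, MPI_INTEGER, 1, 99, MPI_COMM_WORLD, req, ierr)
    CALL MPI_WAIT(req, status, ierr)
    ! For a send, the standard defines only the cancellation flag in the status:
    CALL MPI_TEST_CANCELLED(status, cancelled, ierr)
    IF (.NOT. cancelled) WRITE(*,*) "send completed normally (not cancelled)"
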
Rajeev
On Aug 2, 2011, at 12:08 PM, Helvio Vairinhos wrote:
> Yes, sorry for not having posted the whole test code before. I had done
> what you suggested, and the status still comes out empty. My test code is
> the following:
>
> PROGRAM mpitest
>   IMPLICIT NONE
>   INCLUDE "mpif.h"
>   INTEGER :: ierr, rank, buf, req
>   INTEGER, DIMENSION(MPI_STATUS_SIZE) :: status
>
>   CALL MPI_INIT(ierr)
>   CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
>
>   IF (rank == 0) THEN
>     buf = 23
>     CALL MPI_ISEND(buf, 1, MPI_INTEGER, 1, 99, MPI_COMM_WORLD, req, ierr)
>     CALL MPI_WAIT(req, status, ierr)
>     WRITE(*,*) "MPI_ISEND status:", status
>   ELSE IF (rank == 1) THEN
>     buf = 0
>     CALL MPI_IRECV(buf, 1, MPI_INTEGER, 0, 99, MPI_COMM_WORLD, req, ierr)
>     CALL MPI_WAIT(req, status, ierr)
>     WRITE(*,*) "MPI_IRECV status:", status
>     WRITE(*,*) "Incoming message:", buf
>   ENDIF
>
>   CALL MPI_FINALIZE(ierr)
>
>   STOP
> END PROGRAM mpitest
>
> I compile the code with gfortran, via the mpif90 wrapper from
> MPICH2-1.3.4 (built in the latest Cygwin release):
>
> $ mpif90 mpitest.F90
> $ mpiexec -n 2 ./a.exe
>
> The status is still all zeros for MPI_ISEND, and it looks fine for MPI_IRECV:
>
> MPI_ISEND status: 0 0 0
> 0 0
> MPI_IRECV status: 4 0 0
> 99 0
> Incoming message: 23
>
> Best,
> Helvio.
>
>
> On 02-08-2011 17:38, Rajeev Thakur wrote:
>> Make sure you have declared the status variable as "integer status(MPI_STATUS_SIZE)". If the status is still empty, send us a small test program that demonstrates the error.
>>
>> Rajeev
>>
>>
>> On Aug 2, 2011, at 6:07 AM, Helvio Vairinhos wrote:
>>
>>> Hi,
>>>
>>> I'm new to MPI, and there is something about non-blocking sends/receives
>>> and MPI_WAIT that I don't quite understand, even if only out of
>>> curiosity. I use the MPICH2-1.3.2p1 implementation of MPI for Cygwin.
>>> Consider the following snippet of F90 code, which does a basic
>>> non-blocking send and receive and waits for the completion of each:
>>>
>>> IF (rank == 0) THEN
>>>   buf = 1
>>>   CALL MPI_ISEND(buf, 1, MPI_INTEGER, 1, 99, MPI_COMM_WORLD, req, ierr)
>>>   CALL MPI_WAIT(req, status, ierr)
>>>   WRITE(*,*) "MPI_ISEND status:", status
>>> ELSE IF (rank == 1) THEN
>>>   buf = 0
>>>   CALL MPI_IRECV(buf, 1, MPI_INTEGER, 0, 99, MPI_COMM_WORLD, req, ierr)
>>>   CALL MPI_WAIT(req, status, ierr)
>>>   WRITE(*,*) "MPI_IRECV status:", status
>>>   WRITE(*,*) "Incoming message:", buf
>>> ENDIF
>>>
>>> The output is:
>>>
>>> $ mpif90 mpitest.F90
>>> $ mpiexec -n 2 ./a.exe
>>> MPI_Isend status: 0 0 0
>>> 0 0
>>> MPI_Irecv status: 4 0 0
>>> 99 0
>>> Incoming message: 1
>>>
>>> I don't understand why the MPI_WAIT that completes the non-blocking send
>>> does not return a proper STATUS array (the STATUS array always comes back
>>> with all-zero components, regardless of whether the send mode is
>>> standard, synchronous, ready or buffered). I thought that information
>>> such as the tag, size and destination rank would still be encoded in
>>> STATUS at the sending end after the send completes. I know that STATUS
>>> makes more sense at the receiving end, but I'd like to know whether this
>>> is normal, i.e. whether MPI_WAIT on a send always returns a zero STATUS
>>> array, or whether I'm doing something wrong.
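>>>
>>> For comparison, a minimal sketch of what I mean by reading the status at
>>> the receiving end (reusing the variables from the snippet above):
>>>
>>>   INTEGER :: count
>>>   CALL MPI_IRECV(buf, 1, MPI_INTEGER, 0, 99, MPI_COMM_WORLD, req, ierr)
>>>   CALL MPI_WAIT(req, status, ierr)
>>>   ! On the receive side the status fields are defined and can be read by name:
>>>   WRITE(*,*) "source:", status(MPI_SOURCE), "tag:", status(MPI_TAG)
>>>   CALL MPI_GET_COUNT(status, MPI_INTEGER, count, ierr)
>>>   WRITE(*,*) "count:", count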
>>>
>>> Thanks,
>>> Helvio
>>> _______________________________________________
>>> mpich-discuss mailing list
>>> mpich-discuss at mcs.anl.gov
>>> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss