[mpich-discuss] MPI_Isend/MPI_Irecv on shared memory

Yiannis Papadopoulos giannis.papadopoulos at gmail.com
Fri Dec 2 01:07:50 CST 2011


I see. As I mentioned, I cannot share the actual code in which I'm encountering this 
issue, and writing an example that exposes it is not trivial.

My code consists only of Isends/Irecvs. The Irecvs are posted with 
MPI_ANY_SOURCE. The Isends go to random ranks, and there are multiple Isends to 
each rank.

I used to have a single queue in which I would put the pending MPI_Requests for 
the Isends and MPI_Test them one by one until one returned false. By changing 
that to a hashtable of queues of MPI_Requests, keyed by destination rank, and 
MPI_Testing whether the first Isend to each rank has completed, I greatly 
improved performance (messages to one rank would get stuck behind incomplete 
sends to another rank in the single-queue implementation).

Is the behaviour of MPI_Test documented somewhere, or are there any 
benchmarks/models to guide me on when it's beneficial to use MPI_Test, 
MPI_Testsome, etc.? Unfortunately, any blocking operation is strictly forbidden.

Thanks

Pavan Balaji wrote:
>
> This is an artifact of MPI's progress semantics. Posting a bunch of 
> Isends/Irecvs does not guarantee that the data is actually communicated until 
> the next test/wait operation.
>
> Unfortunately, I can't tell much without seeing the code. Is there a small 
> benchmark you can write up that reproduces the problem?
>
>  -- Pavan
>
> On 11/11/2011 01:34 PM, Ioannis Papadopoulos wrote:
>> Hi,
>>
>> I have a program that does a number of MPI_Isends to 4 processes on a
>> quad-core machine. I can see using MPE that the messages are received by
>> the (preposted) MPI_Irecvs only after the MPI_Requests of the MPI_Isends
>> have been MPI_Tested. Is there any rule/preference that changes the
>> priority of how messages are sent inside the implementation depending on
>> whether MPI_Test/MPI_Wait has been called?
>>
>> I am using MPICH2 1.2.1p1. Unfortunately I cannot share my code, but I
>> will try to create a small example if possible.
>>
>> Thanks
>> _______________________________________________
>> mpich-discuss mailing list     mpich-discuss at mcs.anl.gov
>> To manage subscription options or unsubscribe:
>> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>


