[mpich-discuss] Re: MPI_Brecv vs multiple MPI_Irecv

Darius Buntinas buntinas at mcs.anl.gov
Thu Aug 28 10:55:05 CDT 2008



On 08/27/2008 03:51 PM, Robert Kubrick wrote:

>> For mpich2, the internal buffer space is limited by available memory. 
>> For each unexpected small message (<=128K for ch3:sock) mpich2 does a 
>> malloc and receives the message into that buffer.  So even unexpected 
>> small messages shouldn't block program flow...but you'll eventually 
>> crash if you run out of memory.
> 
> Good to know.

Most MPI implementations handle unexpected messages this way, though the 
thresholds will differ between implementations.  Note, however, that not 
every implementation is guaranteed to do this.
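
As an aside, a rough sketch (not from the original exchange; NBUF, LEN, 
and NMSG are arbitrary) of how to stay off the unexpected-message path 
entirely, which is what "multiple MPI_Irecv" in the subject amounts to: 
pre-post a pool of receives so small messages land directly in user 
buffers instead of being malloc'd and copied by the implementation.

/* Rough sketch: keep a pool of nonblocking receives posted so incoming
 * small messages match a posted receive instead of being copied into
 * the implementation's unexpected-message buffers.  Sizes illustrative. */
#include <mpi.h>

#define NBUF 16     /* receives kept posted at any time */
#define LEN  256    /* bytes per message slot */
#define NMSG 1000   /* messages to service before shutting down */

int main(int argc, char **argv)
{
    char        pool[NBUF][LEN];
    MPI_Request req[NBUF];
    MPI_Status  status;
    int         i, n, idx;

    MPI_Init(&argc, &argv);

    /* Post all receives up front. */
    for (i = 0; i < NBUF; i++)
        MPI_Irecv(pool[i], LEN, MPI_CHAR, MPI_ANY_SOURCE, 0,
                  MPI_COMM_WORLD, &req[i]);

    /* Service messages as they arrive; re-post the completed slot. */
    for (n = 0; n < NMSG; n++) {
        MPI_Waitany(NBUF, req, &idx, &status);
        /* ... handle the message in pool[idx] ... */
        MPI_Irecv(pool[idx], LEN, MPI_CHAR, MPI_ANY_SOURCE, 0,
                  MPI_COMM_WORLD, &req[idx]);
    }

    /* Cancel and complete the receives that are still pending. */
    for (i = 0; i < NBUF; i++) {
        MPI_Cancel(&req[i]);
        MPI_Wait(&req[i], MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}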


> 
> Yes. If you have a process that sends many small messages, such as 
> logging strings to a spooler process, reading the MPI standard leaves 
> you with the impression that MPI_Send might block until a matching 
> receive has been posted on the other side. If sender performance is a 
> priority, the solution is to queue those log messages somewhere (either 
> on the sending side or, better, on the receiving side) to let the 
> process continue execution. MPI_Isend won't do it, because the overhead 
> of managing hundreds of requests would probably slow down execution more.

Right, according to the standard, you cannot depend on any internal 
buffering at the sender or receiver: any MPI_Send can block.  Different 
implementations (or even the same implementation on different 
architectures) will have different thresholds and different buffering 
behavior, so for writing portable applications, you must assume that any 
MPI_Send will block until the matching receive is posted.  Otherwise 
you'll end up with a program that runs fine on one implementation/ 
architecture but mysteriously deadlocks on another.  And that's always 
fun to debug :-).
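
To make that concrete, here's a minimal sketch (not from the original 
post; COUNT is simply chosen to exceed a typical eager threshold) of the 
classic symmetric send/send pattern.  With small messages it often appears 
to work because both sends complete eagerly; once the messages are large 
enough, both ranks block in MPI_Send waiting for the other side's receive, 
and the program deadlocks:

/* Minimal deadlock sketch: both ranks send before receiving.  Only
 * "works" if the implementation buffers the messages.  Assumes exactly
 * two ranks. */
#include <mpi.h>

#define COUNT (1 << 20)   /* large enough to exceed a typical eager threshold */

static double sendbuf[COUNT], recvbuf[COUNT];

int main(int argc, char **argv)
{
    int rank, peer;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;

    /* Both ranks block here once COUNT is past the eager threshold. */
    MPI_Send(sendbuf, COUNT, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD);
    MPI_Recv(recvbuf, COUNT, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}

The portable fixes are to order the calls (one rank receives first), use 
MPI_Sendrecv, or use nonblocking operations.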

Unfortunately, providing a guaranteed amount of buffering at the 
receiver is not easy (or may not be possible) to do in all situations on 
all architectures.
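
The only buffering guarantee the standard does offer is on the sender 
side: MPI_Bsend with a user-attached buffer returns once the message has 
been copied into that buffer, regardless of what the receiver is doing.  
A rough sketch (buffer sizing and the rank roles are illustrative):

/* Illustrative sketch of sender-side guaranteed buffering via MPI_Bsend.
 * The sender attaches its own buffer, so MPI_Bsend returns without
 * waiting for the matching receive (or fails if the buffer is full).
 * There is no analogous receiver-side attach in the standard. */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    char  msg[] = "log line";
    char  rbuf[sizeof(msg)];
    int   rank, bufsize;
    void *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        /* Attach enough space for the pending buffered send(s). */
        bufsize = (int)sizeof(msg) + MPI_BSEND_OVERHEAD;
        buf = malloc(bufsize);
        MPI_Buffer_attach(buf, bufsize);

        /* Returns as soon as the message is copied into the attached
         * buffer, whether or not rank 0 has posted its receive yet. */
        MPI_Bsend(msg, (int)sizeof(msg), MPI_CHAR, 0, 0, MPI_COMM_WORLD);

        /* Detach waits until buffered messages have been delivered. */
        MPI_Buffer_detach(&buf, &bufsize);
        free(buf);
    } else if (rank == 0) {
        MPI_Recv(rbuf, (int)sizeof(msg), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}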

-d



