<div dir="ltr"><br><br><div class="gmail_quote">On Wed, Aug 27, 2008 at 3:20 PM, Jeff Squyres <span dir="ltr"><<a href="mailto:jsquyres@cisco.com">jsquyres@cisco.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
(Note: this thread is separately spanning the two different MPI implementation mailing lists...)<div class="Ih2E3d"><br>
<br>
<br>
On Aug 27, 2008, at 1:51 PM, Robert Kubrick wrote:<br>
<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;"><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
>>> For mpich2, the internal buffer space is limited by available memory.
>>> For each unexpected small message (<= 128K for ch3:sock), mpich2 does a
>>> malloc and receives the message into that buffer. So even unexpected
>>> small messages shouldn't block program flow... but you'll eventually
>>> crash if you run out of memory.
>>
>> Good to know.
>
> Most MPI implementations use a similar strategy.
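
To make that concrete, the behaviour being described looks roughly like the
untested sketch below: the sender blasts small log lines before the spooler
has posted any receives, each send completes eagerly, and the data sits in
implementation-allocated buffers until the receives catch up. The message
count, sizes, and rank numbers here are my own assumptions.

/* Untested sketch: rank 1 blasts small "log lines" at rank 0 before rank 0
 * posts any receives.  Small sends complete eagerly, so the data piles up
 * in the receiver's unexpected-message buffers (malloc'ed by the
 * implementation) until the receives below drain them. */
#include <mpi.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    int  rank, i, nmsg = 100000;          /* message count is an assumption */
    char line[128] = "something happened";

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        for (i = 0; i < nmsg; i++)        /* each send returns quickly... */
            MPI_Send(line, sizeof(line), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        sleep(30);                        /* ...while the receiver is busy */
        for (i = 0; i < nmsg; i++)
            MPI_Recv(line, sizeof(line), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}

Scale nmsg up far enough and the receiver runs out of memory, which is the
crash mentioned above.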
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
Yes. If you have a process that sends many small messages, such a logging strings to a spooler process, by reading the MPI standard you're left with the impression that MPI_Send might block until a matching receiving has been posted on the other side.<br>
</blockquote>
<br></div>
> You should always write your code to assume that MPI_SEND *will* block.
> Failure to do so will almost certainly result in "my code runs properly
> in MPI implementation X, but hangs in MPI implementation Y" (because X
> and Y provide differing amounts of internal buffer space). This is a
> common complaint among newbie MPI programmers, but the standard is fairly
> clear on this point.
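
Understood. For anyone reading along, the failure mode looks roughly like
this untested sketch: both ranks send before receiving, so the program only
completes as long as the implementation's internal buffering absorbs the
message (the 1 MB size is my own choice, picked to exceed typical eager
limits).

/* Untested sketch: both ranks send first, then receive.  Whether this ever
 * completes depends entirely on internal buffering -- the classic "works
 * with MPI X, hangs with MPI Y" program.  Assumes exactly 2 ranks. */
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, peer;
    static char buf[1 << 20];    /* 1 MB: above typical eager limits */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;

    /* Unsafe: MPI_Send may block until the peer posts a receive, but the
     * peer is blocked in its own MPI_Send. */
    MPI_Send(buf, sizeof(buf), MPI_CHAR, peer, 0, MPI_COMM_WORLD);
    MPI_Recv(buf, sizeof(buf), MPI_CHAR, peer, 0, MPI_COMM_WORLD,
             MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}

Reordering one rank's calls, using MPI_Sendrecv, or switching to nonblocking
calls removes the dependence on internal buffer space.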
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
If sender performance is a priority, the solution is to queue those log messages somewhere (either on the sending side or better off on the receiving side) to let the process continue execution. MPI_Isend won't make it because the overhead to manage hundreds of request would probably slow down execution more.<br>
</blockquote>
<br></div>
> Maybe, maybe not (I assume you mean Irecv?). With MPI_Irecv, the
> implementation may receive the message directly into your buffer (vs. an
> intermediate buffer and a later memcpy). Meaning: the assumption that the
> performance gain is offset is not necessarily true.

So what it all boils down to is that the only way to control buffering in
the current standard is through the use of multiple MPI_Irecv?
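
Something like the untested sketch below is what I have in mind for the
spooler side; the pool size, line length, and tag are made-up values rather
than anything from either implementation.

/* Untested sketch of the spooler/receiver side: pre-post a pool of
 * receives so incoming log lines land directly in user buffers instead of
 * unexpected-message buffers.  Pool size, line length, and tag are all
 * assumptions. */
#include <mpi.h>
#include <stdio.h>

#define POOL   64
#define MSGLEN 256

static char        lines[POOL][MSGLEN];
static MPI_Request reqs[POOL];

int main(int argc, char **argv)
{
    int i, idx;
    MPI_Status st;

    MPI_Init(&argc, &argv);

    /* Pre-post the whole pool. */
    for (i = 0; i < POOL; i++)
        MPI_Irecv(lines[i], MSGLEN, MPI_CHAR, MPI_ANY_SOURCE, 0,
                  MPI_COMM_WORLD, &reqs[i]);

    /* Service loop: handle whichever receive completes, then re-post that
     * slot.  (A real spooler would need a shutdown message; omitted.) */
    for (;;) {
        MPI_Waitany(POOL, reqs, &idx, &st);
        printf("log from rank %d: %s\n", st.MPI_SOURCE, lines[idx]);
        MPI_Irecv(lines[idx], MSGLEN, MPI_CHAR, MPI_ANY_SOURCE, 0,
                  MPI_COMM_WORLD, &reqs[idx]);
    }
}

A pre-posted receive gives the implementation a user buffer to deliver into,
rather than an unexpected-message buffer plus a later copy.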
<div class="Ih2E3d"><br>
<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
If process priority is reversed (sending process has low priority, receiving process high), it's probably better to use MPI_Battach/MPI_Bsend to move the buffering copy overhead to the sender?<br>
</blockquote>
<br></div>
> If you have a slow sender and a fast receiver, why not send immediately?
> (vs. forcing a buffered send, which will almost certainly slow down your
> overall performance)

If the library implementation is multi-threaded, there might be a slight
advantage in buffering messages and continuing execution. Then again, if
the sender is a low-priority process, it might make more sense to simply
send messages right away, as you point out.
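
For completeness, the buffered-send variant I was asking about would look
roughly like this untested sketch on the sender side; the spooler rank,
message count, and buffer sizing are my assumptions.

/* Untested sketch of the sender side with an attached buffer: MPI_Bsend
 * returns as soon as the message is copied into the attached buffer.
 * Assumes this runs on a rank other than the spooler (rank 0 here). */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int msglen = 128, nmsg = 1000, spooler = 0;
    int       i, packsize, bufsize;
    char     *attach_buf, line[128];

    MPI_Init(&argc, &argv);

    /* Size the buffer for nmsg messages plus MPI's per-message overhead. */
    MPI_Pack_size(msglen, MPI_CHAR, MPI_COMM_WORLD, &packsize);
    bufsize = nmsg * (packsize + MPI_BSEND_OVERHEAD);
    attach_buf = malloc(bufsize);
    MPI_Buffer_attach(attach_buf, bufsize);

    for (i = 0; i < nmsg; i++) {
        snprintf(line, sizeof(line), "log message %d", i);
        /* Returns once the data is in attach_buf; that copy is the cost
         * being traded for not blocking here. */
        MPI_Bsend(line, msglen, MPI_CHAR, spooler, 0, MPI_COMM_WORLD);
    }

    MPI_Buffer_detach(&attach_buf, &bufsize);
    free(attach_buf);
    MPI_Finalize();
    return 0;
}

MPI_Bsend returns once the data has been copied into the attached buffer, so
that copy is exactly the overhead being traded for not blocking the sender.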

>
> --
> Jeff Squyres
> Cisco Systems