Hi Pavan,

OK, I'm pretty much sorted: the MPI job is running at about 99% efficiency (times 4 servers) versus the non-MPI single-server version, even with IO to stdout. I tried ch3:nemesis and it was 2% slower than ch3:sock. I'm using multi-threaded slaves with processor affinity, which is about 5% faster than single-threaded slaves.
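In case it helps anyone else reading the archives, the thread pinning is roughly like the sketch below (Linux/pthreads; NTHREADS, slave_work, and spawn_pinned_slaves are illustrative names, not my actual code):

    /* Rough sketch: one slave thread per core, each pinned with
       pthread_setaffinity_np(). Illustrative only, not production code. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>

    #define NTHREADS 4              /* assumption: 4 cores per server */

    static void *slave_work(void *arg)
    {
        long core = (long) arg;
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET((int) core, &set);
        /* Pin this thread to its own core. */
        pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        /* ... slave compute loop goes here ... */
        return NULL;
    }

    static int spawn_pinned_slaves(pthread_t tids[NTHREADS])
    {
        for (long i = 0; i < NTHREADS; i++)
            if (pthread_create(&tids[i], NULL, slave_work, (void *) i) != 0)
                return -1;          /* thread creation failed */
        return 0;
    }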
MPICH2 rocks.

Thanks for the help,
Colin

On Fri, Jan 21, 2011 at 9:10 PM, Pavan Balaji <balaji@mcs.anl.gov> wrote:
> On 01/21/2011 06:57 AM, Pavan Balaji wrote:
>>> 2. If I don't use MPI IO and stick to writing to stdout with fwrite(), is
>>> it better to (a) send from the slave to the master and then fwrite in the
>>> master, or (b) write to stdout in the slaves?
>>
>> All stdout from all processes is always funnelled through the master.
>
> Oops, sorry, I meant all stdout is funnelled through the mpiexec process, not rank 0 of the application.
>
> -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
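P.S. For anyone who finds this thread later: option (a) above, shipping output lines from the slaves to the master and doing the fwrite there, looks roughly like the sketch below. Illustrative only, not my production code; OUT_TAG, MAXLINE, emit_line, and drain_one_line are made-up names.

    /* Sketch: slaves send output lines to rank 0, which does the fwrite.
       Tag and buffer size are illustrative. */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    #define OUT_TAG 42
    #define MAXLINE 256

    /* Called on any rank: rank 0 writes directly, slaves ship the line over. */
    static void emit_line(int rank, const char *line)
    {
        if (rank == 0)
            fwrite(line, 1, strlen(line), stdout);
        else
            MPI_Send((void *) line, (int) strlen(line) + 1, MPI_CHAR,
                     0, OUT_TAG, MPI_COMM_WORLD);
    }

    /* Called on rank 0: receive one line from any slave and write it out. */
    static void drain_one_line(void)
    {
        char buf[MAXLINE];
        MPI_Status status;
        MPI_Recv(buf, MAXLINE, MPI_CHAR, MPI_ANY_SOURCE, OUT_TAG,
                 MPI_COMM_WORLD, &status);
        fwrite(buf, 1, strlen(buf), stdout);
    }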