[mpich-discuss] hydra_pmi_proxy using 100% CPU

Colin Hercus colin at novocraft.com
Fri Jan 21 09:17:42 CST 2011


Hi Pavan,
OK, I'm pretty much sorted. The MPI job is running at about 99% of the speed
(times 4 servers) of the non-MPI single-server version, even with IO to stdout.
I tried ch3:nemesis and it was 2% slower than ch3:sock. I'm using multi-threaded
slaves with processor affinity; that's about 5% faster than single-threaded slaves.
MPICH2 rocks..
Thanks for the help
Colin



On Fri, Jan 21, 2011 at 9:10 PM, Pavan Balaji <balaji at mcs.anl.gov> wrote:

>
> On 01/21/2011 06:57 AM, Pavan Balaji wrote:
>
>> 2. If I don't use MPI-IO and stick to writing to stdout with fwrite(), is
>>> it better to (a) send from the slave to the master and then fwrite in the
>>> master, or (b) write to stdout in the slaves?
>>>
>>
>> All stdout from all processes is always funnelled through the master.
>>
>
> Oops, sorry, I meant all stdout is funnelled through the mpiexec process,
> not rank 0 of the application.
>
>
>  -- Pavan
>
> --
> Pavan Balaji
> http://www.mcs.anl.gov/~balaji
>

