[mpich-discuss] Master/Slave MPI implementation Questions

Ricardo Román Brenes roman.ricardo at gmail.com
Fri Jul 6 11:33:30 CDT 2012


Hello:

1. As far as I know, you can define the number of processes to launch on
each node in the machinefile; with that you can specify what kind of "work
balance" you want (see the machinefile example below).
2. That part is done in the code, using the rank of each process. You can also
list the master cluster node with, say, 1 process, and leave the rest to the
OS (see the C sketch further below).
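
For point 1, a sketch of what an MPICH2 (Hydra) machinefile and launch
command could look like; the hostnames, slot counts, and program name are
only placeholders, and the "host:N" form gives the number of processes to
place on each node:

    headnode:1
    node01:12
    node02:12

    mpiexec -f machinefile -n 25 ./myprogram

Here headnode gets a single process (the master, rank 0) and the remaining
24 processes go to node01 and node02.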

Also remember that not every parallel program scales linearly (most don't
=) )
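
Regarding point 2, a minimal sketch in C of using the rank to keep process 0
as a pure master that only distributes tasks and collects results; the
"work" done by the slaves here (squaring an integer) is just a placeholder:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            /* master: hand out one task per slave, then collect results;
             * it does no computation itself */
            int i, result, done = 0;
            MPI_Status status;
            for (i = 1; i < size; i++)
                MPI_Send(&i, 1, MPI_INT, i, 0, MPI_COMM_WORLD);
            for (i = 1; i < size; i++) {
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, &status);
                done++;
            }
            printf("master: %d results collected\n", done);
        } else {
            /* slave: receive a task, compute, send the result back */
            int task, result;
            MPI_Status status;
            MPI_Recv(&task, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
            result = task * task;   /* placeholder computation */
            MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }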

On Fri, Jul 6, 2012 at 10:28 AM, Sarika K <sarikauniv at gmail.com> wrote:

> Dear MPICH discussion group:
>
> I am running a parallel code with mpich2-1.4.1p1 that uses Master/Slave
> MPI implementation and was originally set up for a cluster of single core
> nodes controlled by a head node.
>
> The code also works well with a single-node, multiple-core setup.
>
> Now I am testing this code on multiple nodes, each with multiple cores. The
> nodes are set up for passwordless login via ssh. The run sometimes hangs
> and takes longer to complete. (The run with a single node with 12 cores is
> faster than the corresponding run using 2 nodes with 12 cores each (24 total).)
>
> I am trying to resolve this issue and wondering if you have any feedback
> on:
> 1. Does a Master/Slave MPI implementation need any specific settings to work
> across a multiple-node, multiple-core machine setup?
> 2. Is there any way to explicitly designate a core as the master with mpich2
> and exclude it from computations?
>
> I would greatly appreciate any other pointers/suggestions to fix this
> issue.
>
> Thanks,
> Sarika

