[mpich-discuss] modifying the round-robin
Benjamin Svetitsky
bqs at julian.tau.ac.il
Thu Dec 11 06:41:11 CST 2008
Thanks; the -1 option indeed gets all 4 processes to run on nodeB. But
then if I start another -n 4 job, it goes to nodeB as well. Is there a
way to get mpd to do load balancing here, namely to send the second
job's 4 processes to the next node in line, without specifying the node
on the mpiexec command line?
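(The only workaround I know is to name the node explicitly, along the
lines of

   mpiexec -host nodeA -n 4 hostname

if I remember the -host option correctly; but that hard-codes the node,
which is exactly what I want to avoid.)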
Ben
Rajeev Thakur wrote:
> Since you are running from node C, MPD will place the first process on node
> C by default. You can turn that feature off with the "-1" option to mpiexec.
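> With your command line that would be something like
>
> mpiexec -1 -l -n 4 hostname
>
> and the first process should then go to the next host in the mpd ring
> rather than to node C.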
>
> Rajeev
>
>> -----Original Message-----
>> From: mpich-discuss-bounces at mcs.anl.gov
>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of
>> Benjamin Svetitsky
>> Sent: Wednesday, December 10, 2008 2:41 AM
>> To: mpich-discuss at mcs.anl.gov
>> Subject: [mpich-discuss] modifying the round-robin
>>
>> Greetings,
>>
>> I am running MPICH2 on a cluster of four quad-core machines
>> under Linux.
>> If I run a job such as
>>
>> mpiexec -l -n 4 hostname
>>
>> then one process runs on each node, whereas I would prefer
>> that all four run on the same node. I tried modifying
>> mpd.hosts to read:
>>
>> nodeA:4
>> nodeB:4
>> nodeC:4
>> nodeD:4
>>
>> but the result is not what I expected:
>>
>> nodeC% mpiexec -l -n 4 hostname
>> 0: nodeC
>> 3: nodeB
>> 2: nodeB
>> 1: nodeB
>>
>> How can I get the mpd to fill the hosts one by one reliably?
>>
>> Incidentally, the :4 option is not documented in the
>> Installation Guide.
>> I picked it up in the gutter. If it doesn't do this, what
>> DOES it do?
>>
>> Thanks,
>> Ben
>> --
>> Prof. Benjamin Svetitsky Phone: +972-3-640 8870
>> School of Physics and Astronomy Fax: +972-3-640 7932
>> Tel Aviv University E-mail: bqs at julian.tau.ac.il
>> 69978 Tel Aviv, Israel WWW: http://julian.tau.ac.il/~bqs
>>
--
Prof. Benjamin Svetitsky Phone: +972-3-640 8870
School of Physics and Astronomy Fax: +972-3-640 7932
Tel Aviv University E-mail: bqs at julian.tau.ac.il
69978 Tel Aviv, Israel WWW: http://julian.tau.ac.il/~bqs