[mpich-discuss] MPI on a cluster of dual-CPU machines

Christina Patrick christina.subscribes at gmail.com
Mon Jul 20 14:38:19 CDT 2009


Thank you very much, Dr. Rajeev,

I will keep that in mind.

Regards,
Christina.

On Sat, Jul 18, 2009 at 10:08 AM, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> The default channel in 1.0.8 is ch3:sock, which will use TCP for
> communication even within a node. If you configure with
> --with-device=ch3:nemesis, it will use shared memory for communication
> within a node and TCP across nodes. In 1.1, nemesis is the default. The
> way to launch processes and the MPD configuration haven't changed
> between 1.0.8 and 1.1.
>
> Rajeev
>
>
>> -----Original Message-----
>> From: mpich-discuss-bounces at mcs.anl.gov
>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Pavan Balaji
>> Sent: Friday, July 17, 2009 7:09 PM
>> To: mpich-discuss at mcs.anl.gov
>> Subject: Re: [mpich-discuss] MPI on a cluster of dual-CPU machines
>>
>>
>> > Just one more question though. You say that the latest release
>> > detects this configuration. I am using mpich2-1.0.8. Does this
>> > version also detect the presence of dual CPUs automatically?
>>
>> Not with the default configuration. You'll need to configure
>> with --with-device=ch3:nemesis for that. But it's still
>> advisable to move to 1.1, since ch3:nemesis has had quite a
>> few fixes and major improvements between 1.0.8 and 1.1.
>>
>>   -- Pavan
>>
>> --
>> Pavan Balaji
>> http://www.mcs.anl.gov/~balaji
>>
>
>
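For readers following the thread, here is a minimal sketch of the configure
step Rajeev and Pavan describe, assuming an MPICH2 source tree and an install
prefix of /opt/mpich2 (the prefix is a placeholder):

  # Build MPICH2 with the nemesis channel: shared memory within a node,
  # TCP between nodes. With 1.1 nemesis is already the default, so the
  # --with-device flag is only needed for 1.0.8.
  ./configure --with-device=ch3:nemesis --prefix=/opt/mpich2
  make
  make install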
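And a sketch of launching with MPD, which (per Rajeev) works the same way in
1.0.8 and 1.1. The hostnames, process count, and program name below are
placeholders:

  # mpd.hosts: one line per machine, with the number of CPUs per node, e.g.
  #   node01:2
  #   node02:2
  mpdboot -n 2 -f mpd.hosts    # start the MPD ring on the nodes
  mpdtrace                     # verify the ring is up
  mpiexec -n 4 ./my_mpi_prog   # run 4 processes across the two dual-CPU nodes
  mpdallexit                   # shut down the ring when done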

