[mpich-discuss] Cannot build mpich2-1.0.8p1 (nemesis) with PGI8.0-4 on Linux x86_64
Gus Correa
gus at ldeo.columbia.edu
Wed Apr 1 18:15:57 CDT 2009
Hi Rajeev and list
Rajeev Thakur wrote:
> Gus,
> ssm uses shared memory within a node and TCP across nodes. You can use
> it with the PGI compilers until we get the Nemesis build to work with PGI
> compilers.
>
> Rajeev
>
Thank you for the clarification, Rajeev.
I didn't know that ch3:ssm uses TCP across the nodes,
as you and Pavan explained;
I thought it provided shared memory intranode only.
I will try it with the PGI compilers.
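For the archives, a configure line along these lines should select the
ssm channel with the PGI compilers (the install prefix below is only an
illustration; adjust it, and the compiler names, to your site):

```shell
# Build MPICH2 1.0.8p1 with the ch3:ssm channel and the PGI compilers.
# --with-device picks the communication channel at configure time;
# CC/CXX/F77/F90 select the PGI C, C++, Fortran 77, and Fortran 90 compilers.
./configure --prefix=/opt/mpich2-1.0.8p1-pgi-ssm \
    --with-device=ch3:ssm \
    CC=pgcc CXX=pgCC F77=pgf77 F90=pgf90
make
make install
```

The same recipe with --with-device=ch3:nemesis (or ch3:sock) gives the
other channels, which is why each channel/compiler pair ends up as a
separate build tree.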
Gus Correa
---------------------------------------------------------------------
Gustavo Correa
Lamont-Doherty Earth Observatory - Columbia University
Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------
>
>> -----Original Message-----
>> From: mpich-discuss-bounces at mcs.anl.gov
>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Gus Correa
>> Sent: Wednesday, April 01, 2009 4:48 PM
>> To: Mpich Discuss
>> Subject: Re: [mpich-discuss] Cannot build mpich2-1.0.8p1
>> (nemesis) with PGI8.0-4 on Linux x86_64
>>
>> Hi Rajeev, list
>>
>> Rajeev Thakur wrote:
>>>>> We have dual-socket quad-core Opteron processor nodes (8
>>>> cores/node).
>>>>> I am afraid ch3:sock may not be the best choice for this type of
>>>>> "fat" node, where shared memory shortcuts (memcpy ?) may
>> work better
>>>>> than sockets.
>>> You could try the ch3:ssm channel then.
>>>
>>> Rajeev
>>
>> Sure, and I did use ch3:ssm on a standalone multicore
>> workstation with good results.
>>
>> However, for a Beowulf cluster, ch3:nemesis seems to promise
>> the best of both worlds: shared-memory communication
>> intranode and TCP internode.
>> In addition, a number of codes we run here require more than
>> 8 processes, and will use more than one node at a time, which
>> cannot be done with ch3:ssm.
>>
>> It is also a matter of convenience; otherwise I would have to
>> keep MPICH2 builds for ch3:ssm, ch3:sock, and ch3:nemesis,
>> which combined with different compilers (Gnu, Intel, PGI, and
>> hybrid compiler mixes) would give me too large a number of
>> libraries to build and maintain.
>> I would rather build MPICH2 with the most flexible
>> communication channel, which seems to be nemesis, right?
>>
>> Also, I heard from the MPICH2 pros
>> (i.e. a gentleman called Rajeev Thakur and his team) very
>> convincing arguments to use ch3:nemesis.
>> And I believe them! :)
>>
>> Gus Correa
>> ---------------------------------------------------------------------
>> Gustavo Correa
>> Lamont-Doherty Earth Observatory - Columbia University
>> Palisades, NY, 10964-8000 - USA
>> ---------------------------------------------------------------------
>>