[mpich-discuss] FAQ about multicore clusters
Rajeev Thakur
thakur at mcs.anl.gov
Tue Sep 30 13:37:47 CDT 2008
For multicore clusters, you should use the Nemesis channel in MPICH2. In
1.1a1 it is the default; in 1.0.7 you need to configure with
--with-device=ch3:nemesis. Nemesis will use shared memory for communication
within a node and TCP (or other network) across nodes. For how to configure
and run MPD, see Section 5.1.7 "Running MPD on SMPs" of the MPICH2
installation guide.
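As a rough sketch (the install prefix, host file name, and node counts below are just placeholders matching your hypothetical 64-node, 8-core-per-node setup, with a shared filesystem assumed), the build and MPD startup could look something like this:

  # Build MPICH2 1.0.7 with the Nemesis channel (placeholder prefix)
  ./configure --with-device=ch3:nemesis --prefix=/opt/mpich2
  make
  make install

  # mpd.hosts lists one "hostname:ncpus" line per node, e.g. node01:8,
  # so that mpd knows how many cores each node has (see Section 5.1.7)
  mpdboot -n 64 -f mpd.hosts

  # Run the bundled cpi example on all 512 cores (64 nodes x 8 cores),
  # assuming the binary is visible on every node
  mpiexec -n 512 ./examples/cpi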
Rajeev
> -----Original Message-----
> From: owner-mpich-discuss at mcs.anl.gov
> [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Nicholas Yue
> Sent: Thursday, September 25, 2008 7:58 AM
> To: mpich-discuss at mcs.anl.gov
> Subject: [mpich-discuss] FAQ about multicore clusters
>
> Hi,
>
> I wasn't able to search the various mailing list archives, so I am
> posting here in the hope that someone can point me to more
> documentation or information.
>
> I have used MPICH on and off since the MPICH1 days.
>
> I recently got hold of a dual quad-core Xeon machine and have built
> the multi-threaded version of MPICH2.
>
> In the coming months, I will be involved in the deployment of a
> renderfarm (i.e. a Gigabit network, not Myrinet-type connectivity) on
> which I'd like to run some MPI feasibility tests.
>
> Is there some documentation or information about such a hybrid,
> i.e. multi-core, cluster? Specifically, which configure options should
> be used for building the source, and what is the recommended way to
> configure and run mpd (e.g. ncpus, etc.)?
>
> As a hypothetical case, let's just say:
>
> * 10 Gigabit TCP/IP interconnect
> * Dual quad-core Xeon per machine (1U)
> * 8 GB RAM per machine
> * 64 machines
>
>
> Regards
>
>