[mpich-discuss] General Scalability Question
Hiatt, Dave M
dave.m.hiatt at citi.com
Mon Oct 26 10:56:45 CDT 2009
I'm on 1.0.7, so perhaps it's an artifact of that, but we clearly saw an increase in the number of sockets open on node 0 as we added processes. Upping the descriptor limit isn't a big deal, but things did get sluggish, so something was going on. Then again, I understand that 1.1 has Nemesis as the default channel, and perhaps that's much better in this regard.
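(For what it's worth, here is a minimal sketch, not from this thread, of one way to raise the per-process descriptor limit from inside the application on a POSIX system. It assumes the hard limit set by the administrator is already high enough; the more usual route is "ulimit -n" or limits.conf before launching the job.)

    #include <stdio.h>
    #include <sys/resource.h>

    /* Raise the soft limit on open descriptors up to the hard limit.
       Call this early, before MPI_Init starts opening sockets.  Going
       past the hard limit still requires administrator action. */
    static int raise_fd_limit(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
            return -1;
        rl.rlim_cur = rl.rlim_max;      /* soft limit := hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
            return -1;
        printf("open-descriptor limit now %llu\n",
               (unsigned long long) rl.rlim_cur);
        return 0;
    }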
-----Original Message-----
From: mpich-discuss-bounces at mcs.anl.gov
[mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Pavan Balaji
Sent: Monday, October 26, 2009 10:52 AM
To: mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] General Scalability Question
On 10/26/2009 10:47 AM, Robertson, Andrew wrote:
> Dave,
> So does that imply you wrote the app from square one to use shared
> memory? Or is that part of how MPI gets invoked? One of my applications
> (GASP) uses LAM/MPI, and it appears that I get only one MPI process per
> node with multiple application instances.
>
> Is this accomplished via the use of the "nemesis" or "mt" channel?
Yes. Most MPI implementations do this automatically. You can treat each
core as a different node as far as the application is concerned. The MPI
implementation will internally optimize the rest for you (shared-memory
communication within a node and network communication between nodes).
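(A minimal sketch to illustrate the point, not part of the original message: the same source runs unchanged whether the ranks are spread across hosts or packed onto the cores of one host. Ranks that report the same processor name share a node, and that intra-node traffic is what the implementation can route over shared memory.)

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &len);

        /* Ranks printing the same host name are on the same node; the
           MPI library handles intra-node traffic (e.g. via shared memory
           with nemesis) and inter-node traffic over the network -- the
           application code is identical either way. */
        printf("rank %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }

Launching it with, say, "mpiexec -n 8 ./hello" on a multi-node allocation places several ranks per node, and the output shows which ranks share a host.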
-- Pavan
--
Pavan Balaji
http://www.mcs.anl.gov/~balaji
_______________________________________________
mpich-discuss mailing list
mpich-discuss at mcs.anl.gov
https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss