[mpich-discuss] General Scalability Question

Robertson, Andrew andrew.robertson at atk.com
Mon Oct 26 13:11:27 CDT 2009

Dave, Pavan,
I spoke with at least one developer. His experience is that, given the
amount of data passed in a CFD code and the amount of memory being
swapped into and out of core, the interprocessor bandwidth gets
swamped. Their benchmarks indicate that you are far better off if the
interprocessor communication gets "offloaded" to the network. So 16
two-core boxes would be faster than 2 sixteen-core boxes, as long as
you are using something like InfiniBand.
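To make the placement trade-off concrete, here is a hypothetical launch with MPICH's Hydra mpiexec comparing the two layouts for the same 32-rank job. The hostfile names and the solver binary name are made up for illustration:

```shell
# Hypothetical hostfiles (contents are one hostname per line):
#   hosts16 -> node01 .. node16  (16 two-core boxes)
#   hosts2  -> bignode01, bignode02  (2 sixteen-core boxes)

# 32 ranks, 2 per node: most traffic crosses the InfiniBand fabric
mpiexec -f hosts16 -ppn 2 -n 32 ./cfd_solver

# 32 ranks packed onto 2 nodes: ranks share memory bandwidth on-node
mpiexec -f hosts2 -ppn 16 -n 32 ./cfd_solver
```

The commands only differ in the hostfile and the `-ppn` (processes per node) count, which is what decides whether communication lands on the interconnect or on the shared memory bus.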

I am pinging my other vendors to get their take on this.

- Andy 

Andrew Robertson P.E.
CFD Analyst
GASL Operations
Tactical Propulsion and Controls
77 Raynor Avenue
Ronkonkoma NY 11779
631-737-6100 Ext 120
Fax: 631-588-7023

!! Knowledge and Thoroughness Baby !!

-----Original Message-----
From: mpich-discuss-bounces at mcs.anl.gov
[mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Pavan Balaji
Sent: Monday, October 26, 2009 11:59 AM
To: mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] General Scalability Question

On 10/26/2009 10:56 AM, Hiatt, Dave M wrote:
> I'm on 1.07 so perhaps it's an artifact of that, but we clearly got an
> increase in sockets being opened on node 0 as we added processes.
> Upping it isn't a big deal, but things did get sluggish, so something
> was going on. But again, I understand that 1.1 I think has Nemesis as
> the default, and perhaps that's much better in this regard.

Yeah, nemesis is much better optimized for multi-core systems. You
should give it a shot.
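For anyone wanting to try it, a sketch of building MPICH2 1.1 with the Nemesis channel selected explicitly at configure time; the install prefix here is an arbitrary assumption:

```shell
# Build MPICH2 with the Nemesis channel (shared memory on-node,
# network between nodes). Adjust the prefix to taste.
tar xzf mpich2-1.1.tar.gz && cd mpich2-1.1
./configure --with-device=ch3:nemesis --prefix=$HOME/mpich2-nemesis
make && make install
```

After installing, point your PATH at the new bin directory so mpiexec and mpicc pick up the Nemesis-enabled build.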

 -- Pavan

Pavan Balaji
mpich-discuss mailing list
mpich-discuss at mcs.anl.gov
