[PETSC #16200] Petsc Performance on Dual- and Quad-core systems

balay at mcs.anl.gov balay at mcs.anl.gov
Fri May 25 16:39:57 CDT 2007


I don't know what the current naming convention wrt cores is. It's
all messed up. I'll use the following definitions here.

1. CPU = core = processor
2. Dual-core is short for 'dual cores per chip'.
3. chip: the smallest unit of CPUs one can buy [current marketing
might call this a CPU - but that doesn't make sense to me]

Anyway, back to the topic at hand. One way to look at this issue is:
as long as the memory bandwidth scales up with the number of
processors, you'll see scalable performance.

On current machines, however, the bandwidth doesn't scale with
multiple cores [per chip]. It might scale with the number of chips,
though. This is true of both AMD and Intel chips.

For example: if you have a 2x2 machine [2 dual-core chips], you
might see the best performance with 'mpirun -np 2'.

Satish

On Fri, 25 May 2007, Barry Smith wrote:
> 
>   Carlos,
> 
>    We don't have any particular numbers for these systems. There are
> two main things to keep in mind.
> 
> 1) Ideally the MPI you use will take advantage of the local shared memory
> within a node to lower the communication time. MPICH, for example, can be
> compiled with certain options to help with this.
> 2) The memory bandwidth is often shared among several of the cores. Since
> sparse matrix computations are almost totally bound by memory bandwidth,
> the most important thing to consider when buying a system like this is how
> much total memory bandwidth it has and how much of it is really usable
> by each core. Ideally you'd like to see 6+ gigabytes per second of peak
> memory bandwidth per core.
> 
>    Barry
> 
> 
> On Wed, 23 May 2007, Carlos Erik Baumann wrote:
> 
> > 
> > Hello Everyone,
> > 
> > Do you have any performance number on Petsc solving typical heat
> > transfer / laplace / poisson  problems using dual and/or quad-core
> > workstations ?
> > 
> > I am interested in speedup based on problem size, etc.
> > 
> > Looking forward to your reply.
> > 
> > Best,
> > 
> > --Carlos
> > 
> > Carlos Baumann         Altair Engineering, Inc.
> > 

More information about the petsc-users mailing list