[PETSC #16200] Petsc Performance on Dual- and Quad-core systems

Barry Smith petsc-maint at mcs.anl.gov
Fri May 25 16:08:12 CDT 2007


  Carlos,

   We don't have any particular numbers for these systems. There are
two main things to keep in mind.

1) Ideally the MPI you use will take advantage of the shared memory
within a node to lower the communication time. MPICH, for example, can be
built with options that enable this.
2) The memory bandwidth is often shared among several of the cores. Since
sparse matrix computations are almost entirely bound by memory bandwidth,
the most important things to consider when buying a system like this are how
much total memory bandwidth it has and how much of that is really usable by
each core. Ideally you'd like to see 6+ gigabytes per second of peak memory
bandwidth per core.

   Barry


On Wed, 23 May 2007, Carlos Erik Baumann wrote:

> 
> Hello Everyone,
> 
> 
> Do you have any performance numbers for PETSc solving typical heat
> transfer / Laplace / Poisson problems using dual- and/or quad-core
> workstations?
> 
> 
> I am interested in speedup based on problem size, etc.
> 
> 
> Looking forward to your reply.
> 
> 
> Best,
> 
> 
> --Carlos
> 
> 
> Carlos Baumann         Altair Engineering, Inc.
> 

More information about the petsc-users mailing list