PETSc runs slower on a shared memory machine than on a cluster

Matthew Knepley knepley at
Fri Feb 2 18:20:47 CST 2007

On 2/2/07, Shi Jin <jinzishuai at> wrote:
> > There is a point which is not clear for me.
> >
> > When you run in your shared-memory machine...
> >
> > - Are you running your code as a 'sequential'
> > program with a global, shared memory space?
> >
> > - Or are you running it through MPI, as a
> > distributed-memory application using MPI message
> > passing (where shared memory is the underlying
> > communication 'channel')?
> Thank you for replying.
> I run the code on a shared memory machine through MPI,
> just like what I do on a cluster. I simply did:
> petscmpirun -np 18 ./code
> I am not 100% sure whether MPICH-2 will automatically
> use shared memory as the underlying communication
> channel instead of the network, but I know most MPI
> implementations are smart enough to do so (like
> LAM-MPI, which I used before). Could anyone confirm this?
> Thank you.

This is missing the point, I think. It is just as Satish pointed out:
sparse matrix-vector multiply is completely dominated by memory bandwidth,
and on the shared-memory machine all of the processes contend for that
same bandwidth. I guarantee you that the performance problem is the
effective memory bandwidth available per process.



One trouble is that despite this system, anyone who reads journals widely
and critically is forced to realize that there are scarcely any bars to
publication. There seems to be no study too fragmented, no hypothesis too
trivial, no literature citation too biased or too egotistical, no design too
warped, no methodology too bungled, no presentation of results too
inaccurate, too obscure, and too contradictory, no analysis too
self-serving, no argument too circular, no conclusions too trifling or too unjustified,
no grammar and syntax too offensive for a paper to end up in print. --
Drummond Rennie

More information about the petsc-users mailing list