[petsc-users] Configuring petsc with MPI on ubuntu quad-core

Barry Smith bsmith at mcs.anl.gov
Thu Feb 3 20:54:58 CST 2011


On Feb 3, 2011, at 8:33 PM, Jed Brown wrote:

> Try telling your MPI to run the processes on different sockets, or on the same socket but bound to cores that do not share a cache. This is easy with Open MPI and with MPICH+Hydra. For serial jobs you can simply use taskset.
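
   For example, something along these lines (just a sketch; "./your_app" is a placeholder for your executable, and the exact binding flags vary between MPI versions):

      # Open MPI (newer releases; older ones use -bysocket / -bind-to-socket)
      mpiexec -n 2 --map-by socket --bind-to core ./your_app

      # MPICH with the Hydra process manager
      mpiexec -n 2 -bind-to socket ./your_app

      # pin a serial run to a single core
      taskset -c 0 ./your_app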

   We should add these options to the FAQ.html memory bandwidth question so everyone can easily look them up.

    Barry

> 
> 
>> On Feb 3, 2011 5:46 PM, "Barry Smith" <bsmith at mcs.anl.gov> wrote:
>> 
>> 
>>   Based on these numbers (that is, assuming they are a correct accounting of how much memory bandwidth you can get from the system*), you essentially have a one-processor machine that was sold to you as an 8-processor machine for sparse matrix computation. The one-core run already uses almost all of the memory bandwidth, so adding more cores to the computation helps very little because the computation is completely starved for memory bandwidth.
>> 
>>   Barry
>> 
>> * Perhaps something in the OS is not configured correctly and is thus not allowing access to all of the memory bandwidth, but this seems unlikely.
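
   As a rough sketch of the arithmetic behind this (assuming the usual AIJ/CSR storage with 8-byte double values and 4-byte column indices): each nonzero in a sparse MatMult costs about 2 flops but requires streaming roughly 12 bytes from memory, so the achievable flop rate is bounded by about (memory bandwidth in bytes/s) / 6 no matter how many cores are issuing instructions. Once a single core can drive most of the memory bus, the remaining cores mostly wait on memory instead of computing.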
>> 
>> On Feb 3, 2011, at 4:29 PM, Vijay S. Mahadevan wrote:
>> 
>> > Barry,
>> > 
>> > The outputs are attached. I do...
>> 
>> > <basicversion_np1.out><basicversion_np2.out>
>> 
> 


