[petsc-users] Configuring petsc with MPI on ubuntu quad-core

Vijay S. Mahadevan vijay.m at gmail.com
Thu Feb 3 21:09:45 CST 2011


I currently have it configured with MPICH using --download-mpich. I
have not yet tried the --download-mpich-device option that Satish suggested.

Jed, is there a configure option to include the Hydra process manager
during the MPI install? I could also go the Open MPI route and install
the official Ubuntu package to use with PETSc.
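
For reference, something like the following is what I have in mind.
The flag names here are from memory and vary between MPICH and Open
MPI versions, so treat this as a sketch rather than verified syntax:

    # Rebuild MPICH with the Hydra process manager (assuming this
    # PETSc version supports the --download-mpich-pm option):
    ./configure --download-mpich --download-mpich-pm=hydra ...

    # Bind each rank to its own socket; recent Hydra spells this
    # -bind-to, while older versions used -binding:
    mpiexec -n 2 -bind-to socket ./BasicVersion

    # Open MPI equivalent (1.4-era flags; newer releases use
    # --map-by socket --bind-to socket instead):
    mpiexec -n 2 --bysocket --bind-to-socket ./BasicVersion

    # For a serial job, pin to a single core with taskset:
    taskset -c 0 ./BasicVersion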

On a side note, I installed the perf performance-monitoring tools on
Ubuntu (http://manpages.ubuntu.com/manpages/lucid/man1/perf-stat.1.html)
and ran the BasicVersion benchmark under perf stat. Here are the logs.
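
The invocation was simply perf stat wrapped around mpiexec, with
perf's default counter set (no -e flags):

    perf stat /home/vijay/karma/contrib/petsc/linux-gnu-cxx-opt/bin/mpiexec -n 1 ./BasicVersion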

 Performance counter stats for
'/home/vijay/karma/contrib/petsc/linux-gnu-cxx-opt/bin/mpiexec -n 1
./BasicVersion':

     853.205576  task-clock-msecs         #      0.996 CPUs
            107  context-switches         #      0.000 M/sec
              1  CPU-migrations           #      0.000 M/sec
          12453  page-faults              #      0.015 M/sec
     2981125976  cycles                   #   3494.030 M/sec
     2463421266  instructions             #      0.826 IPC
       33455540  cache-references         #     39.212 M/sec
       30304359  cache-misses             #     35.518 M/sec

    0.856807560  seconds time elapsed

 Performance counter stats for
'/home/vijay/karma/contrib/petsc/linux-gnu-cxx-opt/bin/mpiexec -n 2
./BasicVersion':

    2904.477114  task-clock-msecs         #      1.982 CPUs
            533  context-switches         #      0.000 M/sec
              3  CPU-migrations           #      0.000 M/sec
          24728  page-faults              #      0.009 M/sec
     9904814141  cycles                   #   3410.188 M/sec
     4932342066  instructions             #      0.498 IPC
      108666258  cache-references         #     37.413 M/sec
      105503187  cache-misses             #     36.324 M/sec

    1.465376789  seconds time elapsed

There is clearly something fishy about this: the cache-miss rate is
over 90% of cache references in both runs, and the IPC drops from 0.83
with one process to 0.50 with two. Next I am going to restart the
machine and try the same test without the GUI, to see whether memory
access improves without all the default background processes running.
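
It would also be worth double-checking the physical socket/core layout
before reading too much into the counters; something along these lines
should show what the kernel sees (lstopo from hwloc, if installed,
gives a fuller picture):

    # List the distinct physical package / core IDs the kernel reports:
    grep -E 'physical id|core id|model name' /proc/cpuinfo | sort -u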

Vijay

On Thu, Feb 3, 2011 at 8:54 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> On Feb 3, 2011, at 8:33 PM, Jed Brown wrote:
>
>> Try telling your MPI to run each process on different sockets, or on the same socket with different caches. This is easy with Open MPI and with MPICH+Hydra. You can simply use taskset for serial jobs.
>
>   We should add these options to the FAQ.html memory bandwidth question so everyone can easily look them up.
>
>    Barry
>
>>
>>
>>> On Feb 3, 2011 5:46 PM, "Barry Smith" <bsmith at mcs.anl.gov> wrote:
>>>
>>>
>>>   Based on these numbers (that is, assuming they are a correct accounting of how much memory bandwidth you can get from the system*), you essentially have a one-processor machine that was sold to you as an 8-processor machine for sparse matrix computation. The one-core run is using almost all of the memory bandwidth; adding more cores to the computation helps very little because they are completely starved for memory bandwidth.
>>>
>>>   Barry
>>>
>>> * perhaps something in the OS is not configured correctly and thus not allowing access to all the memory bandwidth, but this seems unlikely.
>>>
>>> On Feb 3, 2011, at 4:29 PM, Vijay S. Mahadevan wrote:
>>>
>>> > Barry,
>>> >
>>> > The outputs are attached. I do...
>>>
>>> > <basicversion_np1.out><basicversion_np2.out>
>>>
>>
>
>

