[mpich-discuss] Using MPICH2 with shared memory
Darius Buntinas
buntinas at mcs.anl.gov
Tue Apr 13 09:28:08 CDT 2010
Socket connections are created for process management (between the
mpiexec process and the MPD processes, and between the MPD processes
and the MPI processes). However, communication between the MPI
processes will use shared memory. We don't use SysV shared memory
because of the limits on the number and size of segments; instead, we
create temporary files and mmap them.
You should, however, see a significant difference in performance on a
multicore machine. Try a latency benchmark (e.g., osu_latency from the
OMB: http://mvapich.cse.ohio-state.edu/benchmarks ). Using nemesis, you
can force all communication to go through the network by setting the
MPICH_NO_LOCAL environment variable for comparison:
For shared memory performance:
mpiexec -n 2 osu_latency
For sockets performance:
MPICH_NO_LOCAL=1 mpiexec -n 2 osu_latency
Here's a wiki page with tips for measuring shared-memory latency:
http://wiki.mcs.anl.gov/mpich2/index.php/Measuring_Nemesis_Performance
-d
On 04/13/2010 06:25 AM, Dai-Hee Kim wrote:
> Hello, everyone
> I am testing MPICH2 on a 24-core SMP machine.
> I installed MPICH2 with three devices (nemesis, ssm, shm)
> separately and ran a parallel program with each device.
> However, it seems that no device was using shared memory when I
> checked the performance and the network status with the netstat
> command (many sockets were created for self-connections).
> Of course, I could not see any shared memory segments through the
> ipcs -m and free commands.
> I compiled MPICH2 with the three devices using the configure options
> below, respectively:
> ./configure --prefix=.../nemesis --enable-fast=O3 --with-device=ch3:nemesis
> ./configure --prefix=.../ssm --enable-fast=O3 --with-device=ch3:ssm
> ./configure --prefix=.../shm --enable-fast=O3 --with-device=ch3:shm
> and compiled and ran the parallel program using the scripts (mpif90
> and mpiexec) in the different prefix directories, depending on which
> device I was testing.
> Do I need to pass some other options when installing MPICH2 or
> running the parallel program to get shared memory?
> Is there anything I missed?
> I really appreciate your help.
> Thank you.
> _______________________________________________
> mpich-discuss mailing list
> mpich-discuss at mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss