[MPICH] Loopback Communication
Anthony Chan
chan at mcs.anl.gov
Wed Feb 27 20:41:05 CST 2008
1.0.2 is old. Download the latest mpich2 and build it with
--with-device=ch3:nemesis and you should see improvement in
your test.
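
For reference, a build along the lines suggested above might look like the following. This is a sketch, not from the original thread: the tarball version and install prefix are placeholders to substitute with your own.

```shell
# Hypothetical build sketch -- version number and prefix are placeholders.
tar xzf mpich2-X.Y.Z.tar.gz        # use the current mpich2 release
cd mpich2-X.Y.Z
./configure --with-device=ch3:nemesis --prefix=$HOME/mpich2-nemesis
make && make install

# Put the nemesis build's mpicc/mpiexec first on PATH before
# recompiling and rerunning the benchmark.
export PATH=$HOME/mpich2-nemesis/bin:$PATH
```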
A.Chan
On Wed, 27 Feb 2008, Elvedin Trnjanin wrote:
> Version: 1.0.2
> Device: ch3:sock
> Configure Options:
>
> I didn't install it and those are the only relevant bits about the
> configuration I could find.
>
> Anthony Chan wrote:
> > Did you configure mpich2 with --with-device=ch3:nemesis, which uses
> > shared memory for intranode communication?
> >
> > A.Chan
> >
> > On Wed, 27 Feb 2008, Elvedin Trnjanin wrote:
> >
> >
> >> I have a "ping pong" latency and bandwidth measurement program that I'm
> >> using to approximate the performance of an InfiniBand cluster. I've
> >> noticed that for self-to-self asynchronous communication with
> >> mpirun_ssh, transferring a 1MB message takes around 3.2 ms, while for
> >> node A to node B communication the latency is around 2.9 ms per 1MB
> >> message. On the bandwidth side, self-to-self communication is 60MB/s
> >> slower. A 60MB/s difference might not seem significant next to the
> >> total bandwidth of the larger InfiniBand transfers, but on a small
> >> Gigabit Ethernet cluster I use, node-to-node asynchronous bandwidth
> >> is slightly less than 60MB/s in total, so the difference is pretty
> >> significant to me.
> >>
> >> My question is whether anyone can explain why self-to-self
> >> communication is slower, and why the interconnect is involved at all
> >> instead of a simple memory copy. I apologize in advance if this has
> >> been discussed in the MPI-2 standard; I'm reading the (unofficial)
> >> report and haven't found it yet.
> >>
> >> Regards,
> >> Elvedin Trnjanin
> >>
> >>
> >>
> >
> >
>
>
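
The kind of measurement described in the quoted question can be sketched roughly as below. This is not the original benchmark: the message size matches the 1MB from the thread, but the iteration count, buffer handling, and timing scheme are illustrative assumptions. Run under one process (e.g. `mpiexec -n 1`) it exercises self-to-self transfers; under two processes it exercises rank-to-rank transfers.

```c
/* Minimal sketch of a nonblocking bandwidth test -- not the original
 * program from the thread. Iteration count and structure are
 * illustrative. With -n 1 rank 0 sends to itself; with -n 2 (and a
 * machinefile) ranks 0 and 1 exchange messages across nodes. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG_BYTES (1 << 20)   /* 1MB message, as in the thread */
#define ITERS     100

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    char *sendbuf = malloc(MSG_BYTES);
    char *recvbuf = malloc(MSG_BYTES);
    /* With one process the peer is ourselves; with two, the other rank. */
    int peer = (size == 1) ? rank : 1 - rank;

    if (rank <= 1) {   /* only ranks 0 and 1 participate */
        MPI_Request reqs[2];
        double t0 = MPI_Wtime();
        for (int i = 0; i < ITERS; i++) {
            /* Post the receive first, then the send, then wait on both. */
            MPI_Irecv(recvbuf, MSG_BYTES, MPI_CHAR, peer, 0,
                      MPI_COMM_WORLD, &reqs[0]);
            MPI_Isend(sendbuf, MSG_BYTES, MPI_CHAR, peer, 0,
                      MPI_COMM_WORLD, &reqs[1]);
            MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        }
        double elapsed = MPI_Wtime() - t0;
        if (rank == 0)
            printf("avg per exchange: %.3f ms, bandwidth: %.1f MB/s\n",
                   1e3 * elapsed / ITERS,
                   (double)MSG_BYTES * ITERS / elapsed / 1e6);
    }

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
```

With the ch3:sock device, even the self-to-self case goes through the socket path, which is consistent with the numbers reported above; the ch3:nemesis device replaces that intranode path with shared memory.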
More information about the mpich-discuss mailing list