[MPICH2 Req #3260] Re: [MPICH] About ch3:nemesis.
Bruno Simioni
brunosimioni at gmail.com
Thu Mar 15 15:35:48 CDT 2007
Yes, MPI_Barrier runs perfectly.
There is another question: is it normal to get 100 seconds for parallel
processing over a 10/100 network, 20-30 seconds running on a multiprocessor
machine over the sock channel, and 2-3 seconds over shared memory, for the
same task?
Is such a big difference normal?
Thanks.
On 3/14/07, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
>
> With 200,000 iterations, the sleep(1) probably causes some skew that
> makes the bcast and gather go out of sync. Another experiment to try is
> to add an MPI_Barrier either at the beginning or end of the loop (for each
> iteration), with the sleep(1) still there.
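>
> For instance (just a sketch, reusing the rx/r/fr/nn variables from the code
> quoted further down):
>
>     for (i = 0; i < niter; i++)
>     {
>         MPI_Gather(&rx, 1, MPI_DOUBLE, &r, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
>         if (myid != 0)
>             sleep(1);                   /* keep the sleep(1) as before */
>         MPI_Bcast(&fr, nn+1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
>         MPI_Barrier(MPI_COMM_WORLD);    /* resynchronize everyone each iteration */
>     }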
>
> Rajeev
>
>
> ------------------------------
> *From:* Bruno Simioni [mailto:brunosimioni at gmail.com]
> *Sent:* Wednesday, March 14, 2007 12:54 PM
> *To:* Rajeev Thakur
> *Subject:* Re: [MPICH2 Req #3260] Re: [MPICH] About ch3:nemesis.
>
> Hey Rajeev,
>
> About your questions:
>
> Yes, about 200000 iterations.
>
> With a small number of iterations I'm unable to see the problem; the large
> number of iterations accumulates it.
>
> Dummy computation: I'll put in a for() loop later; right now the lab is
> busy. Do you believe the sleep in the thread causes that delay? Because the
> same program running on just one machine is much faster than over the network.
>
> Bruno.
>
> On 3/14/07, Rajeev Thakur <thakur at mcs.anl.gov> wrote:
> >
> > Are you running for a large number of iterations of the for() loop?
> > What happens if you run just 1 iteration or a small number of iterations
> > (say 5)? Also, what happens if you replace the sleep(1) with some dummy
> > computation that takes 1 sec?
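> >
> > For instance, the dummy computation could be a plain busy loop (a sketch
> > only; the loop count is a guess and would need tuning so that it takes
> > about 1 second on your machine):
> >
> >     volatile double x = 0.0;
> >     long k;
> >     for (k = 0; k < 100000000L; k++)   /* tune this count to ~1 second */
> >         x += k * 0.5;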
> >
> > Rajeev
> >
> >
> > ------------------------------
> > *From:* Bruno Simioni [mailto:brunosimioni at gmail.com]
> > *Sent:* Tuesday, March 13, 2007 9:37 PM
> > *To:* Darius Buntinas
> > *Cc:* mpich2-maint at mcs.anl.gov
> > *Subject:* [MPICH2 Req #3260] Re: [MPICH] About ch3:nemesis.
> > *Importance:* High
> >
> > Hi!
> >
> > Yeah, you're correct. My problem is described by the second situation.
> >
> > 3 nodes, with one processor per node, and one process per processor.
> >
> > The program does not use MPI_Recv or MPI_Send, but MPI_Gather and MPI_Bcast.
> >
> > if (myid == 0)
> > {
> >     /* stuff... */
> >     for (...)
> >     {
> >         /* Receive information from all nodes of the communicator. */
> >         MPI_Gather(&rx, 1, MPI_DOUBLE, &r, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
> >
> >         /* Calculate FR, using the Rx values from MPI_Gather. */
> >
> >         /* Send fr to everybody. */
> >         MPI_Bcast(&fr, nn+1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
> >
> >         /* Calculate something and write to a file. */
> >     }
> >     MPI_Finalize();
> > }
> > else
> > {
> >     /* stuff... */
> >     for (...)
> >     {
> >         /* Send rx to the root. */
> >         MPI_Gather(&rx, 1, MPI_DOUBLE, &r, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
> >
> >         /* Calculate something. */
> >
> >         Sleep(1); /* To stretch out the program. In the future I'll
> >                      replace this with a for() loop and see the results. */
> >
> >         /* Receive FR from the root. */
> >         MPI_Bcast(&fr, nn+1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
> >
> >         /* Calculate something and write to a file. */
> >     }
> >     MPI_Finalize();
> > }
> >
> >
> > Basically, that is the code.
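> >
> > A stripped-down, self-contained sketch of that structure (the real
> > calculations, buffer sizes, and file output are simplified away):
> >
> >     #include <mpi.h>
> >     #include <stdio.h>
> >     #include <stdlib.h>
> >     #include <unistd.h>   /* for sleep(); on Windows, Sleep(1000) from <windows.h> */
> >
> >     int main(int argc, char **argv)
> >     {
> >         int myid, nn, i, niter = 200000;     /* number of loop iterations */
> >         double rx, *r, *fr;
> >
> >         MPI_Init(&argc, &argv);
> >         MPI_Comm_rank(MPI_COMM_WORLD, &myid);
> >         MPI_Comm_size(MPI_COMM_WORLD, &nn);
> >
> >         r  = malloc(nn * sizeof(double));    /* gathered rx values (used on the root) */
> >         fr = calloc(nn + 1, sizeof(double)); /* result broadcast by the root */
> >
> >         for (i = 0; i < niter; i++)
> >         {
> >             rx = myid + i;                   /* stand-in for the real per-node value */
> >             MPI_Gather(&rx, 1, MPI_DOUBLE, r, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
> >
> >             if (myid == 0) {
> >                 /* calculate fr from the gathered values in r */
> >             } else {
> >                 sleep(1);                    /* placeholder for the missing computation */
> >             }
> >
> >             MPI_Bcast(fr, nn + 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);
> >         }
> >
> >         free(r);
> >         free(fr);
> >         MPI_Finalize();
> >         return 0;
> >     }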
> >
> > And these are the results:
> >
> > I measure the time of the main for() loop to estimate and compare the
> > parallel performance.
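> >
> > The measurement itself is just a wall-clock timing around the loop;
> > roughly like this sketch, using MPI_Wtime() (the exact code may differ):
> >
> >     double t_start, t_end;
> >
> >     t_start = MPI_Wtime();              /* just before the main for() loop */
> >     for (i = 0; i < niter; i++) {
> >         /* gather / calculate / bcast, as in the code above */
> >     }
> >     t_end = MPI_Wtime();
> >
> >     if (myid == 0)
> >         printf("main loop: %.1f seconds\n", t_end - t_start);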
> >
> > the program running on 3 machines - 77 s
> > the program running on 3 processes on the same machine using the sock
> > channel - 109 s
> >
> > the program running on 3 machines AND the Sleep(1) line - 1565 s
> > the program running on 3 processes on the same machine using the sock
> > channel AND the Sleep(1) line - 295 s
> >
> > How can I explain these results?
> >
> > If you don't understand the Sleep(1) line, I'll explain. For now, the
> > algorithm is not finished yet; a lot of operations are missing. So, to
> > stand in for them, I put in the Sleep(1) and ran the test.
> >
> > It appears that if a node spends a lot of time without communicating, it
> > takes a long time to get the communication going again. Is that right?
> >
> > Bruno.
> >
> >
> > On 3/13/07, Darius Buntinas <buntinas at mcs.anl.gov> wrote:
> > >
> > >
> > > Just so I understand, the master is doing something like this:
> > >
> > > MPI_Send(small msg to slave)
> > > MPI_Recv(small answer from slave)
> > >
> > > If the slave does something like this;
> > >
> > > MPI_Recv(small msg from master)
> > > /* no processing */
> > > MPI_Send(small answer to master)
> > >
> > > The time for the master to complete the send and receive is relatively
> > > short. But if the slave does something like this:
> > >
> > > MPI_Recv(small msg from master)
> > > sleep(120)
> > > MPI_Send(small answer to master)
> > >
> > > Then the time for the master to complete the send and receive is much
> > > more than 120 seconds longer than in the first case.
> > >
> > > Is this right?
> > >
> > > How many nodes are you using?
> > > How many processors does each node have?
> > > How many processes are running on each node?
> > >
> > > If you can send us the simplest program that demonstrates this
> > > behavior we
> > > can take a look at it.
> > >
> > > Darius
> > >
> > > On Tue, 13 Mar 2007, Bruno Simioni wrote:
> > >
> > > > Hey Darius,
> > > >
> > > > Thank you for your help. That really cleared things up for me.
> > > >
> > > > There is another thing.
> > > >
> > > > It is related to a speed and performance problem.
> > > >
> > > > In my programs, I notice that if I send TCP packets across the network
> > > > several times, one after another, without any delay, the communication
> > > > runs perfectly, but if some node does some complex computation that
> > > > takes some time, the communication has a large delay.
> > > >
> > > > For example:
> > > >
> > > > The master sends packets to a node several times. The node processes
> > > > some small thing and answers the master by sending a packet. (OK, the
> > > > communication is perfect.)
> > > >
> > > > The troublesome situation:
> > > >
> > > > The master sends a packet to a node. The node processes for a long time
> > > > and then answers. The answer takes the processing time plus some extra
> > > > time, a kind of overhead. It sounds like something "halted" the network
> > > > and had to "bring it up" again when requested.
> > > >
> > > > Am I correct?
> > > >
> > > > I'm using windows XP, and the latest version of MPICH2.
> > > >
> > > > Thanks.
> > > >
> > > > Bruno, from Brazil.
> > > >
> > > > On 3/13/07, Darius Buntinas <buntinas at mcs.anl.gov> wrote:
> > > >>
> > > >>
> > > >> A channel is a communication method. For example, the default
> > > channel,
> > > >> sock, uses tcp sockets for communication, while the shm channel
> > > >> communicates using shared-memory.
> > > >>
> > > >> Nemesis is a channel that uses shared-memory to communicate within
> > > a node,
> > > >> and a network to communicate between nodes. Currently Nemesis
> > > supports
> > > >> tcp, gm, mx, and elan networks. (Eventually these will be
> > > selectable at
> > > >> runtime, but for now the network has to be selected when MPICH2 is
> > > >> compiled.)
> > > >>
> > > >> Does that help?
> > > >>
> > > >> -d
> > > >>
> > > >> On Tue, 13 Mar 2007, Bruno Simioni wrote:
> > > >>
> > > >> > Hi!
> > > >> >
> > > >> > Can anyone explain to me what a channel is in MPICH2, and what it
> > > >> > is used for?
> > > >> >
> > > >> > The next question: what is the nemesis channel?
> > > >> >
> > > >> > thanks.
> > > >> >
> > > >> >
> > > >>
> > > >
> > > >
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > Bruno.
> >
> >
>
>
> --
> Bruno.
>
>
--
Bruno.