[mpich-discuss] Clarification needed on running processes on MPICH2
Waruna Ranasinghe
warunapww at gmail.com
Sat Nov 29 23:03:14 CST 2008
Yes, I'm sorry if you were misled by my wording.
I meant running two instances.
2008/11/30 Rajeev Thakur <thakur at mcs.anl.gov>
> If you run two instances of mpiexec -n 3 tst, you are running a total of
> 6 processes, not 2 or 3.
>
> Rajeev
>
> ------------------------------
> *From:* mpich-discuss-bounces at mcs.anl.gov [mailto:
> mpich-discuss-bounces at mcs.anl.gov] *On Behalf Of *Waruna Ranasinghe
> *Sent:* Saturday, November 29, 2008 5:38 AM
>
> *To:* mpich-discuss at mcs.anl.gov
> *Subject:* Re: [mpich-discuss] Clarification needed on
> running processes on MPICH2
>
> >What do you mean by "one process" when clearly 3
> >processes are running? When you say 2 processes of tst,
> >do you mean you launched two instances of
> >"mpiexec -n 3 ./tst" simultaneously?
>
> Yes, I meant launching two instances of "mpiexec -n 3 ./tst"
> simultaneously.
>
> I cannot reach the MPICH cluster until Monday, so I'll send the
> modified program on Monday.
>
> Thank You,
> Waruna
>
> 2008/11/29 Anthony Chan <chan at mcs.anl.gov>
>
>> From your Readme file:
>>
>> > mpiexec -n 3 <path>
>> >
>> > output:
>> > when only one process runs
>> >
>> > 0:Total: 499999999500000000 : Time: 9.272023
>> > 2:Total: 499999999500000000 : Time: 10.722239
>> > 1:Total: 499999999500000000 : Time: 11.324907
>> >
>> > When two processes of 'tst' run at the same time
>> >
>> > 0:Total: 499999999500000000 : Time: 9.538206
>> > 2:Total: 499999999500000000 : Time: 16.045104
>> > 1:Total: 499999999500000000 : Time: 22.400754
>>
>> What do you mean by "one process" when clearly 3
>> processes are running? When you say 2 processes of tst,
>> do you mean you launched two instances of
>> "mpiexec -n 3 ./tst" simultaneously?
>>
>> Could you add the following lines to your tst.cpp:
>>
>>     /* report which host each rank is running on */
>>     char host_name[ MPI_MAX_PROCESSOR_NAME ];
>>     int namelen;
>>     MPI_Get_processor_name( host_name, &namelen );
>>     printf("rank %d running on %s\n", rank, host_name);
>>
>> before the first fflush(stdout) to show the location of each process.
>> Rerun your experiments and let us know the result.
>>
>> Also, you can use MPI_Wtime() instead of Duration.cpp, which may
>> simplify your test program.
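>>
>> As a minimal sketch of the timing pattern (the loop bound here is
>> inferred from the totals in your readme; "rank" and the other variable
>> names are placeholders, not necessarily your actual code):
>>
>>     double t_start = MPI_Wtime();   /* wall-clock time in seconds */
>>     long long total = 0;
>>     for (long long i = 0; i < 1000000000LL; i++)
>>         total += i;                 /* the work being timed */
>>     double t_end = MPI_Wtime();
>>     printf("%d:Total: %lld : Time: %f\n", rank, total, t_end - t_start);
>>     fflush(stdout);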
>>
>> A.Chan
>>
>> ----- "Waruna Ranasinghe" <warunapww at gmail.com> wrote:
>>
>> > Hi Rajeev,
>> > Here's the test program; it's just for testing. The results I got
>> > are in the readme file in the attachment.
>> >
>> > Thank You
>> > Waruna
>> >
>> > 2008/11/27 Rajeev Thakur <thakur at mcs.anl.gov>
>> >
>> > > In that case, what kind of program are you running? Can you send us
>> > > a small test program?
>> > >
>> > > Rajeev
>> > >
>> > > ------------------------------
>> > > *From:* mpich-discuss-bounces at mcs.anl.gov [mailto:
>> > > mpich-discuss-bounces at mcs.anl.gov] *On Behalf Of *Waruna Ranasinghe
>> > > *Sent:* Wednesday, November 26, 2008 9:05 PM
>> > >
>> > > *To:* mpich-discuss at mcs.anl.gov
>> > > *Subject:* Re: [mpich-discuss] Clarification needed on
>> > > running processes on MPICH2
>> > >
>> > > Hi Rajeev,
>> > > There's no doubt that the processes run on all 3 machines.
>> > > I have tried the cpi example and it prints the hostnames of the 3
>> > > machines.
>> > >
>> > > mpiexec -l -n 3 <path to process>
>> > > (the same path is available on all 3 machines)
>> > >
>> > > Thank you
>> > > Waruna
>> > >
>> > >
>> > > 2008/11/26 Rajeev Thakur <thakur at mcs.anl.gov>
>> > >
>> > >> Make sure the processes are actually running on the 3 machines.
>> > >> Try the cpi example in the examples directory. It prints out the
>> > >> hostname. How are you running the job?
>> > >>
>> > >> Rajeev
>> > >>
>> > >> ------------------------------
>> > >> *From:* mpich-discuss-bounces at mcs.anl.gov [mailto:
>> > >> mpich-discuss-bounces at mcs.anl.gov] *On Behalf Of *Waruna
>> > Ranasinghe
>> > >> *Sent:* Wednesday, November 26, 2008 11:07 AM
>> > >>
>> > >> *To:* mpich-discuss at mcs.anl.gov
>> > >> *Subject:* Re: [mpich-discuss] Clarification needed on running
>> > >> processes on MPICH2
>> > >>
>> > >> Fedora 8 - MPICH2
>> > >> one machine with a Core 2 Duo (the master),
>> > >> two machines with one core each;
>> > >> altogether 3 nodes
>> > >>
>> > >> 2008/11/26 Rajeev Thakur <thakur at mcs.anl.gov>
>> > >>
>> > >>> What kind of environment are you running on (how many machines,
>> > >>> how many cores each)?
>> > >>>
>> > >>> Rajeev
>> > >>>
>> > >>> ------------------------------
>> > >>> *From:* mpich-discuss-bounces at mcs.anl.gov [mailto:
>> > >>> mpich-discuss-bounces at mcs.anl.gov] *On Behalf Of *Waruna
>> > Ranasinghe
>> > >>> *Sent:* Wednesday, November 26, 2008 2:15 AM
>> > >>> *To:* mpich-discuss at mcs.anl.gov
>> > >>> *Subject:* Re: [mpich-discuss] Clarification needed on running
>> > >>> processes on MPICH2
>> > >>>
>> > >>> Hi Anthony,
>> > >>> I'm sorry, but in this case I have to disagree with you, because
>> > >>> it is not max(t1,t2) but t1+t2 (this is the actual result I got).
>> > >>> I use fflush(stdout) as well.
>> > >>>
>> > >>> What I think is that the cluster runs the two programs
>> > >>> alternately, i.e. process A runs for a while (say t3 seconds),
>> > >>> then process B for some time, and so on. Therefore, ultimately
>> > >>> both processes take t1+t2.
>> > >>>
>> > >>> Cluster: MPICH2
>> > >>> Fedora 8
>> > >>>
>> > >>> 2008/11/26 Anthony Chan <chan at mcs.anl.gov>
>> > >>>
>> > >>>>
>> > >>>> If processes A and B are launched by mpiexec, the time taken
>> > >>>> by mpiexec should be max(t1,t2), not t1 + t2. As Rajeev said,
>> > >>>> calling fflush(stdout) after each printf() is the fastest way
>> > >>>> to get each process's stdout printed to your console.
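>> > >>>>
>> > >>>> For example (a sketch; "rank" and "total" stand in for whatever
>> > >>>> your program actually prints):
>> > >>>>
>> > >>>>     printf("%d:Total: %lld\n", rank, total);
>> > >>>>     fflush(stdout);  /* flush so this rank's line appears immediately */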
>> > >>>>
>> > >>>> A.Chan
>> > >>>> ----- "Waruna Ranasinghe" <warunapww at gmail.com> wrote:
>> > >>>>
>> > >>>> > Hi Rajeev,
>> > >>>> > Actually this is not what I'm talking about.
>> > >>>> > Say there are processes A and B
>> > >>>> > Process A utilizes 100% of the CPU while it runs; its answer
>> > >>>> > appears in t1 seconds.
>> > >>>> > Process B also utilizes 100% of the CPU while it runs; its
>> > >>>> > answer appears in t2 seconds.
>> > >>>> >
>> > >>>> > When I run both processes A and B at the same time, both
>> > >>>> > answers appear after t1 + t2 seconds, whereas I want to get
>> > >>>> > process A's answer first (here process A is submitted before
>> > >>>> > process B).
>> > >>>> >
>> > >>>> > Is there anything I can do to make this happen?
>> > >>>> >
>> > >>>> > Thank You,
>> > >>>> > Waruna
>> > >>>> >
>> > >>>> > 2008/11/26 Rajeev Thakur <thakur at mcs.anl.gov>
>> > >>>> >
>> > >>>> > > If you are referring to the output of "printf", you can try
>> > >>>> > > adding an fflush(stdout) after the printf. You don't have much
>> > >>>> > > control over the order in which it is printed from different
>> > >>>> > > processes.
>> > >>>> > >
>> > >>>> > > Rajeev
>> > >>>> > >
>> > >>>> > > ------------------------------
>> > >>>> > > *From:* mpich-discuss-bounces at mcs.anl.gov [mailto:
>> > >>>> > > mpich-discuss-bounces at mcs.anl.gov] *On Behalf Of *Waruna
>> > Ranasinghe
>> > >>>> > > *Sent:* Tuesday, November 25, 2008 1:35 AM
>> > >>>> > > *To:* mpich-discuss at mcs.anl.gov
>> > >>>> > > *Subject:* [mpich-discuss] Clarification needed on running
>> > >>>> > > processes on MPICH2
>> > >>>> > >
>> > >>>> > > Hi all,
>> > >>>> > > I submitted 3 processes at the same time using mpiexec. The
>> > >>>> > > results of each process appear only after all the processes
>> > >>>> > > have finished (I guess that's the way MPICH schedules it).
>> > >>>> > > What if I want to get the result of the process that was
>> > >>>> > > submitted first, first (like first in, first out)?
>> > >>>> > >
>> > >>>> > > I use an MPICH2 cluster on Fedora 8 (with mpd).
>> > >>>> > >
>> > >>>> > > Thank you.
>> > >>>> > > Waruna Ranasinghe
>> > >>>> > >
>> > >>>> > >
>> > >>>>
>> > >>>
>> > >>>
>> > >>
>> > >
>>
>
>