[mpich-discuss] measuring time in mpi program
Jain, Rohit
Rohit_Jain at mentor.com
Tue Mar 1 18:59:32 CST 2011
MPE itself doesn't require huge memory, but the clog2 dump files do.
- In our executable runs, the dump file typically grows from 2 GB to 100 GB.
- A separate build is required to use MPE.
- Another problem is that Jumpshot doesn't provide the fine-grained communication details specific to our application; we may need to add more annotations ourselves.
So we are looking for better tools in this area.
We want to collect both overall run statistics and time-based information, which may require two different tools.
Regards,
Rohit
-----Original Message-----
From: mpich-discuss-bounces at mcs.anl.gov [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Anthony Chan
Sent: Monday, February 28, 2011 2:32 PM
To: mpich-discuss at mcs.anl.gov
Subject: Re: [mpich-discuss] measuring time in mpi program
MPE does not require "huge memory" to dump the clog2 file. The MPE memory
buffer per process is 8MB by default, but there are environment variables
to change that. I have never measured the actual MPE memory overhead; I would
think it is on the order of 10MB, not huge by modern standards. Depending on
how long your program runs, the resulting clog2 file could be very big,
but that is disk space, not memory. A simple C program in
<mpich2-installdir>/share/examples_logging/log_cost.c may give you some idea
of the logging overhead.
It depends on what profiling info you are looking for. A statistics-collection
tool could be very useful. However, statistics, being a data reduction,
throw away a lot of detailed time-variation info that could be useful in
some analyses. A statistics-collection tool in general has lower overhead than
timeline-tracing tools like MPE, but it is still not free. Depending on how
sophisticated the collected statistics are, it could impose significant
runtime overhead (even more so than the logging) in some cases.
A.Chan
----- Original Message -----
> Hi Gurav,
>
> Jumpshot requires building a separate exec with the -mpe option, takes
> huge memory to dump the clog2 file, and has performance overhead. Of
> course, it gives more information (a lot of it is hard to relate to
> fine-grained statistics), but it has its cost too.
>
> So, we have lightweight time-statistics collection code that provides
> what we need for first-level debugging.
> I recently came across a tool called IPM, which is supposedly very
> lightweight and provides a lot of statistics. We haven't used it yet,
> though.
>
> Regards,
> Rohit
>
>
> -----Original Message-----
> From: mpich-discuss-bounces at mcs.anl.gov
> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Mandar Gurav
> Sent: Monday, February 28, 2011 1:03 AM
> To: mpich-discuss at mcs.anl.gov
> Subject: Re: [mpich-discuss] measuring time in mpi program
>
> I think using Jumpshot will help you a lot more than you expect.
> Please consult this page, and don't worry about installing it: it is
> generally installed by default.
>
> http://www.mcs.anl.gov/research/projects/perfvis/software/viewers/index.htm
>
>
>
> On Fri, Feb 25, 2011 at 8:52 PM, Jain, Rohit <Rohit_Jain at mentor.com>
> wrote:
> > Thanks.
> > Rohit
> >
> > On Feb 25, 2011, at 20:23, "Gus Correa" <gus at ldeo.columbia.edu>
> > wrote:
> >
> >> Gus Correa wrote:
> >>> Jain, Rohit wrote:
> >>>> Is there any performance data on using MPI_Wtime()? How much time
> >>>> does it typically take?
> >>>>
> >>>> We have a lot of 'wait-send' and 'wait-receive' code in many
> >>>> places, and we need to measure the time spent in these sections.
> >>>> We currently use the gettimeofday() function to measure this time.
> >>>> Would MPI_Wtime be faster than that?
> >>>>
> >>>> Regards,
> >>>> Rohit
> >>>>
> >>>>
> >>>> -----Original Message-----
> >>>> From: mpich-discuss-bounces at mcs.anl.gov
> >>>> [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Gus
> >>>> Correa
> >>>> Sent: Friday, February 25, 2011 10:49 AM
> >>>> To: Mpich Discuss
> >>>> Subject: Re: [mpich-discuss] measuring time in mpi program
> >>>>
> >>>> hatem Elshazly wrote:
> >>>>> hey guys,
> >>>>>
> >>>>> does anybody know how to measure the execution time of an MPI
> >>>>> program?
> >>>>> is there any option we can append to the mpirun/mpiexec command
> >>>>> to measure time?
> >>>>> is there any time command that measures clock-cycle time,
> >>>>> execution time, etc.?
> >>>>>
> >>>>>
> >>>>> thank you very much
> >>>>> hatem
> >>>>>
> >>>>>
> >>>>>
> >>>>
> >>>> Hi Hatem
> >>>>
> >>>> The standard outline is something like this:
> >>>>
> >>>> ... initialize your MPI environment ...
> >>>> start=MPI_Wtime()
> >>>> ... do your work (including MPI calls)...
> >>>> end=MPI_Wtime()
> >>>> if (myrank .eq. 0) then
> >>>>    print *, "Time spent (in seconds) is: ", end - start
> >>>> endif
> >>>> ... finalize MPI ...
> >>>>
> >>>> (You could also print the results from all ranks, if you want.)
> >>>>
> >>>> In my experience, performance tends to be quite variable if you
> >>>> don't have a 'critical mass' of "do your work", i.e., if you do
> >>>> just a couple of send/recv calls and get out. It works better if
> >>>> your MPI calls are in a loop inside "do your work".
> >>>>
> >>>> I hope this helps.
> >>>> Gus Correa
> >>>>
> >>>> PS - It looks like you managed to troubleshoot your network
> >>>> problems.
> >>>> I hope so.
> >>>> You never got back to the list on that subject.
> >>> Hi Rohit
> >>> I took a quick look at src/mpi/timer/mpidtime.c in MPICH2 1.3.1.
> >>> You can check for yourself.
> >>> It seems to me that MPI_Wtime wraps gettimeofday.
> >>> (For portability, perhaps?)
> >>> The MPICH2 developers may clarify.
> >>> Gus Correa
> >>
> >> Oops!
> >> I just saw that Pavan already answered your question, Rohit.
> >> Gus Correa
> >>
> >> _______________________________________________
> >> mpich-discuss mailing list
> >> mpich-discuss at mcs.anl.gov
> >> https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss
>
>
>
> --
> Mandar Gurav
> http://www.mandargurav.org