[MPICH] Problem when enabling ADIOI_MPE_LOGGING

Anthony Chan chan at mcs.anl.gov
Sat Dec 1 22:42:47 CST 2007



On Sat, 1 Dec 2007, Christina Patrick wrote:

> Hi Anthony,
>
> Thank you very much for the detailed explanation above.
>
> When I logged the IDs  ADIOI_MPE_write_a and  ADIOI_MPE_write_b, it
> gave me the following:
> ADIOI_MPE_write_a     = 604 ADIOI_MPE_write_b     = 605
>
> The file "src/mpi/romio/adio/common/ad_init.c" logs the write events as follows:
> MPE_Describe_state( ADIOI_MPE_write_a, ADIOI_MPE_write_b, "write", "blue" );
>
> Inside jumpshot, I can see a "write" event inside a
> "MPI_File_write_at_all" (light blue block inside a dark blue block).
>
> My questions are:
> 1. What are events 1, 2, 3 & 4 (I logged all the eventIDs in ad_init.c
> and they range from 600 to 617. I don't have any eventIDs numbered 1,
> 2, 3 & 4)?

The eventIDs, ADIOI_MPE_write_a and ADIOI_MPE_write_b, are defined/fetched
from the MPE logging library.  The exact IDs depend on what is available
in the MPE library, so you shouldn't worry about the exact numbers.
EventIDs 1, 2, 3 and 4 have already been assigned by MPE for MPI states.  See
the custom logging section of mpich2/src/mpe2/README and/or
<mpich2-install-dir>/src/cpilog.c to understand how the MPE routines
are used.
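
In case it helps, here is a minimal sketch of that custom-logging pattern,
modeled on the cpilog.c example (the state name "my_compute" and its color
are just placeholders for illustration):

   #include "mpi.h"
   #include "mpe.h"

   int main( int argc, char *argv[] )
   {
       int ev_a, ev_b;     /* start/stop eventIDs handed out by MPE */

       MPI_Init( &argc, &argv );

       /* MPE chooses the actual ID numbers (e.g. 604/605 in your run),
          so never hardcode them */
       MPE_Log_get_state_eventIDs( &ev_a, &ev_b );
       MPE_Describe_state( ev_a, ev_b, "my_compute", "red" );

       MPE_Log_event( ev_a, 0, NULL );   /* state begins */
       /* ... work that should show up as a "my_compute" block ... */
       MPE_Log_event( ev_b, 0, NULL );   /* state ends */

       MPI_Finalize();   /* with -mpe=mpilog the clog2 is written here */
       return 0;
   }

Compile and link it with mpicc -mpe=mpilog, as in the steps below.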

> 2. Can I explicitly get the timing between events 604 & 605?

You can right-click on the "write" state (the light blue rectangle)
in jumpshot to get a drawable popup info box which shows the
duration.  For more information on how to use jumpshot, click the
help button in jumpshot or see

http://www-unix.mcs.anl.gov/perfvis/software/viewers/jumpshot-4/usersguide.html

This is what the drawable popup info box looks like:

http://www-unix.mcs.anl.gov/perfvis/software/viewers/jumpshot-4/node20.html

A.Chan

>
> Thanks and Warm Regards,
> Christina.
>
> On Dec 1, 2007 1:11 AM, Anthony Chan <chan at mcs.anl.gov> wrote:
>>
>> Just to be clear, ADIOI_MPE_LOGGING only enables client-side logging.
>> Also, I would think that a typical MPICH2 build with ADIOI_MPE_LOGGING
>> enabled will cause a compilation error (so I am not sure what you did).
>>
>> In order to get ADIOI_MPE_LOGGING to work correctly, you need to build
>> mpich2 *TWICE*, with the second build dependent on the first.  The main
>> reason is that MPICH2 builds MPE after ROMIO/ADIO.  In other words,
>> ADIOI_MPE_LOGGING can't be enabled without a valid set of MPE logging
>> header files.  So here are the steps that I would do:
>>
>> 1, untar a new copy of mpich2, say mpich2-1.0.6p1, into directory
>>    <mpich2>
>>
>> 2, build a standard version of mpich2 with all the options that you
>>    normally need, using a VPATH build.  You don't want to do an in-place
>>    build here because we need a clean mpich2 source tree for the 2nd build, i.e.
>>
>>    mkdir <build_std>
>>    cd <build_std>
>>    <mpich2>/configure CC=... CXX=... F77=... F90=... \
>>                       --prefix=<install_std>
>>    make
>>    make install
>>
>>    Notice we have the 1st installation of MPICH2 in <install_std>.
>>
>> 3, build another version of mpich2 with extra CFLAGS that enable
>>    ADIOI_MPE_LOGGING and point to a valid set of MPE logging include
>>    files, i.e. -I<install_std>/include.
>>
>>    mkdir <build_adiolog>
>>    cd <build_adiolog>
>>    <mpich2>/configure CC=... CXX=... F77=... F90=...   \
>>           --prefix=<install_adiolog>                   \
>>           --with-file-system=pvfs2+pvfs+ufs+nfs        \
>>           --with-pvfs2=<pvfs-install-dir>              \
>>           CFLAGS="-DADIOI_MPE_LOGGING -I<install_std>/include"
>>    make
>>    make install
>>
>>    The 2nd installation of MPICH2 will be in <install_adiolog>.
>>
>> 4, Now you can compile your MPIO+PVFS application (a minimal foo.c is
>>    sketched right after these steps) with
>>
>>    <install_adiolog>/bin/mpicc -mpe=mpilog -c foo.c
>>
>>    and link as
>>
>>    <install_adiolog>/bin/mpicc -mpe=mpilog -o <exe> *.o
>>
>>    The mpiXX wrappers from <install_adiolog> must be used with -mpe=mpilog
>>    for linking, because the libmpich.a in <install_adiolog> has references
>>    to MPE symbols.
>>
>> 5, Now run the <exe> as usual.  This will generate a clog2 file with
>>    {read,write,seek,...} states nested inside the MPIO states.  Then invoke
>>    jumpshot on the clog2 file and follow the GUI.
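>>
>>    For reference, foo.c could be a small MPIO program along these lines
>>    (just a hypothetical sketch; the pvfs2: file name is a placeholder
>>    for a file on your PVFS2 volume):
>>
>>    #include "mpi.h"
>>
>>    int main( int argc, char *argv[] )
>>    {
>>        MPI_File   fh;
>>        MPI_Offset offset;
>>        int        rank, i, buf[1024];
>>
>>        MPI_Init( &argc, &argv );
>>        MPI_Comm_rank( MPI_COMM_WORLD, &rank );
>>        for ( i = 0; i < 1024; i++ ) buf[i] = rank;
>>
>>        MPI_File_open( MPI_COMM_WORLD, "pvfs2:/pvfs2-volume/testfile",
>>                       MPI_MODE_CREATE | MPI_MODE_WRONLY,
>>                       MPI_INFO_NULL, &fh );
>>
>>        /* each rank writes its own block; in jumpshot this shows up as
>>           an MPI_File_write_at_all state with the ADIO "write" state
>>           (the PVFS_sys_write call) nested inside it */
>>        offset = (MPI_Offset) rank * sizeof(buf);
>>        MPI_File_write_at_all( fh, offset, buf, 1024, MPI_INT,
>>                               MPI_STATUS_IGNORE );
>>
>>        MPI_File_close( &fh );
>>        MPI_Finalize();
>>        return 0;
>>    }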
>>
>> ADIOI_MPE_LOGGING isn't meant for general users, so no big effort
>> has been spent on making it easy.  Let me know if this works for you,
>> though.
>>
>> A.Chan
>>
>>
>> On Fri, 30 Nov 2007, Christina Patrick wrote:
>>
>>> Hi Everybody,
>>>
>>> I want to monitor the time it takes to complete collective I/O in my
>>> application.  However, I also want the breakdown of the time the
>>> collective call spends calling the PVFS functions as well as doing the
>>> communication.  I saw some ready-made logging present in the file
>>> ad_pvfs2_write.c.  The calls to PVFS_sys_write() are logged with
>>> MPE_Log_event() as below (one excerpt of the code):
>>>
>>> -------------------------------
>>> #ifdef ADIOI_MPE_LOGGING
>>>                    MPE_Log_event( ADIOI_MPE_write_a, 0, NULL );
>>> #endif
>>>                    err_flag = PVFS_sys_write(pvfs_fs->object_ref, file_req,
>>>                                              file_offsets, PVFS_BOTTOM,
>>>                                              mem_req,
>>>                                              &(pvfs_fs->credentials),
>>>                                              &resp_io);
>>> #ifdef ADIOI_MPE_LOGGING
>>>                    MPE_Log_event( ADIOI_MPE_write_b, 0, NULL );
>>> #endif
>>> -------------------------------
>>>
>>> I modified the makefiles to add the debug flag ADIOI_MPE_LOGGING and
>>> enabled the logging.  However, when I run my application and generate
>>> the clog and slog files, I cannot see these events using jumpshot.  I
>>> logged the event IDs when they are created in ad_init.c, and they are
>>> as below from a sample run:
>>>
>>> ADIOI_MPE_write_a     = 604 ADIOI_MPE_write_b     = 605
>>>
>>> What am I missing?  Is there anything else that I need to do?
>>> How exactly should I profile my application, and which tool should I
>>> use to get the breakdown of these times?
>>>
>>> I have never profiled MPI before. I would really appreciate any help
>>> and suggestions that you could offer.
>>>
>>> Thanks and Regards,
>>> Christina.
>>>
>>>
>>
>
>



