[mpich-discuss] MPE tracing

Georges Markomanolis georges.markomanolis at ens-lyon.fr
Thu Jun 24 15:13:16 CDT 2010


Dear Anthony,

Thanks for your answers; yes, these are easy changes and they will work.
Thanks a lot.

Best regards,
Georges

chan at mcs.anl.gov wrote:
> ----- "Georges Markomanolis" <georges.markomanolis at ens-lyon.fr> wrote:
>
>> If I understand right, the trick with the printf works when the output
>> of the traces is stdout. Moreover, I have changed it as well: I save one
>> file per process and I am buffering the data in order to avoid
>> writing to the hard disk many times. The overhead is reduced enough
>> this way. I also tried to test MPI_Pcontrol in my application,
>> but it didn't work because it has to be implemented in the library.
>> Maybe I will try it more if I have time. I don't think that it is
>> easy to convert the output of the logging to my format. Of course, with
>> the tracing libraries I can do it directly by changing trace_mpi_core.c,
>> and it is really simple. Is there any way for the logging as well?
>> For example, in the tracing libraries, instead of having: [1] Starting
>> MPI_Send with count = 131220, dest = 0, tag = 1..
>> I save to the trace: p1 send to p0 131220*sizeof(datatype)
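>>
>> (The buffering amounts to roughly this; an illustrative sketch, not my
>> actual code, where myrank comes from MPI_Comm_rank:)
>>
>>     /* needs <stdio.h>; one trace file per process, flushed rarely */
>>     char fname[64];
>>     snprintf(fname, sizeof(fname), "trace.%d", myrank);
>>     FILE *tracefile = fopen(fname, "w");
>>     setvbuf(tracefile, NULL, _IOFBF, 1 << 20);  /* 1 MB buffer, fewer disk writes */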
>
> What I have in mind is essentially using the TRACE_PRINTF macro (which
> you've modified to print to a local file instead of stdout) to do the printf
> trick.  Also, the MPI_Pcontrol support I have in mind is essentially
> adding a switch in the TRACE_PRINTF macro so the printing can be turned on
> or off by MPI_Pcontrol.  This should be easy to do.
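>
> Roughly like this (an untested sketch; the real TRACE_PRINTF in
> trace_mpi_core.c is more involved):
>
>     #include <stdio.h>
>     #include <mpi.h>
>
>     static int trace_on = 1;                  /* toggled by MPI_Pcontrol */
>
>     #define TRACE_PRINTF(fp, ...) \
>         do { if (trace_on) fprintf((fp), __VA_ARGS__); } while (0)
>
>     int MPI_Pcontrol(const int level, ...)
>     {
>         trace_on = (level != 0);              /* 0 = off, nonzero = on */
>         return MPI_SUCCESS;
>     }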
>
> MPE logging records events into a local memory buffer which is periodically
> flushed to a local file.  The local file from each process is then merged
> into the final clog2 file during MPI_Finalize().  You can turn off the
> merging by setting the environment variable MPE_DELETE_LOCALFILE to "no"
> (the details are in MPE's README); then you will see the local clog2 files.
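>
> For example, with MPICH's mpiexec you can pass the variable with -genv:
>
>     mpiexec -genv MPE_DELETE_LOCALFILE no -n 4 ./a.out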
>
>> Moreover, I am interested in the time for acquiring the traces, and
>> this way the overhead seems to be less than 3%. Finally, for the cases
>> where the trace files are really big, I am trying to avoid having only
>> one trace file as in logging. So if it were possible to have one
>> logging file per process, and moreover to be able to save it in my format
>> before the normal logging files are written, that would be nice, but
>> maybe it needs a lot of time compared to implementing MPI_Pcontrol
>> for the tracing libraries.
>> The programs that I profile use only MPI_COMM_WORLD, but it would be
>> nice to have the patch for the general case.
>>
>> Best regards,
>> Georges
>>
>>
>> chan at mcs.anl.gov wrote:
>>
>>> The equivalent of customized logging for tracing is to add a
>>> printf("Starting of block A...") before the selected block of code
>>> and do another printf("Ending of block A...") after the block of code.
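>>>
>>> For example (illustrative; buf and n are whatever the block already uses):
>>>
>>>     printf("Starting of block A...\n");
>>>     MPI_Bcast(buf, n, MPI_INT, 0, MPI_COMM_WORLD);
>>>     /* ... rest of block A ... */
>>>     printf("Ending of block A...\n");
>>>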
>>> But that still produces a lot of output.  I think what should be done
>>> is to have MPI_Pcontrol() implemented in the tracing library, so that
>>> one can turn on or off tracing with MPI_Pcontrol.  I think the feature
>>> can be easily added to trace_mpi_core.c ...  But it is not there yet.
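>>>
>>> Once that exists, an application could bracket a region like this
>>> (setup_phase and solve_phase are just placeholders):
>>>
>>>     MPI_Pcontrol(0);    /* tracing off */
>>>     setup_phase();
>>>     MPI_Pcontrol(1);    /* tracing back on for the interesting part */
>>>     solve_phase();
>>>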
>>> Given that the tracing library does not have MPI_Pcontrol support, you may
>>> want to use the logging library with MPI_Pcontrol instead.  BTW, do the
>>> MPI programs being profiled use any communicator besides MPI_COMM_WORLD?
>>> If the answer is yes, your version of the logging library may contain
>>> bugs in MPI_Pcontrol when used with MPI communicator functions.  Let
>>> me know, I will give you a patch...
>>>
>>> A.Chan
>>>
>>> ----- "Georges Markomanolis" <george at markomanolis.com> wrote:
>>>
>>>> Thank you all for your answers,
>>>>
>>>> I used clog2print because I wanted to see only the traces, not the
>>>> visualization, as I need to convert the MPE traces into another format
>>>> that I need. Moreover, I changed the output of the tracing library in
>>>> order to have the format that I mentioned, and it seems to work very
>>>> nicely. One more question: is it possible to have selective
>>>> instrumentation with the tracing library? I mean something similar
>>>> to the customized logging, but for tracing; I just need to know which
>>>> MPI calls are executed by a block of code.
>>>>
>>>> Thanks a lot,
>>>> Best regards,
>>>> Georges
>>>>
>>>> chan at mcs.anl.gov wrote:
>>>>
>>>>> ----- "Georges Markomanolis" <georges.markomanolis at ens-lyon.fr> wrote:
>>>>>
>>>>>> Dear all,
>>>>>>
>>>>>> I want to ask you about the MPE libraries. While I use the logging
>>>>>> library for an application, I dump the clog file with the command
>>>>>> clog2print (is this the right way to see the content of the files?).
>>>>> No, you should use the command jumpshot to "view" a clog2 file.
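>>>>>
>>>>> For example (assuming the merged logfile is named app.clog2):
>>>>>
>>>>>     jumpshot app.clog2
>>>>>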
>>>>>> Although my application executes MPI_Bcast calls, I can't see them.
>>>>>> From the manual I read that nested calls, like MPI_Bcast, are not
>>>>>> taken into account. Is this why I can't see any MPI_Bcast?
>>>>> No, nested calls are probably irrelevant here.
>>>>>
>>>>>> Moreover, when I link with the tracing library I can see the
>>>>>> MPI_Bcast calls. So in the case that I want to know about all the
>>>>>> MPI calls, should I use tracing rather than logging? I don't need
>>>>>> the timestamps or the visualization.
>>>>> If you have access to the source code and you don't care about the
>>>>> order of the MPI calls being executed, you can simply grep for strings
>>>>> prefixed with MPI_, e.g.
>>>>>
>>>>> grep "MPI_[A-Za-z_]*(" *.c
>>>>>
>>>>> If you still need to do logging, you can send me how you linked/compiled
>>>>> to enable MPE logging (I suspect something is wrong there).
>>>>>
>>>>> A.Chan
>>>>>> Thanks a lot,
>>>>>> Best regards,
>>>>>> Georges
>>>       
>


