[Darshan-users] Using darshan for non-MPI programs

Kevin Harms harms at alcf.anl.gov
Wed Jun 5 14:05:56 CDT 2013


  I'd like to start by saying I'm not claiming this is a good idea, just an idea. You could build a shim that wraps main and is listed in the LD_PRELOAD variable along with darshan. The main wrapper would call PMPI_Init and then call the real main. You could either call PMPI_Finalize after the real main returns, or use atexit() to register a helper function that calls PMPI_Finalize(), to catch cases where applications exit somewhere other than the end of main. (Or maybe register PMPI_Finalize directly?) I'm also assuming you can intercept main, which I don't know is true. Then in LD_PRELOAD you list your library, darshan, and maybe MPI, although I think the dynamic loader should load MPI when darshan.so is loaded without naming it explicitly.
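
  Roughly what I have in mind, as a completely untested sketch (file and library names are placeholders, and depending on which symbol darshan's preloaded wrapper actually intercepts you might need MPI_Init instead of PMPI_Init):

/* mainshim.c (hypothetical name)
 * build:  mpicc -shared -fPIC -o libmainshim.so mainshim.c -ldl
 * run:    LD_PRELOAD="/path/to/libmainshim.so /path/to/libdarshan.so" ./posix_app
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdlib.h>
#include <mpi.h>

typedef int (*main_fn)(int, char **, char **);
typedef int (*start_fn)(main_fn, int, char **, void (*)(void),
                        void (*)(void), void (*)(void), void *);

static main_fn real_main;

/* registered via atexit() so the log is also written when the
 * application calls exit() somewhere other than the end of main */
static void shim_finalize(void)
{
    int finalized = 0;
    PMPI_Finalized(&finalized);
    if (!finalized)
        PMPI_Finalize();
}

static int shim_main(int argc, char **argv, char **envp)
{
    PMPI_Init(&argc, &argv);    /* or MPI_Init, see above */
    atexit(shim_finalize);
    return real_main(argc, argv, envp);
}

/* glibc calls this to start the program; hand it our main wrapper instead */
int __libc_start_main(main_fn main, int argc, char **argv,
                      void (*init)(void), void (*fini)(void),
                      void (*rtld_fini)(void), void *stack_end)
{
    start_fn real_start = (start_fn)dlsym(RTLD_NEXT, "__libc_start_main");
    real_main = main;
    return real_start(shim_main, argc, argv, init, fini, rtld_fini, stack_end);
}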

  Something to try, anyway.

kevin

On Jun 5, 2013, at 1:22 AM, Michael Kluge <michael.kluge at tu-dresden.de> wrote:

> Hi Julian,
> 
> The problem is that I'm looking for a tool that works with all dynamic binaries, even those for which I don't have the source and that do not use MPI. Is there no hope for darshan with this type of binary?
> 
> 
> Regards, Michael
> 
> On 05.06.2013 00:24, Julian Kunkel wrote:
>> Hi
>> With ld's --wrap option I think you can provide a main() replacement which
>> calls (P)MPI_Init(), then the real main, and (P)MPI_Finalize() afterwards.
>> 
>> This way you only have to relink POSIX apps, wrapping them against your
>> "darshaning" main() function.
>> 
>> Regards
>> Julian
>> 
>> -- Sent from Android. Sorry for typos.
>> 
>> On 04.06.2013 19:03, "Michael Kluge" <michael.kluge at tu-dresden.de> wrote:
>> 
>>    Hi Kevin,
>> 
>>    If I really want to use darshan for POSIX programs on single hosts,
>>    then it should be fairly easy to emulate the semantics of the MPI
>>    calls through a couple of pthread calls. It might even be possible
>>    to replace the few MPI functions used in darshan with a generic
>>    abstraction layer and create an MPI and a POSIX-only implementation
>>    of that layer. Just thinking ...
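>> 
>>    Just to illustrate the idea (names are made up, and the real set of
>>    MPI calls inside darshan is certainly larger than this):
>> 
>>    #ifdef USE_MPI
>>    #include <mpi.h>
>>    static int    layer_rank(void)  { int r; MPI_Comm_rank(MPI_COMM_WORLD, &r); return r; }
>>    static int    layer_size(void)  { int s; MPI_Comm_size(MPI_COMM_WORLD, &s); return s; }
>>    static double layer_wtime(void) { return MPI_Wtime(); }
>>    #else   /* POSIX-only: behave like a single-rank job */
>>    #include <stddef.h>
>>    #include <sys/time.h>
>>    static int    layer_rank(void)  { return 0; }
>>    static int    layer_size(void)  { return 1; }
>>    static double layer_wtime(void)
>>    {
>>        struct timeval tv;
>>        gettimeofday(&tv, NULL);
>>        return tv.tv_sec + tv.tv_usec / 1e6;
>>    }
>>    #endif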
>> 
>> 
>>    Regards, Michael
>> 
>>    On 04.06.2013 18:49, Kevin Harms wrote:
>> 
>>        Michael,
>> 
>>            Darshan itself needs MPI. It uses some collective calls and
>>        file routines when generating the log. The log is written when
>>        MPI_Finalize is called. If you're going to modify your POSIX
>>        program, then it seems easier to just instrument your main()
>>        with MPI_Init() and MPI_Finalize().
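>> 
>>        For example (just a sketch):
>> 
>>        #include <mpi.h>
>> 
>>        int main(int argc, char **argv)
>>        {
>>            MPI_Init(&argc, &argv);   /* darshan starts monitoring here */
>>            /* ... existing POSIX I/O code ... */
>>            MPI_Finalize();           /* darshan writes its log here */
>>            return 0;
>>        }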
>> 
>>            Using:
>> 
>>        void __attribute__ ((constructor)) PMPI_Init()
>> 
>>            I think would work to initialize darshan, but you still need
>>        to call PMPI_Finalize at some point to get the log.
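>> 
>>        One way to spell that out, e.g. as a small library that gets
>>        preloaded next to darshan, untested (names are made up, and whether
>>        a library destructor runs at the right time relative to darshan's
>>        own cleanup would need checking):
>> 
>>        #include <mpi.h>
>> 
>>        void __attribute__ ((constructor)) bootstrap_darshan(void)
>>        {
>>            PMPI_Init(NULL, NULL);      /* MPI-2 allows NULL arguments */
>>        }
>> 
>>        void __attribute__ ((destructor)) shutdown_darshan(void)
>>        {
>>            PMPI_Finalize();            /* this is when the log is written */
>>        }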
>> 
>>        kevin
>> 
>>        On Jun 4, 2013, at 8:37 AM, Michael Kluge
>>        <michael.kluge at tu-dresden.de> wrote:
>> 
>>            Dear list,
>> 
>>            If I understand the documentation correctly, darshan collects
>>            profiles of MPI programs only because it initializes itself
>>            from a wrapper around MPI_Init(). Is this the only reason why
>>            pure POSIX programs (possibly multithreaded) would not work
>>            with darshan?
>> 
>>            Is there any chance that an approach that uses
>> 
>>            void __attribute__ ((constructor)) my_lib_init()
>>            (see:
>>            http://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html)
>> 
>>            from gcc would work as well for pure POSIX programs?
>> 
>> 
>>            Regards, Michael
>> 
>>            --
>>            Dr.-Ing. Michael Kluge
>> 
>>            Technische Universität Dresden
>>            Center for Information Services and
>>            High Performance Computing (ZIH)
>>            D-01062 Dresden
>>            Germany
>> 
>>            Contact:
>>            Willersbau, Room WIL A 208
>>            Phone: (+49) 351 463-34217
>>            Fax: (+49) 351 463-37773
>>            e-mail: michael.kluge at tu-dresden.de
>>            WWW: http://www.tu-dresden.de/zih
>> 
>> 
> 
> -- 
> Dr.-Ing. Michael Kluge
> 
> Technische Universität Dresden
> Center for Information Services and
> High Performance Computing (ZIH)
> D-01062 Dresden
> Germany
> 
> Contact:
> Willersbau, Room A 208
> Phone:  (+49) 351 463-34217
> Fax:    (+49) 351 463-37773
> e-mail: michael.kluge at tu-dresden.de
> WWW:    http://www.tu-dresden.de/zih
> 
