[Darshan-users] Darshan and Intel MPI (x86_64)

Phil Carns carns at mcs.anl.gov
Mon Feb 20 15:08:23 CST 2012


The program is calling MPI_INIT and MPI_FINALIZE, though, right?  Fortran 
I/O should be captured just fine, but libdarshan.so still relies on the 
MPI initialization and finalization routines to start the tool up and 
write the log file.
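
Just to illustrate the shape Darshan needs to see, here is a rough C 
sketch of what your Fortran program presumably already does (the file 
name and the write are just placeholders):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        /* Darshan starts itself up inside MPI_Init */
        MPI_Init(&argc, &argv);

        /* any POSIX/stdio I/O done in between is what gets counted;
         * Fortran I/O ends up in the same place */
        FILE *f = fopen("output.dat", "w");   /* placeholder file */
        if (f) {
            fprintf(f, "hello\n");
            fclose(f);
        }

        /* the Darshan log is written out during MPI_Finalize */
        MPI_Finalize();
        return 0;
    }

If either of those MPI calls is missing, or the program exits before 
MPI_FINALIZE, no log file will ever appear.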

Assuming that the code is calling both of the above routines, it may be 
that some additional step is needed with Intel MPI to use a library that 
relies on the MPI profiling (PMPI) interface.  When Darshan is preloaded 
with LD_PRELOAD, it intercepts the POSIX calls (which is where the 
Fortran I/O ends up in this case) directly, but for the MPI calls it 
still goes through the PMPI interface.
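
Concretely, the usual recipe is to leave the executable alone and just 
set LD_PRELOAD for the launched MPI processes, something along these 
lines (the Darshan install path here is a placeholder, and how 
environment variables get forwarded to the ranks depends on your 
launcher, so please check its documentation):

    export LD_PRELOAD=/path/to/darshan/lib/libdarshan.so
    mpirun -np 4 ./your_fortran_app

or, if the launcher doesn't forward LD_PRELOAD on its own, passing it 
explicitly (Intel MPI's mpirun is supposed to accept a -genv option for 
this, though I haven't tried it myself):

    mpirun -genv LD_PRELOAD /path/to/darshan/lib/libdarshan.so -np 4 ./your_fortran_app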

Have you tried using any more generic profiling libraries (like mpiP or 
mpitrace) by any chance?

-Phil

On 02/20/2012 02:55 PM, Harr, Cameron Contractor, SAIC wrote:
> Interface is all Fortran calls, not using MPI_IO.
>
> On 02/20/2012 11:48 AM, Kevin Harms wrote:
>>     what interfaces are you using for I/O? Fortran calls, MPI_File_* routines, something else?
>>
>> kevin
>>
>> On Feb 20, 2012, at 1:42 PM, Harr, Cameron Contractor, SAIC wrote:
>>
>>> Thanks Phil. Using LD_PRELOAD, I can get sample output with one of the test C programs that come with the Darshan source; however, I still don't get any output from my executable. Let me ask a basic question: my app is all Fortran; do I need to compile any Fortran support into Darshan? I see that it should support Fortran, but perhaps I'm doing something wrong.
>>>
>>> On 02/20/2012 10:44 AM, Phil Carns wrote:
>>>> I don't have any first-hand experience with Intel MPI, but in principle it should work.
>>>>
>>>> Does your MPI compiler produce dynamic executables by default?  If so, can you try leaving your executable/binary unmodified and loading the libdarshan.so library at runtime using the LD_PRELOAD environment variable, rather than linking it directly into your application?  There is some more detail about this approach on this page:
>>>>
>>>> http://wiki.mcs.anl.gov/Darshan/index.php/Dynamic_linking
>>>>
>>>> thanks,
>>>> -Phil
>>>>
>>>> On 02/20/2012 01:28 PM, Harr, Cameron Contractor, SAIC wrote:
>>>>> I'm trying to get Darshan running on a RedHat/x86_64 cluster with a few different flavors of MPI available. Most recently I've been trying Intel MPI and gfortran (the Intel compilers didn't seem to work), and after a fair bit of tweaking I got the executable recompiled with libdarshan.so successfully linked in. In my job run script I verify that libdarshan is linked in, and the job runs to completion; however, no output is generated. I also tried setting DARSHAN_INTERNAL_TIMING=1, but still no output. Should I be able to use Intel MPI, and/or is there a place that documents which MPI stacks are supported?
>>>>>
> _______________________________________________
> Darshan-users mailing list
> Darshan-users at lists.mcs.anl.gov
> https://lists.mcs.anl.gov/mailman/listinfo/darshan-users


