[Darshan-users] Different builds for each MPI/compiler
Hedges, Richard M.
hedges1 at llnl.gov
Wed Sep 9 12:45:54 CDT 2015
I have a bit of experience to report. I found that if the application I was trying to study was very sensitive to the compiler and MPI versions, then I needed to build Darshan with those same elements. Codes that were not sensitive to compiler and MPI versions were likewise not so sensitive to the Darshan build.
I didn't dive in to debug the problem; I'm just reporting my observation.
Customer Support and Test - File Systems Project
Development Environment Group - Livermore Computing
Lawrence Livermore National Laboratory
7000 East Avenue, MS L-557
Livermore, CA 94551
v: (925) 423-2699
f: (925) 423-6961
E: richard-hedges at llnl.gov
From: <darshan-users-bounces at lists.mcs.anl.gov> on behalf of Phil Carns <carns at mcs.anl.gov>
Date: Wednesday, September 9, 2015 at 10:38 AM
To: Chih-Song Kuo <csk at ictgmbh.net>, "darshan-users at lists.mcs.anl.gov" <darshan-users at lists.mcs.anl.gov>
Subject: Re: [Darshan-users] Different builds for each MPI/compiler
On 09/09/2015 10:55 AM, Chih-Song Kuo wrote:
From: darshan-users-bounces at lists.mcs.anl.gov on behalf of Phil Carns
Sent: Wednesday, 9 September 2015 15:52
To: darshan-users at lists.mcs.anl.gov
Subject: Re: [Darshan-users] Different builds for each MPI/compiler
On 09/09/2015 06:55 AM, Chih-Song Kuo wrote:
Hello Darshan user community,
I just started using Darshan. Despite it runs pretty well, I wonder if I have to create different builds for each combination of MPI and compiler. I had this concern because you wrote some issues about libfmpich.so. I as a sales consultant of an HPC-system vendor run benchmarks with many combinations of MPI and compilers to advise our customer the best choice.
Your insight will be greatly appreciated.
I have two pieces of information that might be helpful:
a) The underlying compiler generally doesn't matter as much as the MPI library itself. We usually compile Darshan itself with GNU compilers for maximum compatibility; the resulting Darshan library should be compatible with most any compiler/linker.
b) As for the MPI library, the MPICH ABI compatibility initiative (https://www.mpich.org/abi/) should help in theory by making sure that quite a few of the MPI implementations have a compatible binary interface. Our team hasn't tested this capability first hand, so I can't guarantee it will work, but I think the prospects are good.
Thanks. It looks like I might be worrying too much. According to my experience with IPM’s I/O profiling module, different MPIs seem to differ in their use of the “const” modifier, which caused some trouble during compilation.
This would mainly be an issue for mixing MPI 2 and MPI 3 implementations (const correctness is part of the latter spec). I imagine there would be more significant roadblocks to that level of interoperability though. It's probably best to at least stick to one major version or the other for a given Darshan build.
FWIW Darshan does detect the const issue at compile time (for the Darshan library itself), so you can build a version of Darshan for use with either API generation.
Also, on https://www.nersc.gov/users/software/debugging-and-profiling/ipm/, there seems to be a module explicitly built for Intel compilers. I thought Darshan uses a similar profiling technique, so I was asking.
I don't believe this is the case. Darshan's various instrumentation methods (link-time wrappers, PMPI, or preloading) are all compiler-independent to my knowledge. IPM captures more in-depth information so it may rely on additional methods.
I read in some Darshan research papers that the tool has been enabled by default on several NERSC clusters. Do the users there have a wide choice of MPI implementations? Do you have any clue which MPIs are chosen by the user and also get successfully profiled by Darshan?
I don't have that information myself, sorry. I know that Darshan works with each of the compilers that are available (and users do switch those for various reasons), but I'm not sure about the MPI implementations.