[Darshan-users] Different builds for each MPI/compiler
csk at ictgmbh.net
Wed Sep 9 09:55:55 CDT 2015
From: darshan-users-bounces at lists.mcs.anl.gov [mailto:darshan-users-bounces at lists.mcs.anl.gov] On Behalf Of Phil Carns
Sent: Wednesday, September 9, 2015 15:52
To: darshan-users at lists.mcs.anl.gov
Subject: Re: [Darshan-users] Different builds for each MPI/compiler
On 09/09/2015 06:55 AM, Chih-Song Kuo wrote:
Hello Darshan user community,
I just started using Darshan. Although it runs quite well, I wonder whether I have to create a separate build for each combination of MPI library and compiler. I raise this concern because you have written about some issues with libfmpich.so. As a sales consultant for an HPC-system vendor, I run benchmarks with many combinations of MPI libraries and compilers to advise our customers on the best choice.
Your insight will be greatly appreciated.
I have two pieces of information that might be helpful:
a) The underlying compiler generally matters less than the MPI library itself. We usually compile Darshan with GNU compilers for maximum compatibility; the resulting Darshan library should be compatible with almost any compiler/linker.
b) As for the MPI library, the MPICH ABI compatibility initiative (https://www.mpich.org/abi/) should help in theory: it ensures that quite a few of the MPI implementations share a compatible binary interface. Our team hasn't tested this capability first hand, so I can't guarantee it will work, but I think the prospects are good.
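To make point (a) concrete: Darshan's runtime instrumentation works by interposing wrapper functions in front of the application's I/O calls, which is why the compiler used to build the application largely doesn't matter. The following is a minimal, self-contained sketch of that LD_PRELOAD interposition mechanism; the file names and the `puts` wrapper are purely illustrative and are not part of Darshan itself.

```shell
# Build a trivial program and a shim library that intercepts puts().
# This mimics how a preloaded profiling library (like Darshan's shared
# library) wraps functions without recompiling the application.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF

cat > shim.c <<'EOF'
/* Interpose puts(): log the call, then forward to the real libc puts()
 * found via RTLD_NEXT. Darshan applies the same idea to I/O routines. */
#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>
int puts(const char *s) {
    int (*real_puts)(const char *) = dlsym(RTLD_NEXT, "puts");
    fprintf(stderr, "[shim] intercepted puts\n");
    return real_puts(s);
}
EOF

gcc -O2 -o hello hello.c
gcc -O2 -shared -fPIC -o shim.so shim.c -ldl

# Run with the shim preloaded; the wrapper fires before the real call.
LD_PRELOAD=./shim.so ./hello
```

Because the interposition happens at the shared-library level against the system ABI, the same preloaded library serves binaries built by GCC, Intel, or other compilers alike.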
Thanks. It looks like I might be worrying too much. In my experience with IPM's I/O profiling module, different MPI implementations differ in where the "const" qualifier appears in function signatures, which caused some trouble during compilation. Also, https://www.nersc.gov/users/software/debugging-and-profiling/ipm/ lists a module built explicitly for the Intel compilers. I assumed Darshan uses a similar profiling technique, which is why I asked.
I read in some Darshan research papers that the tool is enabled by default on several NERSC clusters. Do the users there have a wide choice of MPI implementations? Do you have any idea which MPIs users choose and which of them are successfully profiled by Darshan?