[Darshan-users] Darshan Intel MPI, SLURM, Fortran

Alain REFLOCH alain.refloch at onera.fr
Fri Apr 28 08:03:32 CDT 2017


I am doing my first tests with Darshan (version darshan-3.1.4).
The install went fine.
I use the Intel compiler v15 and Intel MPI; there is only one warning at compile time:

lib/darshan-mpiio.c(230): warning #147: declaration is incompatible with "int
MPI_File_open(MPI_Comm={int}, const char *, int, MPI_Info={int}, MPI_File *)" (declared at line 192
of "/opt/software/common/intel/impi/5.1.3.258/include64/mpio.h")

Resolved by adding the option -DHAVE_MPIIO_CONST.
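
If it helps, the conflict is simply between the const and non-const forms of the MPI_File_open prototype; schematically, the wrapper declaration is chosen with something like this (a sketch, not the exact Darshan source):

/* sketch: with -DHAVE_MPIIO_CONST the wrapper prototype matches the
   const-qualified one in Intel MPI's mpio.h; otherwise the older
   non-const form is declared, which triggers warning #147 */
#ifdef HAVE_MPIIO_CONST
int MPI_File_open(MPI_Comm comm, const char *filename, int amode,
                  MPI_Info info, MPI_File *fh);
#else
int MPI_File_open(MPI_Comm comm, char *filename, int amode,
                  MPI_Info info, MPI_File *fh);
#endif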

I tested with the file io-sample.c in the directory darshan-3.1.4/darshan-test;
everything is OK.

For the install I did this:
./configure --prefix=$HOME/Darshan --with-mem-align=8 \
    --with-log-path=$SCRATCHDIR/Darshan \
    --with-jobid-env=SLURM_JOB_ID \
    --with-zlib=/opt/software/occigen/libraries/zlib/1.2.8/intel/17.0/nompi \
    CC='mpiicc -DHAVE_MPIIO_CONST'

I am on a Bull supercomputer with Haswell processors, with SLURM as the batch system. I have used
SLURM_JOB_ID or NONE for the job ID environment variable.
It's OK: I get a log file in the directory ....../year/month/day.

Then I tried it on my big program. Everything is OK without LD_PRELOAD, but with LD_PRELOAD
there is no message from Darshan and the job is killed by SLURM without any message.
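
For context, the preload is set in the SLURM batch script in essentially this way (job name, node count, and executable name are just placeholders):

#!/bin/bash
#SBATCH -J darshan_test        # placeholder job name
#SBATCH -N 4                   # placeholder node count

# preload the Darshan runtime library installed under --prefix=$HOME/Darshan
export LD_PRELOAD=$HOME/Darshan/lib/libdarshan.so

srun ./my_code                 # placeholder executable
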
The main program is in Fortran; the calls to mpi_init and mpi_finalize are in Fortran.
But there is a part of the code in C/C++ that handles the mesh file with MPI-IO (all the
open, write, read, and close operations for this file are in C/C++, roughly as sketched below), and it is the Darshan
information on this file that interests me (not the ASCII files opened from Fortran). The code also contains
a parallel partitioner for the domain decomposition (ParMetis); that part is not recompiled.
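
To be concrete, the C/C++ part does essentially this kind of MPI-IO on the mesh file (a simplified sketch, not the real code; the names are made up):

#include <mpi.h>

/* simplified sketch of the C side: collective write of one mesh block,
 * i.e. exactly the MPI-IO calls I would like Darshan to record */
void write_mesh_block(MPI_Comm comm, const char *fname,
                      const double *block, int count, MPI_Offset offset)
{
    MPI_File fh;
    MPI_File_open(comm, fname, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, offset, block, count, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
}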

My question is: is Darshan OK with this configuration (main in Fortran, partial recompilation
of the MPI-IO part written in C)?

Best regards,
Alain





