[Darshan-users] darshan 3.3.0 issues

Snyder, Shane ssnyder at mcs.anl.gov
Fri May 21 17:21:04 CDT 2021


Hi Thomas,

For the first issue you mentioned related to Darshan configure failures for your Intel compiler when building with APMPI support, I think this is due to a bug in our configure scripts. It turns out Intel compilers produce warnings rather than errors for one of our tests, which led to unexpected results that caused APMPI not to be built. We do most of our testing with GNU compilers, and just didn't catch this unexpected behavior. It should be fixed now -- if you're interested, you could try building directly from our main branch on GitHub (https://github.com/darshan-hpc/darshan) to confirm, but we will be sure to include this fix in our next release.

I will try to reproduce the hang that you reported with ParaStationMPI and get back to you soon.

You are correct that APMPI data is not yet included in our PDF reports, unfortunately. You can only obtain the data as raw text using darshan-parser, or you can analyze it manually using PyDarshan (https://pypi.org/project/darshan/). We are in the process of redesigning our analysis tools to use PyDarshan, and will try to make sure they include information on all instrumentation modules, including APMPI, so hopefully this is more useful in the future. Please let us know if you think there's some information missing from these reports that you would like to see, and we can think about how to include it.
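For anyone who wants to try the manual route in the meantime, a minimal sketch of inspecting a log with PyDarshan might look like the following. The log path "example.darshan" is a placeholder, and the `DarshanReport` usage reflects the PyDarshan API as of the 3.3 release, so check the package documentation for your version:

```python
def has_apmpi(module_names):
    """Return True if an APMPI section appears in a list of module names."""
    return any(name.upper().startswith("APMPI") for name in module_names)

def list_modules(log_path):
    """Return the instrumentation modules recorded in a Darshan log."""
    import darshan  # imported lazily so has_apmpi() works without PyDarshan installed
    report = darshan.DarshanReport(log_path, read_all=True)
    return sorted(report.modules.keys())

if __name__ == "__main__":
    try:
        modules = list_modules("example.darshan")  # placeholder log file name
        print(modules, "APMPI present:", has_apmpi(modules))
    except Exception as exc:
        # PyDarshan missing or log unavailable; see pypi.org/project/darshan
        print("Could not read log:", exc)
```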

Thanks!
--Shane
________________________________
From: Darshan-users <darshan-users-bounces at lists.mcs.anl.gov> on behalf of Thomas Breuer <t.breuer at fz-juelich.de>
Sent: Wednesday, May 19, 2021 10:33 AM
To: Harms, Kevin <harms at alcf.anl.gov>; darshan-users at lists.mcs.anl.gov <darshan-users at lists.mcs.anl.gov>
Subject: Re: [Darshan-users] darshan 3.3.0 issues


Hi Kevin,

thanks for the quick reply!

1. The IntelMPI version I mentioned is based on MPICH 3.3. I have attached the log (iimpi_error_log.txt).

2. I have attached the code (hello_world.c) as well; it just writes a couple of lines to stdout.
Compilation command: mpicxx -fopenmp hello_world.c -o hello_world.exe
Note: With OpenMPI/4.1.0rc2, configuring with APMPI and running the application both work. The PDF report seems to be created properly.

3. If I interpret that correctly, the data collected by APMPI is not yet shown in the PDF report?
FYI: A couple of years ago I wrote a Python script that extracts the data from the binary log file with darshan-parser to get the raw data you use to create the PDF report. I was able to reproduce the statistics shown in the PDF and added a few more tables, which helped us gain a deeper understanding of application I/O at that time. Since I have not touched this script for a long time, it might not work anymore. That's why it is also of interest to me to have a look at what PyDarshan offers.
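A script along those lines can be sketched in pure Python. The tab-separated column layout below (module, rank, record id, counter, value, file, mount point, fs type) is an assumption about darshan-parser's default output; adjust the indices if your darshan-parser version lays the fields out differently:

```python
import csv
import io
from collections import defaultdict

def parse_counters(text):
    """Group darshan-parser counter lines into {counter: value} per (module, rank).

    Assumes tab-separated rows of the form
    <module> <rank> <record id> <counter> <value> <file> <mount> <fs>
    and skips '#' comment lines.
    """
    stats = defaultdict(dict)
    for row in csv.reader(io.StringIO(text), delimiter="\t"):
        if not row or row[0].startswith("#"):
            continue  # header/comment lines
        module, rank, _record_id, counter, value = row[:5]
        stats[(module, int(rank))][counter] = float(value)
    return stats

# Small inline sample standing in for `darshan-parser mylog.darshan` output.
sample = (
    "# darshan log version: 3.21\n"
    "POSIX\t0\t42\tPOSIX_OPENS\t3\t/tmp/out.dat\t/tmp\ttmpfs\n"
    "POSIX\t0\t42\tPOSIX_BYTES_WRITTEN\t4096\t/tmp/out.dat\t/tmp\ttmpfs\n"
)
stats = parse_counters(sample)
print(stats[("POSIX", 0)]["POSIX_BYTES_WRITTEN"])  # 4096.0
```

In practice the input text would come from running darshan-parser on a log file rather than from an inline string.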

Thomas

On 19.05.2021 at 16:46, Harms, Kevin wrote:

Thomas,

  1. I'm not sure why Intel MPI is tripping up on the configure check; I'm assuming it is MPI-3 based. Can you send us the config.log output from that one? Maybe we can see why the check fails.

  2. The partial log indicates the run was not finalized cleanly, so those parser errors are expected. I don't know why the finalize hangs. Was this a Fortran hello-world example? I'm not familiar with ParaStationMPI, but since it is based on MPICH, it should work. Can you send the test code and how you built it? We can try it on a system here.

  3. AutoPerf can't be disabled at runtime yet. We have a broader plan to add the ability to enable/disable modules at runtime, but that is not available yet. We have tested AutoPerf with Cray MPI, MPICH 3.3, and OpenMPI, on a generic Linux laptop, a Cray XC-40, and an Nvidia DGX A100. As far as what can be done with APMPI data, we have a Python analysis script based on PyDarshan:

https://xgitlab.cels.anl.gov/AutoPerf/autoperf/-/blob/master/apmpi/util/apmpi-analysis.py

  The counters are also output by darshan-parser. We are still in the process of building more analysis tools based on this work.

kevin

________________________________________
From: Darshan-users <darshan-users-bounces at lists.mcs.anl.gov> on behalf of Thomas Breuer <t.breuer at fz-juelich.de>
Sent: Wednesday, May 19, 2021 6:52 AM
To: darshan-users at lists.mcs.anl.gov
Subject: [Darshan-users] darshan 3.3.0 issues

Dear Darshan Team,

I have installed the latest darshan version (3.3.0) for different MPIs on our HPC JUWELS (https://apps.fz-juelich.de/jsc/hps/juwels/configuration.html) and would like to report two issues:

1. Intel (19.1.3.304) Compiler with IntelMPI/2019.8.254:
- The configure step fails with the new APMPI options:
cd darshan-runtime; ./configure --prefix=/path/to/darshan-runtime/3.3.0-iimpi-2020-APMPI --with-mem-align=8 --with-log-path-by-env=DARSHAN_LOG_PATH --with-jobid-env=SLURM_JOBID CC=mpicc --enable-hdf5-mod=$EBROOTHDF5 --enable-apmpi-mod --enable-apmpi-coll-sync
- Error msg: configure: error: APMPI module requires MPI version 3+
- without the new APMPI options, the configure step completes successfully:
cd darshan-runtime; ./configure --prefix=/p/software/juwels/stages/Devel-2020/software/darshan-runtime/3.3.0-iimpi-2020 --with-mem-align=8 --with-log-path-by-env=DARSHAN_LOG_PATH  --with-jobid-env=SLURM_JOBID CC=mpicc --enable-hdf5-mod=$EBROOTHDF5


2. GCC/9.3.0 Compiler with ParaStationMPI/5.4.7-1 (based on MPICH 3.3.2) (https://github.com/ParaStation/psmpi/):
- darshan-runtime configured with --enable-apmpi-mod --enable-apmpi-coll-sync
- For a simple hello-world code (MPI + OpenMP), the application seems to hang in the MPI_Finalize call.
- If I open the *.darshan_partial file with `darshan-parser`, the following output is printed:
Error: incompatible darshan file.
Error: expected version 3.21, but got
Error: failed to read darshan log file header.
- There are no issues without APMPI.

3. Further questions:
- Is it possible to switch APMPI on/off at runtime?
- Are there any examples available that demonstrate the additional value of the new AutoPerf feature?
- Can you confirm that APMPI works on non-Cray systems?

Best regards,
Thomas

--
Thomas Breuer

Division Application Support             Forschungszentrum Jülich GmbH
Jülich Supercomputing Centre (JSC)       Wilhelm-Johnen-Straße
http://www.fz-juelich.de/ias/jsc         52425 Jülich (Germany)
Phone: +49 2461 61-96742 (currently not available via phone)
Email: t.breuer at fz-juelich.de

-------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------
Forschungszentrum Juelich GmbH
52425 Juelich
Sitz der Gesellschaft: Juelich
Eingetragen im Handelsregister des Amtsgerichts Dueren Nr. HR B 3498
Vorsitzender des Aufsichtsrats: MinDir Volker Rieke
Geschaeftsfuehrung: Prof. Dr.-Ing. Wolfgang Marquardt (Vorsitzender),
Karsten Beneke (stellv. Vorsitzender), Prof. Dr.-Ing. Harald Bolt, Prof. Dr. Frauke Melchior
-------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------



