[Darshan-users] compiling statically with intel toolchain

Phil Carns carns at mcs.anl.gov
Tue May 24 10:28:02 CDT 2016


Gnuplot 4.2 should be OK.  I just confirmed that 4.2 patchlevel 6 works
with darshan-job-summary.pl on one of our systems.

You could try the following to check what happens to gnuplot when it 
tries to generate the first plot:

- add --verbose to the darshan-job-summary.pl command line
- look for "verbose: /tmp/1FMc4r0alx" (or something similar) in the 
first line of output
- cd to that directory
- run the gnuplot command manually:

     gnuplot -e "data_file='mpiio-access-hist.dat'; graph_title='MPI-IO 
Access Sizes {/Times-Bold=32 \263}'; 
output_file='mpiio-access-hist.eps'" access-hist-eps.gplt

In theory it should run cleanly and create an mpiio-access-hist.eps
output file.
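
If that command fails with the same "Cannot open load file '-e'" error,
then the -e option itself is likely the culprit.  A quick way to sanity
check that in isolation (independent of our scripts) is something like:

     gnuplot -e "print 1"

which should simply print 1 if your gnuplot build accepts -e on the
command line.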

Thanks for your patience getting this stuff working, by the way. I'm not 
sure what can be done about the Intel MPI Fortran wrapper problem on our 
end, but if there is a gnuplot issue we should be able to work around it 
or at least gracefully detect the problem.

thanks,
-Phil

On 05/24/2016 10:37 AM, Salmon, Rene wrote:
> Hi,
>
> hpcbuild02(salmr0)1060:gnuplot -V
> gnuplot 4.2 patchlevel 3
>
>
>
> Thanks
> Rene
>
>
>
> On 5/24/16, 9:35 AM, "Phil Carns" <carns at mcs.anl.gov> wrote:
>
>> Hi Rene,
>>
>> What version of gnuplot do you have ("gnuplot -V")?
>>
>> Darshan is supposed to have a safety check for the presence of gnuplot
>> and its version, but the "-e" option is part of the gnuplot command
>> line.  Maybe something has gone wrong there.
>>
>> thanks,
>> -Phil
>>
>> On 05/24/2016 10:21 AM, Salmon, Rene wrote:
>>> All,
>>>
>>> Thank you very much for all the help and info.  I now have a working binary that does generate a darshan log file.  Your hints were very helpful.  What I ended up doing was writing mpi_init and mpi_finalize wrappers in C that my Fortran code calls; that seems to work, and now I can generate a darshan log file.
>>>
>>> I basically just call my_finalize() from my Fortran code, and it is just this:
>>>
>>> #include "mpi.h"
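>>> /* Fortran-callable wrapper; the trailing underscore matches the compiler's name mangling */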
>>> int my_finalize_(void)
>>> {
>>>     int ier;
>>>     ier = MPI_Finalize();
>>>     return(ier);
>>> }
>>>        
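>>>
>>> The init wrapper is the same idea, roughly like this (the my_init_ name and the NULL arguments are just one way to do it, and the trailing underscore again has to match the compiler's name mangling):
>>>
>>> #include "mpi.h"
>>> int my_init_(void)
>>> {
>>>     int ier;
>>>     /* MPI allows passing NULL for the argc/argv arguments */
>>>     ier = MPI_Init(NULL, NULL);
>>>     return(ier);
>>> }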
>>>
>>> Now I just want to analyze the results, so I run darshan-job-summary.pl, but I am getting these errors.  Any ideas what they mean?
>>>
>>> hpcbuild02(salmr0)1054:darshan-job-summary.pl ./salmr0_ddswriter_id40315_5-24-32872-5756027388706526063_1.darshan
>>> Slowest unique file time: 0.004726
>>> Slowest shared file time: 0.011293
>>> Total bytes read and written by app (may be incorrect): 51934
>>> Total absolute I/O time: 0.016019
>>> **NOTE: above shared and unique file times calculated using MPI-IO timers if MPI-IO interface used on a given file, POSIX timers otherwise.
>>> Cannot open load file '-e'
>>> line 0: util.c: No such file or directory
>>>
>>> EPSTOPDF 2.9.5gw, 2006/01/29 - Copyright 1998-2006 by Sebastian Rahtz et al.
>>> !!! Error: 'posix-access-hist.eps' does not exist!
>>> Cannot open load file '-e'
>>> line 0: util.c: No such file or directory
>>>
>>> EPSTOPDF 2.9.5gw, 2006/01/29 - Copyright 1998-2006 by Sebastian Rahtz et al.
>>> !!! Error: 'file-access-read.eps' does not exist!
>>> EPSTOPDF 2.9.5gw, 2006/01/29 - Copyright 1998-2006 by Sebastian Rahtz et al.
>>> !!! Error: 'file-access-write.eps' does not exist!
>>> EPSTOPDF 2.9.5gw, 2006/01/29 - Copyright 1998-2006 by Sebastian Rahtz et al.
>>> !!! Error: 'file-access-shared.eps' does not exist!
>>> LaTeX generation (phase1) failed [17920], aborting summary creation.
>>> error log:
>>> Overfull \hbox (18.76118pt too wide) in paragraph at lines 52--52
>>>    [][]
>>> <time-summary.pdf, id=1, 289.08pt x 252.945pt> <use time-summary.pdf>
>>> <op-counts.pdf, id=2, 289.08pt x 252.945pt> <use op-counts.pdf>
>>>
>>> LaTeX Warning: File `posix-access-hist.pdf' not found on input line 68.
>>>
>>>
>>> !pdfTeX error: pdflatex (file posix-access-hist.pdf): cannot find image file
>>>    ==> Fatal error occurred, no output PDF file produced!
>>>
>>>
>>> Thanks
>>> Rene
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On 5/23/16, 5:21 PM, "Harms, Kevin" <harms at alcf.anl.gov> wrote:
>>>
>>>> Rene,
>>>>
>>>>    Your code is making the mpi_init and mpi_finalize calls from Fortran, correct? You are still hitting the same issue as you noted before, which is that Darshan doesn’t intercept those calls with the Intel MPI implementation. You could specifically add the darshan calls to your Fortran code manually. It isn’t pretty, but it should work. This will initialize the library, and then it should capture all the calls from your I/O code since that is all in C.
>>>>
>>>>    call mpi_init()
>>>>    call darshan_core_initialize(%VAL(0), %VAL(0));
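>>>>    ! ... application I/O goes here ...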
>>>>    call darshan_core_shutdown();
>>>>    call mpi_finalize()
>>>>
>>>>    depending on the function names, you may need to add an underscore to the end of the names.
>>>>
>>>> kevin
>>>>
>>>>> Hi Phil,
>>>>>
>>>>> Thanks for the info, that was helpful.  I did manage to link in darshan and MPI statically; I had to put in the “lib.a” files by hand, as the “darshan-config --pre/post” scripts were still trying to link in darshan dynamically.
>>>>> Here is the link line I used that seemed to generate a binary.
>>>>>
>>>>>
>>>>> mpiifort  -DLinux -Dx86_64 -DINTEL -DCompLevel16 -DX86_64 -D__x86_64__ -static-intel -static_mpi -DOpenMP -openmp -DSWAP -O3 -g -o /hpcdata/salmr0/mspain/usp2hdf5/dds/bin/Linux/3.0/x86_64/ddswriter.revUnversioned /hpc/tstapps/intel/x86_64/ics-2016.update.1/compilers_and_libraries_2016.1.150/linux/mpi/intel64/lib/libmpifort.a -L/hpc/tstapps/SLES/3.0/x86_64/darshan/3.0.0/lib /hpc/tstapps/SLES/3.0/x86_64/darshan/3.0.0/lib/libdarshan.a -lz -Wl,@/hpc/tstapps/SLES/3.0/x86_64/darshan/3.0.0/share/ld-opts/darshan-base-ld-opts svn_version.o PartitionLayout_mod.o ddswriter.o -L/tstapps/asi/src/dds/lib/Linux/3.0/x86_64/Intel-16.0.1.150MPI5.1.2MKL11.3.1 -ldds_r3 -lgio -lfhost -L/hpc/tstapps/SLES/3.0/x86_64/darshan/3.0.0/lib -Wl,--start-group /hpc/tstapps/SLES/3.0/x86_64/darshan/3.0.0/lib/libdarshan.a /hpc/tstapps/SLES/3.0/x86_64/darshan/3.0.0/lib/libdarshan-stubs.a -Wl,--end-group -lz
>>>>>
>>>>>
>>>>>
>>>>> Here is what the binary looks like:
>>>>>
>>>>> hpcbuild02(salmr0)1181:ldd bin/Linux/3.0/x86_64/ddswriter
>>>>> 	linux-vdso.so.1 =>  (0x00007ffe791a7000)
>>>>> 	libz.so.1 => /lib64/libz.so.1 (0x00002aafaab7c000)
>>>>> 	libdl.so.2 => /lib64/libdl.so.2 (0x00002aafaad92000)
>>>>> 	librt.so.1 => /lib64/librt.so.1 (0x00002aafaaf96000)
>>>>> 	libpthread.so.0 => /lib64/libpthread.so.0 (0x00002aafab1a0000)
>>>>> 	libm.so.6 => /lib64/libm.so.6 (0x00002aafab3bd000)
>>>>> 	libc.so.6 => /lib64/libc.so.6 (0x00002aafab636000)
>>>>> 	/lib64/ld-linux-x86-64.so.2 (0x00002aafaa95a000)
>>>>> 	libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002aafab9b3000)
>>>>> hpcbuild02(salmr0)1182:nm bin/Linux/3.0/x86_64/ddswriter|grep darshan
>>>>> 0000000000482e90 T darshan_clean_file_path
>>>>> 0000000000483210 t darshan_common_val_compare
>>>>> 00000000004830c0 T darshan_common_val_counter
>>>>> 00000000004831a0 t darshan_common_val_walker
>>>>> 000000000101e860 b darshan_core
>>>>> 0000000000463f50 t darshan_core_cleanup
>>>>> 00000000004655e0 T darshan_core_initialize
>>>>> 0000000000fcfa80 d darshan_core_mutex
>>>>> 0000000000464720 T darshan_core_register_module
>>>>> 0000000000464b80 T darshan_core_register_record
>>>>> 0000000000461c10 T darshan_core_shutdown
>>>>> 0000000000464840 T darshan_core_unregister_module
>>>>> 0000000000465310 T darshan_core_unregister_record
>>>>> 00000000004655b0 T darshan_core_wtime
>>>>> 0000000000463e40 t darshan_deflate_buffer..0
>>>>> 00000000004642b0 t darshan_get_shared_records
>>>>> 00000000004806a0 T darshan_hash
>>>>> 0000000000fcfab0 d darshan_mem_alignment
>>>>> 0000000000fcfaec d darshan_mem_alignment
>>>>> 0000000000b64848 r darshan_module_names
>>>>> 0000000000b6487c r darshan_module_versions
>>>>> 0000000000479ae0 T darshan_mpiio_shutdown_bench_setup
>>>>> 0000000000fcfa20 D darshan_path_exclusions
>>>>> 0000000000470250 T darshan_posix_shutdown_bench_setup
>>>>> 0000000000461750 T darshan_shutdown_bench
>>>>> 0000000000483230 T darshan_variance_reduce
>>>>> 0000000000483170 T darshan_walk_common_vals
>>>>>
>>>>>
>>>>>
>>>>> The problem I am having now is that I don’t get a darshan log file when I run my executable. I configured darshan with this configure flag “--with-log-path-by-env=DARSHANLOGPATH”.  I set that environment variable in my shell to some directory and then ran my executable.  The executable runs fine, but no darshan log file gets generated.
>>>>>
>>>>> Any ideas?   Thanks in advance.
>>>>>
>>>>> Rene
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 5/20/16, 7:33 AM, "darshan-users-bounces at lists.mcs.anl.gov on behalf of Phil Carns" <darshan-users-bounces at lists.mcs.anl.gov on behalf of carns at mcs.anl.gov> wrote:
>>>>>
>>>>>> Hi Rene,
>>>>>>
>>>>>> Does mpiifort support the "-show" option (or something similar) to
>>>>>> display the underlying link command that it is using?
>>>>>>
>>>>>> I can describe the manual steps needed to add Darshan support just to
>>>>>> see if you can get it working, and then we can maybe circle back to why
>>>>>> the Darshan-generated script is triggering dynamic linking.  In general,
>>>>>> there are two sets of extra link flags that Darshan needs to add to the
>>>>>> link command.  Some should appear before the user-specified libraries in
>>>>>> the true link command, and some should appear after them.  On my test box
>>>>>> they happen to look like this:
>>>>>>
>>>>>> pcarns at carns-x1:~/working/install/bin$ darshan-config --pre-ld-flags
>>>>>> -L/home/pcarns/working/install/lib -ldarshan -lz
>>>>>> -Wl,@/home/pcarns/working/install/share/ld-opts/darshan-base-ld-opts
>>>>>>
>>>>>> pcarns at carns-x1:~/working/install/bin$ darshan-config --post-ld-flags
>>>>>> -L/home/pcarns/working/install/lib -Wl,--start-group -ldarshan
>>>>>> -ldarshan-stubs -Wl,--end-group -lz -lrt -lpthread
>>>>>>
>>>>>> On most MPI implementations there is also one more library that you have
>>>>>> to add for MPI function wrapping to work.  Darshan-config doesn't add
>>>>>> that because it is dependent on the MPI implementation. On most versions
>>>>>> of MPI this is either -lmpifort or -lfmpich.  I'm not sure which one
>>>>>> Intel MPI uses now.
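>>>>>>
>>>>>> If you want to double check which library your Intel MPI provides, one
>>>>>> option is to look for the Fortran MPI_Init symbol in the candidate
>>>>>> libraries (the path below is just a placeholder), e.g.:
>>>>>>
>>>>>>      nm /path/to/libmpifort.a | grep -i ' mpi_init'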
>>>>>>
>>>>>> You might be able to do all of that external to mpiifort, but I'm not
>>>>>> sure.  If so, then your link command would look something like this
>>>>>> (possibly swapping out -lmpifort for -lfmpich or omitting it entirely):
>>>>>>
>>>>>> mpiifort -DLinux -Dx86_64 -DINTEL -DCompLevel16 -DX86_64 -D__x86_64__ -static-intel -static_mpi -DOpenMP -openmp -DSWAP -O3 -g -o ./ddswriter.revUnversioned -lmpifort `darshan-config --pre-ld-flags` PartitionLayout_mod.o ddswriter.o -L/tstapps/asi/src/dds/lib/Linux/3.0/x86_64/Intel-16.0.1.150MPI5.1.2MKL11.3.1 -ldds_r3 -lgio -lfhost `darshan-config --post-ld-flags`
>>>>>>
>>>>>>
>>>>>> If that doesn't work then you'll need to find out what the underlying
>>>>>> link command looks like (via -show or similar) and insert it there.
>>>>>>
>>>>>> thanks,
>>>>>> -Phil
>>>>>>
>>>>> _______________________________________________
>>>>> Darshan-users mailing list
>>>>> Darshan-users at lists.mcs.anl.gov
>>>>> https://lists.mcs.anl.gov/mailman/listinfo/darshan-users


