[mpich2-dev] wrapping library calls
Brian Smith
smithbr at us.ibm.com
Tue Nov 24 06:17:33 CST 2009
Ah I see.
The mpixl* scripts in comm/bin are generated separately from the mpi*
scripts there. The comm/bin libraries are built with the GNU compilers,
with asserts and debugging left on. We copy and modify the mpi* scripts
that the MPICH build process generates to produce the mpixl* scripts;
they build your application with the XL compilers but still use the
GNU-compiler-built MPI/DCMF libraries.
The libraries in comm/fast are compiled with the XL compilers, with
minimal asserts and most of the debug code turned off. They tend to give
a reasonable improvement for point-to-point operations but not much of
an improvement for collectives. We generally recommend making sure your
app runs fine with the comm/bin scripts first; then, yes, for benchmark
results you can recompile against the comm/fast libraries.
Brian Smith (smithbr at us.ibm.com)
BlueGene MPI Development/
Communications Team Lead
IBM Rochester
Phone: 507 253 4717
chan at mcs.anl.gov
11/23/2009 11:42 PM
Please respond to
Anthony Chan <chan at mcs.anl.gov>
To
Brian Smith/Rochester/IBM at IBMUS
cc
mpich2-dev at mcs.anl.gov, mpich2-dev-bounces at mcs.anl.gov
Subject
Re: [mpich2-dev] wrapping library calls
Hi Brian,
I was referring to the mpixlc provided by darshan:
login1.surveyor:~ > which mpixlc
/soft/apps/darshan/bin/mpixlc
It used to be that the only mpixlc* available on our BG/P
were the ones from comm/bin, but our system folks have recently
pointed the default mpixlc* to the ones provided by darshan.
The mpixlc* in comm/bin are fine. AFAICT, they are just
MPICH2's mpicc wrappers.
I didn't know there were comm/fast/bin/mpixlc* scripts. It seems
the comm/fast/bin version links with the dcmf-fast/dcmfcoll-fast
libraries instead of the usual dcmf/dcmfcoll libraries.
Are the fast versions of the dcmf libraries the optimized
ones, and should they be the ones used for benchmarking?
A.Chan
----- "Brian Smith" <smithbr at us.ibm.com> wrote:
> The mpixl* scripts in the comm/bin directory are generated by a
> somewhat
> hackish script, so I'm not entirely surprised they break the profile
> mechanism.
>
> I bet the ones in comm/fast/bin work though.
>
> I'll try to take a look at it.
>
>
> Brian Smith (smithbr at us.ibm.com)
> BlueGene MPI Development/
> Communications Team Lead
> IBM Rochester
> Phone: 507 253 4717
>
>
>
> chan at mcs.anl.gov
> Sent by: mpich2-dev-bounces at mcs.anl.gov
> 11/19/2009 04:32 PM
> Please respond to
> Anthony Chan <chan at mcs.anl.gov>; Please respond to
> mpich2-dev at mcs.anl.gov
>
>
> To
> mpich2-dev at mcs.anl.gov
> cc
>
> Subject
> Re: [mpich2-dev] wrapping library calls
>
> MPICH2's compiler wrappers support this profile mechanism
> (MPE uses a special version of it to insert its profiling
> libraries through the -mpe= switch). I would think that if one
> defines a darshan.conf that sets PROFILE_{PRE|POST}LIB and does
> mpicc -profile=darshan ... (or something like that),
> things should work. If it does not, I would like to know why,
> so we can enhance the mpi* wrappers in MPICH2 to accommodate that.
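A darshan.conf along those lines might look like the sketch below. The variable names follow the PROFILE_{PRE|POST}LIB convention mentioned above; the darshan path, library name, and wrap flags are taken from later in this thread, but the exact file name and variable spellings depend on the MPICH2 install, so treat this as a sketch rather than a verified config.

```
# Hypothetical darshan.conf for MPICH2's -profile= switch.
# /path/to/darshan is a placeholder, not a real install location.
# Prepended before the MPI library so the wrapped symbols resolve first:
PROFILE_PRELIB="-Wl,-wrap,read,-wrap,write -L/path/to/darshan -ldarshan-posix"
PROFILE_POSTLIB=""
```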
>
> The modified mpixlc available on BG/P does not seem to
> take advantage of the profile mechanism.
>
> A.Chan
>
> ----- "Rob Latham" <robl at mcs.anl.gov> wrote:
>
> > Hey folks
> >
> > Phil's put together this slick tool called 'darshan'[1]. Darshan
> > uses the MPI profiling interface to wrap MPI calls, and linker
> tricks
> > (-wrap) to wrap posix I/O calls.
> >
> > We have to modify the MPI wrapper scripts so that darshan's
> > implementation of the wrapped symbols gets linked in. Each driver
> > update requires a new set of wrapper shell scripts.
> >
> > Is there some way through the command line that we can pass the
> flags
> > '-Wl,-wrap,read,-wrap,write,...' and '-L/path/to/darshan
> > -ldarshan-posix' ?
> >
> > What would be great is if darshan could ship an mpicc wrapper that
> > was
> > just one line "/path/to/real/mpicc -Wl,-wrap,read,-wrap,write... $@
> > -L/path/to/darshan -ldarshan-io".
> >
> > Unfortunately, when I try this, the -Wl and -L flags I pass in land
> > very early in the generated command line ("${allargs[@]}").
> >
> > Phil wrote a script to take the output of 'mpicc -show' and munge
> the
> > command line accordingly, so this is kind of a solved problem for
> > Darshan. I was just wondering if there was an approach I had not
> > considered in case we want to wrap function calls for some other
> > project in the future.
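The `mpicc -show` munging described above can be sketched in a few lines of Python. This is a hedged illustration, not Phil's actual script: the sample `-show` output and the `-lmpich` library name are assumptions, and `/path/to/darshan` is the same placeholder used earlier in the message.

```python
def add_darshan(show: str) -> str:
    """Splice darshan's flags into an `mpicc -show` command line (sketch)."""
    wrap = "-Wl,-wrap,read,-wrap,write"
    libs = "-L/path/to/darshan -ldarshan-posix"
    # Place the wrap flags and darshan's library just before the MPI
    # library, mirroring the usual PMPI link order so darshan's wrapped
    # MPI symbols are resolved ahead of the real ones.
    return show.replace("-lmpich", f"{wrap} {libs} -lmpich", 1)

# Assumed shape of `mpicc -show` output; real output is much longer.
print(add_darshan("cc app.c -L/bgsys/lib -lmpich"))
```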
> >
> > [1]http://press.mcs.anl.gov/darshan/
> >
> > Thanks
> > ==rob
> >
> > --
> > Rob Latham
> > Mathematics and Computer Science Division
> > Argonne National Lab, IL USA