<br><font size=2 face="sans-serif">Ah I see. </font>
<br>
<br><font size=2 face="sans-serif">The mpixl* scripts in comm/bin are generated
separately from the mpi* scripts there. Basically, the comm/bin libraries
are built with the gnu compilers with asserts and debuging left on. We
basically copy-and-modify the mpi* scripts that the mpich build process
generates to make the mpixl* scripts. They build your application with
the XL compilers, but still use the gnu-compiler-built MPI/DCMF libraries.</font>
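For illustration only, a rough sketch of that copy-and-modify step is below. The install prefix, the CC= line inside the wrapper, and the XL compiler path are assumptions made for the example, not the actual generation script we use:

    #!/bin/sh
    # Hypothetical sketch: derive an mpixlc wrapper from the mpich-generated mpicc.
    # Assumes the standard /bgsys/drivers/ppcfloor prefix, that the wrapper stores
    # its compiler in a CC=... line, and that the XL C compiler is bgxlc_r.
    COMM_BIN=/bgsys/drivers/ppcfloor/comm/bin
    XLC=/opt/ibmcmp/vac/bg/9.0/bin/bgxlc_r

    cp "$COMM_BIN/mpicc" "$COMM_BIN/mpixlc"
    # Swap the underlying compiler; the MPI/DCMF libraries the wrapper links
    # against stay the GNU-built ones.
    sed -i "s|^CC=.*|CC=$XLC|" "$COMM_BIN/mpixlc"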
<br><font size=2 face="sans-serif">The libraries in comm/fast are compiled
using the XL compilers with minimal asserts and a lot of debug turned off.
They tend to make a reasonable improvement for point-to-point operations
and not much of an improvement for collectives. We generally recommend
you make sure your app runs fine using the comm/bin scripts first, then,
yes, for benchmark results you can recompile against the comm/fast libraries.</font>
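As a usage sketch (assuming the usual /bgsys/drivers/ppcfloor prefix on your BG/P driver; adjust the paths to your install):

    # debug-friendly build: gnu-built MPI/DCMF with asserts, good for validating the app
    /bgsys/drivers/ppcfloor/comm/bin/mpixlc -O3 -o app.debug app.c

    # benchmark build: same source, relinked against the optimized comm/fast libraries
    /bgsys/drivers/ppcfloor/comm/fast/bin/mpixlc -O3 -o app.fast app.c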

Brian Smith (smithbr@us.ibm.com)
BlueGene MPI Development/
Communications Team Lead
IBM Rochester
Phone: 507 253 4717

chan@mcs.anl.gov
11/23/2009 11:42 PM
Please respond to Anthony Chan <chan@mcs.anl.gov>

To: Brian Smith/Rochester/IBM@IBMUS
cc: mpich2-dev@mcs.anl.gov, mpich2-dev-bounces@mcs.anl.gov
Subject: Re: [mpich2-dev] wrapping library calls

Hi Brian,

I was referring to the mpixlc provided by darshan:

login1.surveyor:~ > which mpixlc
/soft/apps/darshan/bin/mpixlc

It used to be that the only mpixlc* available on our BG/P were the ones
from comm/bin, but our system folks recently pointed the default mpixlc*
to the ones provided by darshan.

The mpixlc* in comm/bin are fine. AFAICT, they are just MPICH2's mpicc
wrappers.

I didn't know there were comm/fast/bin/mpixlc*. It seems the comm/fast/bin
version links with the dcmf-fast/dcmfcoll-fast libraries instead of the
usual dcmf/dcmfcoll libraries. Are the fast versions of the dcmf libraries
the optimized ones that should be used for benchmarking?

A.Chan

----- "Brian Smith" <smithbr@us.ibm.com> wrote:<br>
<br>
> The mpixl* scripts in the comm/bin directory are generated by a<br>
> somewhat <br>
> hackish script, so I'm not entirely surprised they break the profile
<br>
> mechanism.<br>
> <br>
> I bet the ones in comm/fast/bin work though.<br>
> <br>
> I'll try to take a look at it.<br>
> <br>
> <br>
> Brian Smith (smithbr@us.ibm.com)<br>
> BlueGene MPI Development/<br>
> Communications Team Lead<br>
> IBM Rochester<br>
> Phone: 507 253 4717<br>
>
>
> chan@mcs.anl.gov
> Sent by: mpich2-dev-bounces@mcs.anl.gov
> 11/19/2009 04:32 PM
> Please respond to Anthony Chan <chan@mcs.anl.gov>;
> Please respond to mpich2-dev@mcs.anl.gov
>
> To: mpich2-dev@mcs.anl.gov
> cc:
> Subject: Re: [mpich2-dev] wrapping library calls
>
>
> MPICH2's compiler wrappers support this profile mechanism
> (MPE uses a special version of this mechanism to insert
> its profiling libraries through the switch -mpe=). I would think
> that if one defines a darshan.conf that sets
> PROFILE_{PRE|POST}LIB and does
> mpicc -profile=darshan .... (or something like that),
> things should work. If it does not work, I would like to
> know why, and to enhance the mpi* wrappers in MPICH2 to
> accommodate that.
>
> The modified mpixlc available on BG/P does not seem to
> take advantage of the profile mechanism.
>
> A.Chan
>
> ----- "Rob Latham" <robl@mcs.anl.gov> wrote:<br>
> <br>
> > Hey folks<br>
> > <br>
> > Phil's put together this slick tool called 'darshan'[1]. Darshan<br>
> > uses the MPI profiling interface to wrap MPI calls, and linker<br>
> tricks<br>
> > (-wrap) to wrap posix I/O calls.<br>
> > <br>
> > We have to modify the MPI wrapper scripts so that darshan's<br>
> > implementation of the wrapped symbols gets linked in. Each
driver<br>
> > update requires a new set of wrapper shell scripts.<br>
> > <br>
> > Is there some way through the command line that we can pass the<br>
> flags<br>
> > '-Wl,-wrap,read,-wrap,write,...' and '-L/path/to/darshan<br>
> > -ldarshan-posix' ? <br>
> > <br>
> > What would be great is if darshan could ship an mpicc wrapper
that<br>
> > was<br>
> > just one line "/path/to/real/mpicc -Wl,-wrap,read,-wrap,write...
$@<br>
> > -L/path/to/darshan -ldarshan-io". <br>
> > <br>
> > Unfortunately when I try this, the -Wl and -L flags I pass in
go in<br>
> > very early in the generated command line ("${allargs[@]}"
). <br>
> > <br>
> > Phil wrote a script to take the output of 'mpicc -show' and munge<br>
> the<br>
> > command line accordingly, so this is kind of a solved problem
for<br>
> > Darshan. I was just wondering if there was an approach
I had not<br>
> > considered in case we want to wrap function calls for some other<br>
> > project in the future.<br>
> > <br>
> > [1]http://press.mcs.anl.gov/darshan/<br>
> > <br>
> > Thanks<br>
> > ==rob<br>
> > <br>
> > -- <br>
> > Rob Latham<br>
> > Mathematics and Computer Science Division<br>
> > Argonne National Lab, IL USA<br>
</font></tt>
<br>