mpi_test analog

Rob Latham robl at mcs.anl.gov
Mon Feb 20 15:27:14 CST 2012


On Mon, Feb 20, 2012 at 02:15:03PM -0700, Jim Edwards wrote:
> The memory overhead is on the application end: I need to make a copy of
> the I/O buffer, because it is potentially reused for each call, and if I
> do non-blocking calls it needs to persist.

That memory overhead may be a small price to pay if it results in
better record-variable I/O.

The interleaved records really break some common collective I/O
assumptions.  If, however, you write out all your record variables in
one shot, the I/O request is no longer "aggressively noncontiguous"
and performance should be much better.
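As a rough sketch of that "one shot" pattern with PnetCDF's
nonblocking calls (all names here are hypothetical, and each record
variable is assumed to be a double array with shape (time, len)):

    #include <mpi.h>
    #include <pnetcdf.h>

    /* Write record "rec" of every record variable in one shot: post one
     * nonblocking write per variable, then flush them all together. */
    static int write_record_vars(int ncid, int nvars, const int *varids,
                                 double *const *bufs,
                                 MPI_Offset rec, MPI_Offset len)
    {
        int i, reqs[nvars], stats[nvars];   /* C99 VLAs for brevity */
        MPI_Offset start[2] = {rec, 0};
        MPI_Offset count[2] = {1, len};

        /* Posting only queues the requests; no I/O happens yet, and the
         * buffers must stay valid and unmodified until the wait. */
        for (i = 0; i < nvars; i++)
            ncmpi_iput_vara_double(ncid, varids[i], start, count,
                                   bufs[i], &reqs[i]);

        /* One collective I/O covering all record variables at once. */
        return ncmpi_wait_all(ncid, nvars, reqs, stats);
    }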

==rob

> On Mon, Feb 20, 2012 at 2:01 PM, Wei-keng Liao
> <wkliao at ece.northwestern.edu> wrote:
> 
> > We have a few yet-to-be-published results (using FLASH and a climate
> > application) that show significant improvement from using these
> > nonblocking (or, rather, "aggregating") APIs.
> > Below is a scenario where these APIs can be useful. Say an
> > application has many small variables defined, and they will all
> > be written to a file at about the same time, as in checkpointing.
> > The blocking APIs allow writing/reading only one variable at a time,
> > so the performance can be poor. The nonblocking APIs aggregate
> > multiple requests into a larger one and hence can result in better
> > performance.
> >
> > There is not much overhead for the additional memory management,
> > as the APIs introduce no extra memory copies and allocate no
> > additional buffers. The essence of these APIs is to create a
> > new, combined MPI file view for the single collective I/O at
> > the end.
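A minimal sketch of the checkpoint scenario described above (names are
hypothetical; each variable is assumed to be a small double array):

    #include <pnetcdf.h>

    /* Queue a whole-variable write for each small variable, then flush.
     * PnetCDF records each request without copying the buffer; the final
     * wait builds one combined MPI file view and performs a single
     * collective write. */
    static int checkpoint_small_vars(int ncid, int nvars,
                                     const int *varids,
                                     double *const *bufs)
    {
        int i, reqs[nvars], stats[nvars];

        for (i = 0; i < nvars; i++)
            ncmpi_iput_var_double(ncid, varids[i], bufs[i], &reqs[i]);

        return ncmpi_wait_all(ncid, nvars, reqs, stats);
    }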
> >
> >
> > Wei-keng
> >
> >
> > On Feb 20, 2012, at 2:23 PM, Jim Edwards wrote:
> >
> > > Hi Wei-keng,
> > >
> > > That's the answer I thought I would get, and no, I guess there is no
> > > point in having one.  Is there a demonstrable performance benefit to
> > > this non-blocking interface that would make it worth taking on the
> > > additional memory management it will require?
> > >
> > > Jim
> > >
> > > On Mon, Feb 20, 2012 at 1:18 PM, Wei-keng Liao
> > > <wkliao at ece.northwestern.edu> wrote:
> > > Hi, Jim,
> > >
> > > The "non-blocking" APIs in pnetcdf are not truly asynchronous.
> > > They actually defer the I/O requests until ncmpi_wait_all.
> > > So, if a corresponding test call existed and were called
> > > between the nonblocking post and the wait, it would simply
> > > return false, indicating the request is not yet complete.
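In code form (a sketch only: ncmpi_test is hypothetical and does not
exist in pnetcdf, and ncid, varid, start, count, and buf are assumed
to be defined):

    int req, st;
    ncmpi_iput_vara_double(ncid, varid, start, count, buf, &req);

    /* A hypothetical ncmpi_test(ncid, req, &flag) called here would
     * always report "not complete": the request is merely queued and
     * no I/O has started. */

    ncmpi_wait_all(ncid, 1, &req, &st);  /* all the I/O happens here */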
> > >
> > > Given that, would you still like to see a test API available
> > > in pnetcdf? (It would not be too hard to add one.)
> > >
> > >
> > > Wei-keng
> > >
> > >
> > > On Feb 20, 2012, at 1:47 PM, Jim Edwards wrote:
> > >
> > > > I am working on an async interface using pnetcdf and am wondering
> > > > why there is no analog of mpi_test in the API.
> > > >
> > > > --
> > > > Jim Edwards
> > > >
> > > > CESM Software Engineering Group
> > > > National Center for Atmospheric Research
> > > > Boulder, CO
> > > > 303-497-1842
> > > >
> > >
> > >
> > >
> > >
> > > --
> > > Jim Edwards
> > >
> > >
> > >
> >
> >
> 
> 

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA

