mpi_test analog

Jim Edwards edwards.jim at gmail.com
Tue Feb 21 17:56:35 CST 2012


Wei-keng,

I'm not sure I understand this or what it would give me that I don't
already have available.

On Mon, Feb 20, 2012 at 4:42 PM, Wei-keng Liao
<wkliao at ece.northwestern.edu> wrote:

>
> (I should have sent my previous reply to the pnetcdf list.)
>
> We are still working on this new API.
> The usage will be something like below.
>
>    buf = malloc(buf_size);
>    ncmpi_buffer_attach(ncid, buf, buf_size); /* hand buffer to pnetcdf */
>
>    ncmpi_iput_vara_float(...);  /* requests are copied into buf */
>    ncmpi_iput_vara_float(...);
>    ...
>    ncmpi_wait_all(...);         /* flush all queued requests */
>
>    ncmpi_buffer_detach(ncid);   /* pnetcdf is done with buf */
>    free(buf);
>
>
> Wei-keng
>
>
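A minimal, self-contained version of the sketch above might look like the
following. It assumes the provisional attach/detach signatures from
Wei-keng's message (the API was still under development at the time, so
the released names and signatures may differ); the two varids are
hypothetical and error checking is omitted.

    #include <stdlib.h>
    #include <mpi.h>
    #include <pnetcdf.h>

    /* Sketch only: ncmpi_buffer_attach/detach follow the provisional
     * signatures sketched above; the released PnetCDF API may differ. */
    void buffered_puts(int ncid, int varid1, int varid2,
                       const float *a, const float *b,
                       MPI_Offset start[2], MPI_Offset count[2])
    {
        /* room for both pending requests */
        MPI_Offset buf_size = 2 * count[0] * count[1] * sizeof(float);
        void *buf = malloc(buf_size);
        int req[2], st[2];

        ncmpi_buffer_attach(ncid, buf, buf_size);

        /* the requests are copied into buf, so a and b may be reused
         * as soon as these calls return */
        ncmpi_iput_vara_float(ncid, varid1, start, count, a, &req[0]);
        ncmpi_iput_vara_float(ncid, varid2, start, count, b, &req[1]);

        ncmpi_wait_all(ncid, 2, req, st);  /* one collective flush */

        ncmpi_buffer_detach(ncid);         /* buf can now be freed */
        free(buf);
    }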
> On Feb 20, 2012, at 5:24 PM, Jim Edwards wrote:
>
> > Yes - if I understand you correctly, I think it would. So I would
> > provide the buffer, and pnetcdf would handle freeing the memory
> > when the buffer is no longer required?
> >
> > On Mon, Feb 20, 2012 at 3:38 PM, Wei-keng Liao
> > <wkliao at ece.northwestern.edu> wrote:
> >
> > We (the pnetcdf developers) have been talking about creating a new
> > set of APIs analogous to MPI_Bsend, i.e. send with user-provided
> > buffering. The user would provide a buffer and its size for pnetcdf
> > to copy the requests into, to be flushed later. Having pnetcdf handle
> > this memory management can ease the programming work at the
> > application end. Would this make sense for your applications?
> >
> >
> > Wei-keng
> >
> >
> > On Feb 20, 2012, at 3:27 PM, Rob Latham wrote:
> >
> > > On Mon, Feb 20, 2012 at 02:15:03PM -0700, Jim Edwards wrote:
> > >> The memory overhead is on the application end: I need to make a
> > >> copy of the iobuffer, because it is potentially reused for each
> > >> call, and if I make non-blocking calls it needs to persist.
> > >
> > > That memory overhead may be a small price to pay if it results in
> > > better record variable I/O.
> > >
> > > The interleaved records really mess up some common collective I/O
> > > assumptions.  If, however, you write out all your record variables
> > > in one shot, the I/O request is no longer "aggressively
> > > noncontiguous" and performance should be a lot better.
> > >
> > > ==rob
> > >
> > >> On Mon, Feb 20, 2012 at 2:01 PM, Wei-keng Liao
> > >> <wkliao at ece.northwestern.edu> wrote:
> > >>
> > >>> We have a few yet-to-be-published results (using FLASH and a
> > >>> climate application) that show significant improvement from
> > >>> using these nonblocking (or, better, "aggregating") APIs.
> > >>> Below is a scenario where these APIs can be useful. Say an
> > >>> application has many small variables defined, and they all will
> > >>> be written to a file at about the same time, as in checkpointing.
> > >>> The blocking APIs allow writing/reading only one variable at a
> > >>> time, so the performance can be poor. The nonblocking APIs
> > >>> aggregate multiple requests into a larger one and hence can give
> > >>> better performance.
> > >>>
> > >>> There is not much overhead for the additional memory management,
> > >>> as the APIs introduce no extra memory copies and allocate no
> > >>> additional buffers. The essence of these APIs is to create a
> > >>> new, combined MPI file view for a single collective I/O at
> > >>> the end.
> > >>>
> > >>>
> > >>> Wei-keng
> > >>>
> > >>>
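To make the checkpoint scenario concrete, here is a hedged sketch of the
aggregation pattern: one nonblocking put is posted per small variable, and
a single ncmpi_wait_all flushes them as one combined collective write. The
variable count, names, and layout are invented for illustration; error
checking is omitted.

    #include <mpi.h>
    #include <pnetcdf.h>

    #define NVARS 64   /* hypothetical number of small variables */

    /* Post one nonblocking put per variable, then flush them all in a
     * single collective I/O.  varids[] and bufs[] are assumed to be set
     * up by the caller; with iput, each bufs[i] must stay valid until
     * ncmpi_wait_all returns. */
    void checkpoint_all(int ncid, const int varids[NVARS],
                        float *const bufs[NVARS], MPI_Offset nelems)
    {
        MPI_Offset start[1] = {0}, count[1] = {nelems};
        int i, reqs[NVARS], stats[NVARS];

        for (i = 0; i < NVARS; i++)
            /* no I/O yet: the request is only registered */
            ncmpi_iput_vara_float(ncid, varids[i], start, count,
                                  bufs[i], &reqs[i]);

        /* the requests are combined into one MPI file view and
         * written with a single collective call */
        ncmpi_wait_all(ncid, NVARS, reqs, stats);
    }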
> > >>> On Feb 20, 2012, at 2:23 PM, Jim Edwards wrote:
> > >>>
> > >>>> Hi Wei-keng,
> > >>>>
> > >>>> That's the answer I thought I would get, and no, I guess there
> > >>>> is no point in having one. Is there a demonstrable performance
> > >>>> benefit to this non-blocking interface that would make it worth
> > >>>> taking on the additional memory management it will require?
> > >>>>
> > >>>> Jim
> > >>>>
> > >>>> On Mon, Feb 20, 2012 at 1:18 PM, Wei-keng Liao
> > >>>> <wkliao at ece.northwestern.edu> wrote:
> > >>>> Hi, Jim,
> > >>>>
> > >>>> The "non-blocking" APIs in pnetcdf are not truly asynchronous.
> > >>>> They actually defer the I/O requests till ncmpi_wait_all.
> > >>>> So, if there is a corresponding test call and it is called
> > >>>> in between the post of nonblocking and wait, it will simply
> > >>>> return false, indicating not yet complete.
> > >>>>
> > >>>> Given that, would you still like to see a test API available
> > >>>> in pnetcdf? (That will not be too hard to add one.)
> > >>>>
> > >>>>
> > >>>> Wei-keng
> > >>>>
> > >>>>
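The deferred semantics can be seen in a short sketch: between the iput and
the wait, nothing has been issued to the file system, so a test-style call
(which did not exist in pnetcdf at the time; the name ncmpi_test below is
hypothetical) would necessarily report the request as incomplete.

    #include <mpi.h>
    #include <pnetcdf.h>

    /* Illustrates the deferred-execution model described above.
     * Error checking omitted. */
    void deferred_put(int ncid, int varid, const double *data,
                      MPI_Offset start[1], MPI_Offset count[1])
    {
        int req, status;

        /* only registers the request; no data is written yet */
        ncmpi_iput_vara_double(ncid, varid, start, count, data, &req);

        /* a hypothetical ncmpi_test(ncid, req, ...) called here would
         * always report "not complete", because the request is merely
         * queued until the wait below */

        ncmpi_wait_all(ncid, 1, &req, &status); /* I/O happens here */
    }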
> > >>>> On Feb 20, 2012, at 1:47 PM, Jim Edwards wrote:
> > >>>>
> > >>>>> I am working on an async interface using pnetcdf and wondering
> > >>>>> why there is no analog to mpi_test in the API?
> > >>>>>
> > >>>>> --
> > >>>>> Jim Edwards
> > >>>>>
> > >>>>> CESM Software Engineering Group
> > >>>>> National Center for Atmospheric Research
> > >>>>> Boulder, CO
> > >>>>> 303-497-1842
> > >>>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>>
> > >>>> --
> > >>>> Jim Edwards
> > >>>>
> > >>>>
> > >>>>
> > >>>
> > >>>
> > >>
> > >>
> > >
> > > --
> > > Rob Latham
> > > Mathematics and Computer Science Division
> > > Argonne National Lab, IL USA
> >
> >
> >
> >
> > --
> > Jim Edwards
> >
> >
> >
>
>


-- 

Jim Edwards