Wei-keng,<br><br>I'm not sure I understand this or what it would give me that I don't already have available.<br><br><div class="gmail_quote">On Mon, Feb 20, 2012 at 4:42 PM, Wei-keng Liao <span dir="ltr">&lt;<a href="mailto:wkliao@ece.northwestern.edu">wkliao@ece.northwestern.edu</a>&gt;</span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
(I should have sent my previous reply to the pnetcdf list.)<br>
<br>
We are still working on this new API.<br>
The usage will be something like the following:<br>
<br>
buf = malloc(buf_size);<br>
ncmpi_buffer_attach(ncid, buf, buf_size);<br>
<br>
ncmpi_iput_vara_float(...);<br>
ncmpi_iput_vara_float(...);<br>
...<br>
ncmpi_wait_all(...);<br>
<br>
ncmpi_buffer_detach(ncid);<br>
free(buf);<br>
<br>
<br>
Wei-keng<br>
<div class="HOEnZb"><div class="h5"><br>
<br>
On Feb 20, 2012, at 5:24 PM, Jim Edwards wrote:<br>
<br>
> Yes - If I understand you correctly I think that it would. So I would provide the buffer and pnetcdf would handle the task of freeing the memory when the buffer is no longer required?<br>
><br>
> On Mon, Feb 20, 2012 at 3:38 PM, Wei-keng Liao <<a href="mailto:wkliao@ece.northwestern.edu">wkliao@ece.northwestern.edu</a>> wrote:<br>
><br>
> We (the pnetcdf developers) have been talking about creating a new<br>
> set of APIs analogous to MPI_Bsend, i.e., send with user-provided buffering.<br>
> This requires the user to provide a buffer and its size; pnetcdf<br>
> copies the requests into it and flushes them later. Having pnetcdf<br>
> handle this memory management can ease the programming work at the<br>
> application end. Would this make sense for your applications?<br>
><br>
><br>
> Wei-keng<br>
><br>
><br>
> On Feb 20, 2012, at 3:27 PM, Rob Latham wrote:<br>
><br>
> > On Mon, Feb 20, 2012 at 02:15:03PM -0700, Jim Edwards wrote:<br>
> >> The memory overhead is on the application end, I need to make a copy of the<br>
> >> iobuffer, because it is potentially reused for each call and if I do<br>
> >> non-blocking calls it needs to persist.<br>
> ><br>
> > that memory overhead may be a small price to pay if it results in<br>
> > better record variable I/O.<br>
> ><br>
> > the interleaved records really mess up some common collective i/o<br>
> > assumptions. If however you write out all your record variables in<br>
> > one shot, the i/o request is no longer "aggressively noncontiguous"<br>
> > and performance should be a lot better.<br>
> ><br>
> > ==rob<br>
> ><br>
> >> On Mon, Feb 20, 2012 at 2:01 PM, Wei-keng Liao<br>
> >> <<a href="mailto:wkliao@ece.northwestern.edu">wkliao@ece.northwestern.edu</a>>wrote:<br>
> >><br>
> >>> We have a few yet-to-be-published results (using FLASH and a climate<br>
> >>> application) that show significant improvement of using these<br>
> >>> nonblocking (or we should refer to them as "aggregating") APIs.<br>
> >>> Below is a scenario that these APIs can be useful. Say, one<br>
> >>> application has many small sized variables defined and they all<br>
> >>> will be written to a file at about the same time, like checkpointing.<br>
> >>> The blocking APIs allow writing/reading one variable at a time,<br>
> >>> so the performance can be poor. The nonblocking APIs aggregate<br>
> >>> multiple requests into a larger one and hence can result in better<br>
> >>> performance.<br>
> >>><br>
> >>> There is not much overhead for the additional memory management,<br>
> >>> as these APIs introduce no extra memory copies and allocate no<br>
> >>> additional buffers. The essence of these APIs is to create a<br>
> >>> new, combined MPI file view for the single collective I/O at<br>
> >>> the end.<br>
> >>><br>
> >>><br>
> >>> Wei-keng<br>
> >>><br>
> >>><br>
> >>> On Feb 20, 2012, at 2:23 PM, Jim Edwards wrote:<br>
> >>><br>
> >>>> Hi Wei-keng,<br>
> >>>><br>
> >>>> That's the answer that I thought I would get, and no I guess there is no<br>
> >>> point in having one. Is there a demonstrable performance benefit of this<br>
> >>> non-blocking interface that would make it worth taking on the additional<br>
> >>> memory management that it will require?<br>
> >>>><br>
> >>>> Jim<br>
> >>>><br>
> >>>> On Mon, Feb 20, 2012 at 1:18 PM, Wei-keng Liao <<br>
> >>> <a href="mailto:wkliao@ece.northwestern.edu">wkliao@ece.northwestern.edu</a>> wrote:<br>
> >>>> Hi, Jim,<br>
> >>>><br>
> >>>> The "non-blocking" APIs in pnetcdf are not truly asynchronous.<br>
> >>>> They actually defer the I/O requests until ncmpi_wait_all.<br>
> >>>> So if a corresponding test call were made between the post of<br>
> >>>> the nonblocking request and the wait, it would simply return<br>
> >>>> false, indicating not yet complete.<br>
> >>>><br>
> >>>> Given that, would you still like to see a test API available<br>
> >>>> in pnetcdf? (It would not be too hard to add one.)<br>
> >>>><br>
> >>>><br>
> >>>> Wei-keng<br>
> >>>><br>
> >>>><br>
> >>>> On Feb 20, 2012, at 1:47 PM, Jim Edwards wrote:<br>
> >>>><br>
> >>>>> I am working on an async interface using pnetcdf and wondering why<br>
> >>> there is no analog to mpi_test in the API?<br>
> >>>>><br>
> >>>>> --<br>
> >>>>> Jim Edwards<br>
> >>>>><br>
> >>>>> CESM Software Engineering Group<br>
> >>>>> National Center for Atmospheric Research<br>
> >>>>> Boulder, CO<br>
> >>>>> <a href="tel:303-497-1842" value="+13034971842">303-497-1842</a><br>
> >>>>><br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>>> --<br>
> >>>> Jim Edwards<br>
> >>>><br>
> >>>><br>
> >>>><br>
> >>><br>
> >>><br>
> >><br>
> >><br>
> ><br>
> > --<br>
> > Rob Latham<br>
> > Mathematics and Computer Science Division<br>
> > Argonne National Lab, IL USA<br>
><br>
><br>
><br>
><br>
> --<br>
> Jim Edwards<br>
><br>
><br>
><br>
<br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><pre>Jim Edwards<br><br><br></pre><br>