<div dir="ltr"><div class="gmail_default" style="font-family:comic sans ms,sans-serif;color:rgb(56,118,29)">Wei-king,<br><br><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;color:rgb(56,118,29)">There is a discrepancy between the documentation and the code with respect to the varn functions. The documentation at <a href="http://cucis.ece.northwestern.edu/projects/PNETCDF/doc/pnetcdf-c/ncmpi_005fput_005fvarn_005f_003ctype_003e.html">http://cucis.ece.northwestern.edu/projects/PNETCDF/doc/pnetcdf-c/ncmpi_005fput_005fvarn_005f_003ctype_003e.html</a> has:<br><br><pre class="">int ncmpi_put_varn_all (int ncid,
int varid,
int num,
const MPI_Offset starts[num][],
const MPI_Offset counts[num][],
const void *bufs[num],
MPI_Offset bufcounts[num],
MPI_Datatype buftypes[num]);<br><br><br></pre><pre class="">While the source trunk has:<br><br><br></pre>int ncmpi_put_varn_all(int ncid, int varid, int num, MPI_Offset* const starts[],<br> MPI_Offset* const counts[], const void *buf, MPI_Offset bufcount,<br> MPI_Datatype buftype);<br><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;color:rgb(56,118,29)">The last three arguments are not arrays. <br><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;color:rgb(56,118,29)">- Jim<br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;color:rgb(56,118,29)"><br></div><div class="gmail_default" style="font-family:comic sans ms,sans-serif;color:rgb(56,118,29)"><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Sep 24, 2014 at 6:44 PM, Wei-keng Liao <span dir="ltr"><<a href="mailto:wkliao@eecs.northwestern.edu" target="_blank">wkliao@eecs.northwestern.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
If the data is contiguous in memory, then there is no need to use varm or flexible APIs.

There is a new set of APIs named varn (available in PnetCDF version 1.4.0 and later), e.g.
ncmpi_put_varn_float_all().
It allows a single API call to write a contiguous buffer to a set of noncontiguous places in the file.
Each noncontiguous place is specified by a (start, count) pair. The start-count pairs can be at
arbitrary file offsets (i.e., in unsorted order).
Please note this API family is blocking. There is no nonblocking counterpart.

In terms of performance, this call is equivalent to making multiple iput or bput calls.
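
As a concrete illustration, here is a minimal sketch of such a call using the trunk prototype quoted above; the file name, variable name, and the two file regions are invented for the example, and error checking is omitted:

#include <mpi.h>
#include <pnetcdf.h>

int main(int argc, char **argv) {
    int ncid, dimid, varid;
    MPI_Init(&argc, &argv);

    /* create a file with one 1-D variable (names are made up) */
    ncmpi_create(MPI_COMM_WORLD, "varn_example.nc", NC_CLOBBER,
                 MPI_INFO_NULL, &ncid);
    ncmpi_def_dim(ncid, "x", 100, &dimid);
    ncmpi_def_var(ncid, "var", NC_DOUBLE, 1, &dimid, &varid);
    ncmpi_enddef(ncid);

    /* two (start, count) pairs describing noncontiguous regions in the file */
    MPI_Offset start0[1] = {10}, count0[1] = {3};
    MPI_Offset start1[1] = {50}, count1[1] = {2};
    MPI_Offset *starts[2] = {start0, start1};
    MPI_Offset *counts[2] = {count0, count1};

    /* one contiguous buffer holding 3 + 2 = 5 elements */
    double buf[5] = {1, 2, 3, 4, 5};

    /* buf, bufcount, and buftype are single values, not arrays:
       the contiguous buffer is consumed in order across the two regions */
    ncmpi_put_varn_all(ncid, varid, 2, starts, counts, buf, 5, MPI_DOUBLE);

    ncmpi_close(ncid);
    MPI_Finalize();
    return 0;
}

Here every process passes the same regions just to keep the sketch short; in a real program each process would pass its own set of start-count pairs. As noted above, the call is blocking and collective.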
<span class="HOEnZb"><font color="#888888"><br>
Wei-keng<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
On Sep 24, 2014, at 6:58 PM, Jim Edwards wrote:

> Data is contiguous in memory but data on a given task maps to various noncontiguous points in the file. I can guarantee that the data in memory on a given MPI task is in monotonically increasing order with respect to offsets into the file, but not more than that.
>
> On Wed, Sep 24, 2014 at 3:43 PM, Wei-keng Liao <wkliao@eecs.northwestern.edu> wrote:
> Hi, Jim
>
> Do you mean the local I/O buffer contains a list of non-contiguous data in memory?
> Or do you mean "distributed" as data is partitioned across multiple MPI processes?
>
> The varm APIs and the "flexible" APIs that take an MPI derived datatype argument
> are for users to describe non-contiguous data in the local I/O buffer. The imap
> and MPI datatype arguments have no effect on the data access in files. So, I need
> to know which case you are referring to first.
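
To make that distinction concrete, here is a minimal sketch (file and variable names invented, error checking omitted) in which the MPI derived datatype picks noncontiguous elements out of the local buffer, while start/count still select one contiguous region of the file:

#include <mpi.h>
#include <pnetcdf.h>

int main(int argc, char **argv) {
    int ncid, dimid, varid;
    MPI_Init(&argc, &argv);

    ncmpi_create(MPI_COMM_WORLD, "flexible_example.nc", NC_CLOBBER,
                 MPI_INFO_NULL, &ncid);
    ncmpi_def_dim(ncid, "x", 16, &dimid);
    ncmpi_def_var(ncid, "var", NC_DOUBLE, 1, &dimid, &varid);
    ncmpi_enddef(ncid);

    /* local buffer: 8 doubles, of which only every other one is written */
    double buf[8] = {0, 1, 2, 3, 4, 5, 6, 7};

    /* derived datatype picking 4 noncontiguous elements (stride 2) from memory */
    MPI_Datatype strided;
    MPI_Type_vector(4, 1, 2, MPI_DOUBLE, &strided);
    MPI_Type_commit(&strided);

    /* the file access itself is one contiguous region: start 0, count 4 */
    MPI_Offset start[1] = {0}, count[1] = {4};
    ncmpi_put_vara_all(ncid, varid, start, count, buf, 1, strided);

    MPI_Type_free(&strided);
    ncmpi_close(ncid);
    MPI_Finalize();
    return 0;
}

The datatype only changes how the buffer is read; the file access is the same as if a packed 4-element buffer had been passed.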
>
> Thanks for pointing out the error in the user guide. It is fixed.
>
> Wei-keng
>
> On Sep 24, 2014, at 2:30 PM, Jim Edwards wrote:
> >
> > I want to write a distributed variable to a file and the way the
> > data is distributed is fairly random with respect to the ordering on the file.
> >
> > It seems like I can do several things from each task in order to write the data -
> >
> > • I can specify several blocks of code using start and count and make multiple calls on each task to ncmpi_bput_vara_all
> > • I can define an MPI derived type and make a single call to ncmpi_bput_var_all on each task
> > • I (think I) can use ncmpi_bput_varm_all and specify an imap (btw: the pnetcdf users guide has this interface wrong)
> > Are any of these better from a performance standpoint?
> >
> > Thanks,
> >
> > --
> > Jim Edwards
> >
> > CESM Software Engineer
> > National Center for Atmospheric Research
> > Boulder, CO
>
> --
> Jim Edwards
>
> CESM Software Engineer
> National Center for Atmospheric Research
> Boulder, CO


--
Jim Edwards

CESM Software Engineer
National Center for Atmospheric Research
Boulder, CO