Using parallel write with a subset of processors

Jim Edwards jedwards at ucar.edu
Tue May 10 10:58:31 CDT 2022


I'll add a plug for the ParallelIO library
<https://github.com/NCAR/ParallelIO>, which gives you fine-grained control
over the number of I/O tasks when writing through pnetcdf or netcdf4.
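
As a rough, untested sketch (the task counts below are illustrative and
assume a job running on at least 16 MPI ranks), asking PIO to perform all
file I/O from every 4th rank looks something like this:

#include <mpi.h>
#include <pio.h>

int main(int argc, char **argv)
{
    int iosysid;
    MPI_Init(&argc, &argv);

    /* num_iotasks = 4, stride = 4, base = 0: ranks 0, 4, 8, 12 do the I/O;
     * PIO_REARR_BOX moves data from the compute tasks onto those I/O tasks */
    PIOc_Init_Intracomm(MPI_COMM_WORLD, 4, 4, 0, PIO_REARR_BOX, &iosysid);

    /* ... define decompositions, create files, and write variables here ... */

    PIOc_finalize(iosysid);
    MPI_Finalize();
    return 0;
}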


On Tue, May 10, 2022 at 9:54 AM Latham, Robert J. <robl at mcs.anl.gov> wrote:

> On Mon, 2022-05-09 at 12:51 -0700, Pascale Garaud wrote:
> > Is there a best way to do that efficiently? The code currently uses a
> > collective PUT_VAR_ALL to write the 3D dataset to file, but that
> > would not work for the slice (and hangs when I try).
>
> These collective I/O calls require all processes to participate in the
> call... but not all processes need to have data.
>
> For processes that do not have any data, you can set the 'count' to zero.
> Consider the common "put_vara_float_all" call as one example:
>
> int ncmpi_put_vara_float_all(int ncid, int varid, const MPI_Offset *start,
>                  const MPI_Offset *count, const float *op);
>
> That 'count' parameter can simply be an N-dimensional array of zeros for
> the processes with no data.
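> 
> A minimal sketch of that pattern (the helper name write_slice and its
> arguments are made up for illustration, not part of the PnetCDF API):
> 
> #include <mpi.h>
> #include <pnetcdf.h>
> 
> /* Every rank makes the collective call; ranks that own no piece of the
>  * slice pass count = {0,0,0} and contribute nothing. */
> int write_slice(int ncid, int varid, int i_have_data,
>                 const MPI_Offset my_start[3], const MPI_Offset my_count[3],
>                 const float *buf)
> {
>     MPI_Offset start[3] = {0, 0, 0};
>     MPI_Offset count[3] = {0, 0, 0};   /* zero counts: no data from this rank */
> 
>     if (i_have_data)
>         for (int d = 0; d < 3; d++) {
>             start[d] = my_start[d];
>             count[d] = my_count[d];
>         }
> 
>     /* all ranks must participate; the buffer is not touched when count is 0 */
>     return ncmpi_put_vara_float_all(ncid, varid, start, count, buf);
> }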
>
> > I could just copy the whole data for the slice into a single
> > processor, and then do an "independent" write for that processor, but
> > that doesn't seem to be very efficient.
>
> Indeed! Please don't do this; funneling everything through one rank
> serializes the write and gives up the benefit of parallel I/O.
>
> > I tried to understand how to use IPUT instead, but I am very confused
> > about the syntax / procedure, especially given that all of the
> > examples I have seen end up using all processors for the write.
>
> IPUT is a fun optimization.  Once you get the hang of the "blocking"
> versions, revisit the "non-blocking" routines, especially if you have
> writes to multiple variables.
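> 
> A rough sketch of that non-blocking pattern (the function and variable
> names here are invented for illustration): post writes to two variables
> with ncmpi_iput_vara_float, then complete them together with a single
> collective ncmpi_wait_all so the library can aggregate the requests.
> 
> #include <mpi.h>
> #include <pnetcdf.h>
> 
> int write_two_vars(int ncid, int varid_u, int varid_v,
>                    const MPI_Offset start[3], const MPI_Offset count[3],
>                    const float *u, const float *v)
> {
>     int req[2], status[2], err;
> 
>     /* iput only queues the request; no file I/O happens yet */
>     err = ncmpi_iput_vara_float(ncid, varid_u, start, count, u, &req[0]);
>     if (err != NC_NOERR) return err;
>     err = ncmpi_iput_vara_float(ncid, varid_v, start, count, v, &req[1]);
>     if (err != NC_NOERR) return err;
> 
>     /* one collective call flushes every queued request */
>     return ncmpi_wait_all(ncid, 2, req, status);
> }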
>
> ==rob
>


-- 
Jim Edwards

CESM Software Engineer
National Center for Atmospheric Research
Boulder, CO