use of MPI derived types in Flexible API

Jim Edwards jedwards at ucar.edu
Mon Sep 29 13:48:08 CDT 2014


Hi Wei-keng,

This turned out to be my error.   Things are working now.

On Fri, Sep 26, 2014 at 4:00 PM, Wei-keng Liao <wkliao at eecs.northwestern.edu> wrote:

>
> Do you have more error messages from the crash? coredump trace?
> Could you send us a small program that can reproduce the crash?
>
> Wei-keng
>
> On Sep 26, 2014, at 4:34 PM, Jim Edwards wrote:
>
> >
> > Making progress, but I sometimes have num=0 on some IO tasks, and this is
> > causing a crash in pnetcdf. I also tried setting num=1 and count=0, with
> > the same effect.
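A note on the num=0 case above: the varn APIs are collective, so a rank with nothing to write still has to make the call. The sketch below is not from the thread; it shows the usual zero-participation pattern, assuming num = 0 with NULL start/count lists and a NULL buffer is accepted. ncid, varid, num, starts, counts, buf, bufcount, and have_nothing_to_write are all placeholders.

    /* Hypothetical sketch: every rank joins the collective call, even one
     * with nothing to write (assumes NULL lists are accepted when num = 0). */
    int err;
    if (have_nothing_to_write)
        err = ncmpi_put_varn_all(ncid, varid, 0, NULL, NULL,
                                 NULL, 0, MPI_FLOAT);
    else
        err = ncmpi_put_varn_all(ncid, varid, num, starts, counts,
                                 buf, bufcount, MPI_FLOAT);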
> >
> > On Fri, Sep 26, 2014 at 10:06 AM, Wei-keng Liao <wkliao at eecs.northwestern.edu> wrote:
> > Hi, Jim
> >
> > The C documentation is wrong. I will fix that. Thanks for catching the
> > mistake.
> > Please use it as described in pnetcdf.h:
> >
> > int ncmpi_put_varn_all        (int               ncid,
> >                                int               varid,
> >                                int               num,
> >                                const MPI_Offset  starts[num][],
> >                                const MPI_Offset  counts[num][],
> >                                const void       *bufs,
> >                                MPI_Offset        bufcounts,
> >                                MPI_Datatype      buftypes);
> >
> > FYI, there are a few examples in C, C++, and Fortran under examples/:
> >   ./C/put_varn_float.c
> >   ./CXX/put_varn_float.cpp
> >   ./F77/put_varn_real.f
> >   ./F90/put_varn_real.f90
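In the same spirit as those examples, here is a minimal sketch (not part of the original message) of calling the flexible ncmpi_put_varn_all declared above: starts and counts are per-request arrays, while the last three arguments describe a single contiguous local buffer. It assumes pnetcdf.h is included, the file is open, and ncid/varid are valid; the offsets and sizes are made up.

    /* Write two noncontiguous regions of a 1-D variable from one
     * contiguous local buffer of 5 floats. */
    MPI_Offset  start0[1] = {0},  count0[1] = {3};
    MPI_Offset  start1[1] = {10}, count1[1] = {2};
    MPI_Offset *starts[2] = {start0, start1};
    MPI_Offset *counts[2] = {count0, count1};
    float buf[5] = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f};   /* 3 + 2 values */

    int err = ncmpi_put_varn_all(ncid, varid, 2, starts, counts,
                                 buf, 5, MPI_FLOAT);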
> >
> >
> > Wei-keng
> >
> > On Sep 26, 2014, at 10:52 AM, Jim Edwards wrote:
> >
> > > Wei-keng,
> > >
> > >
> > > There is a discrepancy between the documentation and the code with
> > > respect to the varn functions. The documentation at
> > > http://cucis.ece.northwestern.edu/projects/PNETCDF/doc/pnetcdf-c/ncmpi_005fput_005fvarn_005f_003ctype_003e.html
> > > has:
> > >
> > > int ncmpi_put_varn_all        (int               ncid,
> > >                                int               varid,
> > >                                int               num,
> > >                                const MPI_Offset  starts[num][],
> > >                                const MPI_Offset  counts[num][],
> > >                                const void       *bufs[num],
> > >                                MPI_Offset        bufcounts[num],
> > >                                MPI_Datatype      buftypes[num]);
> > >
> > >
> > >
> > > While the source trunk has:
> > >
> > >
> > > int ncmpi_put_varn_all(int ncid, int varid, int num,
> > >                        MPI_Offset* const starts[],
> > >                        MPI_Offset* const counts[],
> > >                        const void *buf,
> > >                        MPI_Offset bufcount,
> > >                        MPI_Datatype buftype);
> > >
> > > The last three arguments are not arrays.
> > >
> > > - Jim
> > >
> > >
> > >
> > > On Wed, Sep 24, 2014 at 6:44 PM, Wei-keng Liao <wkliao at eecs.northwestern.edu> wrote:
> > >
> > > If the data is contiguous in memory, then there is no need to use varm
> > > or flexible APIs.
> > >
> > > There is a new set of APIs named varn (available in PnetCDF version
> > > 1.4.0 and later), e.g.
> > >     ncmpi_put_varn_float_all()
> > > It allows a single API call to write a contiguous buffer to a set of
> > > noncontiguous places in the file.
> > > Each noncontiguous place is specified by a (start, count) pair. The
> > > start-count pairs can be arbitrary in file offsets (i.e. in unsorted
> > > order of offsets).
> > > Please note this API family is blocking. There is no nonblocking
> > > counterpart.
> > >
> > > In terms of performance, this call is equivalent to making multiple
> > > iput or bput calls.
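To make that concrete, here is a short sketch (not from the original message) of the typed variant named above, ncmpi_put_varn_float_all, which drops the bufcount/buftype arguments of the flexible form; the handles, offsets, and sizes are placeholders.

    /* One collective call writes 3 values and then 2 values to two
     * noncontiguous places in a 1-D variable, all taken in order from
     * one contiguous local buffer. */
    MPI_Offset  start_a[1] = {0},  count_a[1] = {3};
    MPI_Offset  start_b[1] = {10}, count_b[1] = {2};
    MPI_Offset *starts[2]  = {start_a, start_b};
    MPI_Offset *counts[2]  = {count_a, count_b};
    float buf[5] = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f};

    int err = ncmpi_put_varn_float_all(ncid, varid, 2, starts, counts, buf);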
> > >
> > > Wei-keng
> > >
> > > On Sep 24, 2014, at 6:58 PM, Jim Edwards wrote:
> > >
> > > > Data is contiguous in memory, but data on a given task maps to
> > > > various noncontiguous points in the file. I can guarantee that the data
> > > > in memory on a given MPI task is in monotonically increasing order with
> > > > respect to offsets into the file, but not more than that.
> > > >
> > > > On Wed, Sep 24, 2014 at 3:43 PM, Wei-keng Liao <wkliao at eecs.northwestern.edu> wrote:
> > > > Hi, Jim
> > > >
> > > > Do you mean the local I/O buffer contains a list of non-contiguous
> > > > data in memory?
> > > > Or do you mean "distributed" as in data partitioned across multiple
> > > > MPI processes?
> > > >
> > > > The varm APIs and the "flexible" APIs that take an MPI derived
> > > > datatype argument are for users to describe non-contiguous data in
> > > > the local I/O buffer. The imap and MPI datatype arguments have no
> > > > effect on the data access in files. So, I need to know which case
> > > > you are referring to first.
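As an illustration of that distinction, here is a hypothetical sketch (not from the thread) in which an MPI derived datatype describes a non-contiguous local buffer, e.g. the interior region inside a ghost-cell halo, while start/count alone determine the file access. ncid, varid, and all sizes are placeholders.

    /* Local 10x10 array with a 1-cell halo; only the interior 8x8 block
     * is written.  The derived type describes the memory layout; the
     * file region is given solely by start/count. */
    float local[10][10];                              /* assume filled elsewhere */
    MPI_Datatype interior;
    MPI_Type_vector(8, 8, 10, MPI_FLOAT, &interior);  /* 8 rows of 8, stride 10 */
    MPI_Type_commit(&interior);

    MPI_Offset start[2] = {0, 0};
    MPI_Offset count[2] = {8, 8};
    int err = ncmpi_put_vara_all(ncid, varid, start, count,
                                 &local[1][1], 1, interior);
    MPI_Type_free(&interior);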
> > > >
> > > > Thanks for pointing out the error in the user guide. It is fixed.
> > > >
> > > > Wei-keng
> > > >
> > > > On Sep 24, 2014, at 2:30 PM, Jim Edwards wrote:
> > > >
> > > > > I want to write a distributed variable to a file, and the way the
> > > > > data is distributed is fairly random with respect to the ordering in
> > > > > the file.
> > > > >
> > > > > It seems like I can do several things from each task in order to
> > > > > write the data:
> > > > >
> > > > >       • I can specify several blocks using start and count and make
> > > > > multiple calls on each task to ncmpi_bput_vara_all
> > > > >       • I can define an MPI derived type and make a single call to
> > > > > ncmpi_bput_var_all on each task
> > > > >       • I (think I) can use ncmpi_bput_varm_all and specify an
> > > > > imap (btw: the pnetcdf users guide has this interface wrong)
> > > > > Are any of these better from a performance standpoint?
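To make the first of these options concrete, here is a rough sketch (hypothetical, not from the thread) of posting several buffered nonblocking puts and flushing them with one collective wait; the handles, block offsets, and sizes are made up, and error checking is omitted.

    /* Post one bput per (start, count) block, then flush all of them in
     * a single collective ncmpi_wait_all.  bput requires attaching an
     * internal buffer at least as large as the pending data. */
    MPI_Offset start[3][1] = {{0}, {10}, {25}};
    MPI_Offset count[3][1] = {{3}, {2},  {4}};
    float buf[9];                        /* 3+2+4 values, assume filled elsewhere */
    int i, off = 0, req[3], st[3];

    ncmpi_buffer_attach(ncid, 9 * sizeof(float));
    for (i = 0; i < 3; i++) {
        ncmpi_bput_vara_float(ncid, varid, start[i], count[i],
                              buf + off, &req[i]);
        off += (int)count[i][0];
    }
    ncmpi_wait_all(ncid, 3, req, st);    /* collective flush of the 3 requests */
    ncmpi_buffer_detach(ncid);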
> > > > >
> > > > > Thanks,
> > > > >
> > > > >
> > > > >
> > > > >


-- 
Jim Edwards

CESM Software Engineer
National Center for Atmospheric Research
Boulder, CO