use of mpi derived data types in nfmpi_iput_vara

Jim Edwards edwards.jim at gmail.com
Tue Feb 12 15:41:00 CST 2013


Hi Wei-keng,

Thank you - that example was just what I needed.  (That and the pnetcdf
update.)

I think that you have an error in your flex_f.F example, though. Although
arrays start at 1 in Fortran, and start and count are 1-based with respect
to calling pnetcdf, the start array passed to mpi_type_create_subarray
should be 0-based, so I think that

 array_of_starts = ghost_len   not ghost_len + 1 as you've written.

Jim

On Tue, Feb 12, 2013 at 1:13 PM, Wei-keng Liao
<wkliao at ece.northwestern.edu> wrote:

> Hi, Jim,
>
> I just now fixed a bug that is related to using the flexible APIs.
> Please see r1159. However, the error message you got does not seem
> to relate to this bug. Anyway, if you figure out the MPI derived data
> type issue, you will still encounter this bug. So, please apply this
> patch.
>
> I also added 2 new examples (C and Fortran) for using the flexible APIs.
> See examples/flex.c and examples/flex_f.F. They both call
> MPI_Type_create_subarray() to create their buffer type.
> Please note that in Fortran, the array indices for MPI functions start
> with 1.
>
> Wei-keng
>
> On Feb 12, 2013, at 1:47 PM, Jim Edwards wrote:
>
> > Hi Rob,
> >
> > I want to make sure that my MPI datatype is correct, so I am working on
> > an MPI-IO binary version of the same case. Once I have that working I'll
> > try pnetcdf again, and if it's still a problem I'll send you the
> > datatype dump.
> >
> > - Jim
> >
> > On Tue, Feb 12, 2013 at 9:26 AM, Rob Latham <robl at mcs.anl.gov> wrote:
> > On Tue, Feb 12, 2013 at 08:49:20AM -0700, Jim Edwards wrote:
> > > Are there any limitations to the data type passed to nfmpi_iput_vara?
> > > I am trying to create a type using mpi_type_create_subarray and pass
> > > it to nfmpi_iput_vara, and I am getting an error with traceback:
> > >
> > > #10 0x00002b06f5f90ce3 in PMPI_Pack () from
> > > /opt/ibmhpc/pe1209/mpich2/gnu/lib64/libmpich.so.3
> > > #11 0x00000000005c9d2f in ncmpii_data_repack ()
> > > #12 0x00000000005b4532 in ncmpii_igetput_varm ()
> > > #13 0x00000000005969ed in ncmpi_iput_vara ()
> > >
> > >
> > > Is this a possible bug in parallel-netcdf, or are my assumptions
> > > about usage incorrect?
> >
> > Wow, an error from MPI_Pack... and one that causes a traceback.
> >
> > and it's IBM's MPI_Pack, sort of, by way of MPICH.
> >
> > So, you've caused some surprise here.
> >
> > Can you share the type with us?  Or, if it's hard to capture (because
> > it's generated algorithmically) you might, depending on how IBM built
> > their MPI, have access to the C routine MPIDU_Datatype_debug:
> >
> >  MPIDU_Datatype_debug(MPI_Datatype type, int array_ct)
> >
> > (128 should be big enough for array_ct)
> >
> > and that should spit out a bunch of information about the datatype:
> > enough that I could re-create it and see if other MPI implementations
> > choke when packing it.
> >
> > ==rob
> >
> > --
> > Rob Latham
> > Mathematics and Computer Science Division
> > Argonne National Lab, IL USA
> >
> >
> >
> > --
> > Jim Edwards
> >
> > CESM Software Engineering Group
> > National Center for Atmospheric Research
> > Boulder, CO
> > 303-497-1842
>
>


-- 

Jim Edwards


More information about the parallel-netcdf mailing list