Issue with "integer" arguments in Fortran API for PNetCDF

John Michalakes john at michalakes.us
Wed Oct 25 02:06:50 CDT 2006


PS... There is a problem with the pnetcdf.inc file itself. This file
defines default-INTEGER parameters such as NF_UNLIMITED that may be passed
as arguments to NFMPI_* routines whose corresponding dummy arguments are
declared INTEGER(KIND=MPI_OFFSET_KIND). An example is the 3rd argument of:

FORTRAN_API int FORT_CALL nfmpi_def_dim_ ( int *v1, char *v2
FORT_MIXED_LEN(d2), MPI_Offset *v3, MPI_Fint *v4 FORT_END_LEN(d2) )

Since MPI_Offset is 64 bits when compiled with OBJECT_MODE 64 on the IBM,
passing NF_UNLIMITED (a 32-bit INTEGER) as the third argument is incorrect.
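
A minimal sketch of the workaround, assuming the standard nfmpi_def_dim
Fortran interface and that ncid comes from an earlier nfmpi_create call
(the variable and dimension names here are mine):

      INCLUDE 'mpif.h'
      INCLUDE 'pnetcdf.inc'
      INTEGER stat, ncid, dimid
      INTEGER(KIND=MPI_OFFSET_KIND) ulen
      ! NF_UNLIMITED in pnetcdf.inc is a default (32-bit) INTEGER, so
      ! widen it into an MPI_OFFSET_KIND variable before the call.
      ulen = NF_UNLIMITED
      stat = NFMPI_DEF_DIM( ncid, 'Time', ulen, dimid )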

John

> -----Original Message-----
> From: owner-parallel-netcdf at mcs.anl.gov
> [mailto:owner-parallel-netcdf at mcs.anl.gov]On Behalf Of John Michalakes
> Sent: Tuesday, October 24, 2006 3:30 PM
> To: parallel-netcdf at mcs.anl.gov
> Subject: Issue with "integer" arguments in Fortran API for PNetCDF
>
>
> I'm having a problem adding parallel NetCDF as an option in the WRF
> (Weather Research and Forecasting) model I/O API. WRF currently supports
> non-parallel NetCDF, and we're now trying to extend the model to use the
> new parallel NetCDF code. I have run into a problem with some of the
> arguments in the parallel-NetCDF API:
>
> The original NetCDF Fortran API defines lengths, starts, and other
> arguments as INTEGER. PnetCDF recasts some (but not all) of these
> arguments as MPI_Offset. We compile WRF with OBJECT_MODE 64 on the IBMs,
> where KIND=MPI_OFFSET_KIND works out to 8 bytes. Thus, the calls in WRF
> that pass these arguments as 32-bit INTEGERs from Fortran are causing
> PnetCDF to see garbage in the high four bytes of the dummy argument. For
> example, the length argument to NFMPI_DEF_DIM ends up being 73 plus a
> very large garbage value instead of 73.
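>
> A stripped-down illustration of the mismatch (this is not the actual WRF
> code; the names here are mine):
>
>       INCLUDE 'mpif.h'
>       INCLUDE 'pnetcdf.inc'
>       INTEGER stat, ncid, dimid
>       INTEGER dlen                         ! what WRF passes today
>       INTEGER(KIND=MPI_OFFSET_KIND) dlen8  ! what PnetCDF expects
>       dlen  = 73
>       dlen8 = 73
>       ! 4-byte actual passed to an 8-byte dummy: PnetCDF reads four
>       ! extra bytes of garbage along with the 73.
>       stat = NFMPI_DEF_DIM( ncid, 'my_dim', dlen,  dimid )
>       ! Correctly sized argument: PnetCDF sees exactly 73.
>       stat = NFMPI_DEF_DIM( ncid, 'my_dim', dlen8, dimid )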
>
> I was hoping for a relatively quick drop-in of PnetCDF into the WRF I/O
> API, but that now seems unlikely. There is a large amount of NetCDF
> interface code in WRF that will need to be gone through. Alternatively, I
> could write a wrapper library for all of these routines.
>
> Has anyone done this already? What's needed is for each wrapper routine
> to accept INTEGER arguments, copy them into INTEGER(KIND=MPI_OFFSET_KIND)
> variables before calling the actual NFMPI_* routine in PnetCDF, and (in
> the case of returned values) copy them back to INTEGER again. A rough
> sketch of what I mean follows.
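>
> Something along these lines, shown here for NFMPI_DEF_DIM (a sketch only;
> the wrapper name and argument names are mine):
>
>       SUBROUTINE WRAP_NFMPI_DEF_DIM( ncid, name, dlen, dimid, stat )
>         IMPLICIT NONE
>         INCLUDE 'mpif.h'
>         INCLUDE 'pnetcdf.inc'
>         INTEGER, INTENT(IN)           :: ncid, dlen
>         CHARACTER(LEN=*), INTENT(IN)  :: name
>         INTEGER, INTENT(OUT)          :: dimid, stat
>         INTEGER(KIND=MPI_OFFSET_KIND) :: dlen8
>         ! Widen the default INTEGER length to MPI_OFFSET_KIND before
>         ! handing it to the real PnetCDF routine.
>         dlen8 = dlen
>         stat = NFMPI_DEF_DIM( ncid, name, dlen8, dimid )
>       END SUBROUTINE WRAP_NFMPI_DEF_DIM
>
> WRF would then call WRAP_NFMPI_DEF_DIM with ordinary INTEGER arguments,
> much as it calls NF_DEF_DIM today.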
>
> Not arguing, but curious: since PNetCDF is supposed to be a parallel
> implementation of the NetCDF API, why was the API altered this way to use
> arguments of type MPI_Offset instead of INTEGER? At least for what I'm
> trying to do, it is a significant impediment.
>
> I really appreciate that parallel NetCDF is being developed. It is very
> much needed, especially now as we move toward petascale computing. Kudos
> to the developers.
>
> John
>
> -----------------------------------------------------------
> John Michalakes, Software Engineer        michalak at ucar.edu
> NCAR, MMM Division                   voice: +1 303 497 8199
> 3450 Mitchell Lane                     fax: +1 303 497 8181
> Boulder, Colorado 80301 U.S.A.        cell: +1 720 209 2320
>         http://www.mmm.ucar.edu/individual/michalakes
> -----------------------------------------------------------
>
