Issue with "integer" arguments in Fortran API for PNetCDF

William Gropp gropp at mcs.anl.gov
Wed Oct 25 11:21:04 CDT 2006


Actually, MPI_Offset can be 64 bits, even on 32-bit platforms, if the
file system accepts 64-bit offsets (e.g., has something like lseek64).

What would you suggest as a name that indicated that the offset value  
was a Fortran integer rather than an MPI_Offset-sized integer?

Bill

On Oct 25, 2006, at 11:05 AM, John Michalakes wrote:

> That would be helpful. But I think putting 32 in the name would be
> misleading, since this is only an issue on 64-bit compiles. (On
> 32-bit compiles, MPI_Offset is the same length as a Fortran INTEGER.)
>
> John
>
>> -----Original Message-----
>> From: William Gropp [mailto:gropp at mcs.anl.gov]
>> Sent: Wednesday, October 25, 2006 9:51 AM
>> To: john at michalakes.us
>> Cc: parallel-netcdf at mcs.anl.gov
>> Subject: Re: Issue with "integer" arguments in Fortran API for PNetCDF
>>
>>
>> As the Fortran interfaces are generated with a script, we could
>> create an alternate version that was closer to the serial NetCDF
>> interface, using a different prefix to avoid confusion (e.g., maybe
>> nf32mpi_xxx).  Would that help?  What do others think of that  
>> approach?
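>>
>> A sketch of how the two bindings might differ for one routine (the
>> explicit interfaces below are for illustration only; nf32mpi_def_dim
>> is just the suggested prefix applied to nfmpi_def_dim):
>>
>>   interface
>>     ! existing PnetCDF binding: the length has the MPI_Offset kind
>>     integer function nfmpi_def_dim(ncid, name, dlen, dimid)
>>       include 'mpif.h'   ! for MPI_OFFSET_KIND; interface bodies
>>                          ! do not see the host scope
>>       integer, intent(in)                       :: ncid
>>       character(len=*), intent(in)              :: name
>>       integer(kind=MPI_OFFSET_KIND), intent(in) :: dlen
>>       integer, intent(out)                      :: dimid
>>     end function nfmpi_def_dim
>>     ! alternate binding: the length is a default INTEGER, matching
>>     ! the serial NetCDF nf_def_dim
>>     integer function nf32mpi_def_dim(ncid, name, dlen, dimid)
>>       integer, intent(in)          :: ncid
>>       character(len=*), intent(in) :: name
>>       integer, intent(in)          :: dlen
>>       integer, intent(out)         :: dimid
>>     end function nf32mpi_def_dim
>>   end interface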
>>
>> Bill
>>
>> On Oct 25, 2006, at 2:06 AM, John Michalakes wrote:
>>
>>> PS... There is a problem with the pnetcdf.inc file itself. This
>>> file defines INTEGER parameters such as NF_UNLIMITED, which may be
>>> passed as arguments to NFMPI_* routines whose corresponding dummy
>>> arguments are declared INTEGER(KIND=MPI_OFFSET_KIND). An example is
>>> the third argument of:
>>>
>>> FORTRAN_API int FORT_CALL nfmpi_def_dim_ ( int *v1, char *v2
>>> FORT_MIXED_LEN(d2), MPI_Offset *v3, MPI_Fint *v4 FORT_END_LEN(d2) )
>>>
>>> Since MPI_Offset is 64 bits when compiled with OBJECT_MODE 64 on
>>> the IBM, passing NF_UNLIMITED (a 32-bit INTEGER) as the third
>>> argument is incorrect.
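>>>
>>> A minimal sketch of a workaround at the call site (free-form
>>> Fortran; the variable names are illustrative, and nfmpi_def_dim is
>>> assumed to be declared INTEGER in pnetcdf.inc):
>>>
>>>   include 'mpif.h'      ! defines MPI_OFFSET_KIND
>>>   include 'pnetcdf.inc' ! defines NF_UNLIMITED as a default INTEGER
>>>   integer :: ierr, ncid, dimid
>>>   ! widen the 32-bit constant to the kind the dummy argument has:
>>>   ierr = nfmpi_def_dim(ncid, 'time', &
>>>          int(NF_UNLIMITED, MPI_OFFSET_KIND), dimid)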
>>>
>>> John
>>>
>>>> -----Original Message-----
>>>> From: owner-parallel-netcdf at mcs.anl.gov
>>>> [mailto:owner-parallel-netcdf at mcs.anl.gov] On Behalf Of John Michalakes
>>>> Sent: Tuesday, October 24, 2006 3:30 PM
>>>> To: parallel-netcdf at mcs.anl.gov
>>>> Subject: Issue with "integer" arguments in Fortran API for PNetCDF
>>>>
>>>>
>>>> I'm having a problem adapting parallel NetCDF as an option in the
>>>> WRF (Weather Research and Forecasting) model I/O API. WRF
>>>> currently supports non-parallel NetCDF, and we're now trying to
>>>> extend the model to use the new parallel NetCDF code. I have run
>>>> into a problem with some of the arguments in the parallel-NetCDF
>>>> API:
>>>>
>>>> The original NetCDF Fortran API defines lengths, starts, and other
>>>> arguments as INTEGER. PnetCDF recasts some (but not all) of these
>>>> arguments as MPI_Offset. We compile WRF with OBJECT_MODE 64 on the
>>>> IBMs, where MPI_OFFSET_KIND works out to 8 bytes. Thus, the calls
>>>> in WRF that pass these arguments as 32-bit INTEGERs from Fortran
>>>> cause PNetCDF to see garbage in the high four bytes of the dummy
>>>> argument. For example, the length argument to NFMPI_DEF_DIM ends
>>>> up being 73 plus a very large garbage value instead of 73.
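>>>>
>>>> A minimal sketch of what each call site has to look like for the
>>>> 64-bit build to behave (the names dlen, ncid, and 'west_east' are
>>>> illustrative):
>>>>
>>>>   integer(kind=MPI_OFFSET_KIND) :: dlen
>>>>   integer :: ierr, ncid, dimid
>>>>   dlen = 73   ! all 8 bytes are defined, so no garbage high bytes
>>>>   ierr = nfmpi_def_dim(ncid, 'west_east', dlen, dimid)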
>>>>
>>>> I was hoping for a relatively quick drop-in of PnetCDF into the
>>>> WRF I/O API, but this now seems unlikely. There is a large amount
>>>> of NetCDF interface code in WRF that will need to be gone through.
>>>> Alternatively, I could write a wrapper library for all these
>>>> routines.
>>>>
>>>> Has anyone done this already? What's needed is for the wrapper
>>>> routine to accept INTEGER arguments, copy them to
>>>> INTEGER(KIND=MPI_OFFSET_KIND) for use as arguments to the actual
>>>> NFMPI_* routines in PNetCDF, and (in the case of returned values)
>>>> copy them back to INTEGER again.
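>>>>
>>>> A sketch of what one such wrapper might look like (the name
>>>> nfmpi_def_dim_wrap and its structure are illustrative, not an
>>>> existing routine in either library):
>>>>
>>>>   integer function nfmpi_def_dim_wrap(ncid, name, dimlen, dimid)
>>>>     implicit none
>>>>     include 'mpif.h'
>>>>     integer, intent(in)           :: ncid
>>>>     character(len=*), intent(in)  :: name
>>>>     integer, intent(in)           :: dimlen  ! default INTEGER from the caller
>>>>     integer, intent(out)          :: dimid
>>>>     integer(kind=MPI_OFFSET_KIND) :: dimlen8
>>>>     integer, external             :: nfmpi_def_dim
>>>>     dimlen8 = dimlen   ! copy up to the MPI_Offset-sized kind
>>>>     nfmpi_def_dim_wrap = nfmpi_def_dim(ncid, name, dimlen8, dimid)
>>>>   end function nfmpi_def_dim_wrap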
>>>>
>>>> Not arguing, but curious: since PNetCDF is supposed to be a
>>>> parallel implementation of the NetCDF API, why was the API altered
>>>> this way to use arguments of type MPI_Offset instead of INTEGER?
>>>> At least for what I'm trying to do, it is a significant impediment.
>>>>
>>>> I really appreciate that parallel NetCDF is being developed. It is
>>>> very much needed, especially now as we try to move to petascale.
>>>> Kudos to the developers.
>>>>
>>>> John
>>>>
>>>> -----------------------------------------------------------
>>>> John Michalakes, Software Engineer        michalak at ucar.edu
>>>> NCAR, MMM Division                   voice: +1 303 497 8199
>>>> 3450 Mitchell Lane                     fax: +1 303 497 8181
>>>> Boulder, Colorado 80301 U.S.A.        cell: +1 720 209 2320
>>>>         http://www.mmm.ucar.edu/individual/michalakes
>>>> -----------------------------------------------------------
>>>>
>>>
>>
>



