Parallel netCDF F90 bindings

Marty Barnaby mlbarna at sandia.gov
Wed Mar 12 10:59:00 CDT 2008


I've just completed a modification to the NCMPI option of IOR, 
specifically so I could get benchmark results writing an arbitrarily 
large file to a substantial Lustre FS from only 128 processors.

IOR is written in C, and, due to my lack of recent experience with 
Fortran, I stuck with that.

I do have some interest in migrating to Fortran, however, through my 
involvement with the CAM code from UCAR, where they have written a new 
I/O library on top of PnetCDF.

Could you describe the research or code on which you are working? I'm 
particularly interested in whether you have an abstract I/O layer, or 
whether your I/O is integrated throughout your code.


Marty Barnaby


robl at mcs.anl.gov wrote:
> Hi Brian
>
> I know you sent this to me a month ago, but I've been pretty busy and
> haven't had a chance to look at this yet.  Let's see if the wider
> parallel-netcdf mailing list has some feedback for you.
>
>
> On Tue, Feb 12, 2008 at 09:27:50PM -0600, bdtaylo1 at uiuc.edu wrote:
>   
>> Mr. Latham,
>>
>> Hello!  I am using the parallel netCDF library in a couple of
>> Fortran 90 CFD codes I've written for my research.  I wrote a F90
>> module that wraps a large portion of the F77 API provided by
>> the parallel netCDF library.  I have done my best to follow the serial
>> netCDF F90 API in putting together this module.  I thought this
>> might be of interest to you.
>>
>> There are a good number of issues that need to be addressed, such as
>> how to define the required constants (I just copied them out of
>> "pnetcdf.inc"), the non- standard but widely supported usage of
>> byte-storage variable specification (integer*1, integer*2, real*4,
>> etc), and how to obtain the MPI_OFFSET_TYPE constant (either via
>> configure/preprocessor tricks, via the "use mpi" statement as I have
>> done, or via "include 'mpif.h'").  If you're interested, I can give
>> you a complete rundown of the issues I've found.
>>
>> Feel free to contact me if you have any questions or if I can be of
>> further assistance.
>>
>> Thanks, Brian Taylor
>>     
>
> Thanks for the starting point.  Let's see if we can address the
> remaining issues.  Has anybody else on this list rolled their own f90
> bindings?
>
> ==rob
>
>
> --
> Rob Latham
> Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
> Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
>   
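For anyone following along, here is a minimal sketch of the kind of F90 wrapper module Brian describes: a function that mirrors the serial netCDF F90 calling style (status as the return value) on top of the F77 PnetCDF interface. The module name `pnetcdf90` and the wrapper name `nf90mpi_create` are my own illustrative choices, not the actual bindings, and the constant-definition issue is sidestepped here by including "pnetcdf.inc" directly:

```fortran
! Hypothetical sketch of an F90 wrapper over the PnetCDF F77 API.
! Names pnetcdf90 and nf90mpi_create are illustrative assumptions.
module pnetcdf90
  use mpi                    ! supplies MPI_OFFSET_KIND, MPI_INFO_NULL, etc.
  implicit none
  include 'pnetcdf.inc'      ! F77 constants (NF_NOERR, NF_CLOBBER, ...)

contains

  ! Thin wrapper over the F77 nfmpi_create, returning the status
  ! code the way the serial netCDF F90 API does.
  function nf90mpi_create(comm, path, cmode, info, ncid) result(status)
    integer,          intent(in)  :: comm, cmode, info
    character(len=*), intent(in)  :: path
    integer,          intent(out) :: ncid
    integer                       :: status
    status = nfmpi_create(comm, path, cmode, info, ncid)
  end function nf90mpi_create

end module pnetcdf90
```

A caller would then check the result against NF_NOERR, just as with the serial F90 API, which keeps application code portable between the two.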

