Writing large files from pnetcdf on IBM
Robert Latham
robl at mcs.anl.gov
Tue Nov 28 10:12:44 CST 2006
On Tue, Nov 28, 2006 at 05:50:00AM -0700, John Michalakes wrote:
> I'm running into problems creating and writing large files (> 2GB)
> from Fortran on an IBM Power5 (bluevista.ucar.edu). When the output
> file is less than 2GB it writes okay. When output is greater than
> 2GB, ncdump reports that the file is not a NetCDF file and od shows
> that a large section at the beginning of the file
> is all zeros.
What happens if you use the 'ncmpidump' tool that comes with
parallel-netcdf? (I'm curious whether the problem is with the on-disk
file or with the dumping tool.)
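ncmpidump takes the same basic flags as ncdump, so something like the
following (with your real output file in place of 'my_netcdf_file.nc')
should print just the header:

$ ncmpidump -h my_netcdf_file.nc

If ncmpidump reads the file fine while ncdump rejects it, the file on
disk is probably intact and the serial ncdump just doesn't understand
the 64-bit-offset format (support for it only showed up in netcdf
3.6.0).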
> I'm creating the file:
>
> stat = NFMPI_CREATE(Comm, FileName, IOR(NF_CLOBBER, NF_64BIT_OFFSET),
> MPI_INFO_NULL, DH%NCID)
>
> The value of stat is zero.
>
> Code is compiled with OBJECT_MODE set to 64, and I'm pretty sure all the
> relevant offset types are 64-bit. I'm using the version of parallel netcdf in
> bluevista:/contrib/netcdf. The file pnetcdf.h contains the string:
>
> /* "$Id: pnetcdf.h,v 1.20 2005/10/22 16:31:31 jianwli Exp $" */
>
> Any ideas?
One thing to check is the type of netcdf file that actually ends up on
disk. You are already passing in the appropriate flag, but I'd like to
take a peek at the first few bytes of the datafile header to confirm
the flag took effect. One of these approaches should do the trick:
$ xxd my_netcdf_file.nc | head
$ od -x my_netcdf_file.nc | head
$ hexdump my_netcdf_file.nc | head
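In any of those dumps, the first four bytes tell you the file format: a
classic netcdf file begins with the bytes 43 44 46 01 ("CDF" followed
by a 1), while a 64-bit-offset (CDF-2) file begins with 43 44 46 02.
If the fourth byte is a 2, the NF_64BIT_OFFSET flag took effect and the
problem is somewhere else.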
Let us know what that gives you.
If you have access to the 'ncconfig.h' header, you should see
'#define SIZEOF_OFF_T 8' in there, as well as
'#define SIZEOF_MPI_OFFSET 8'.
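Another quick check from the Fortran side: the sketch below (untested,
with 'test64.nc' as a stand-in file name) prints MPI_OFFSET_KIND and
creates a 64-bit-offset file, checking every status value instead of
assuming success:

      program check64
      implicit none
      include 'mpif.h'
      include 'pnetcdf.inc'
      integer stat, ncid, ierr

      call MPI_Init(ierr)

c     On most compilers the kind number equals the size in bytes, so
c     this should print 8 when MPI_Offset is a 64-bit type.
      print *, 'MPI_OFFSET_KIND = ', MPI_OFFSET_KIND

c     Create a CDF-2 (64-bit offset) file, just like your code does,
c     and report the library's error string on any failure.
      stat = NFMPI_CREATE(MPI_COMM_WORLD, 'test64.nc',
     &                    IOR(NF_CLOBBER, NF_64BIT_OFFSET),
     &                    MPI_INFO_NULL, ncid)
      if (stat .ne. NF_NOERR) then
         print *, 'create: ', nfmpi_strerror(stat)
      endif

      stat = NFMPI_ENDDEF(ncid)
      if (stat .ne. NF_NOERR) then
         print *, 'enddef: ', nfmpi_strerror(stat)
      endif

      stat = NFMPI_CLOSE(ncid)
      if (stat .ne. NF_NOERR) then
         print *, 'close: ', nfmpi_strerror(stat)
      endif

      call MPI_Finalize(ierr)
      end

If that prints 8 and creates a readable file, the library and build
environment are fine and we can look harder at the application code.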
Thanks for the report. We should be able to get this figured out
shortly.
==rob
--
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B