Writing large files from pnetcdf on IBM

Robert Latham robl at mcs.anl.gov
Tue Nov 28 11:18:58 CST 2006


On Tue, Nov 28, 2006 at 09:45:32AM -0700, John Michalakes wrote:

> bluevista:/ptmp/michalak/run_hemi 157 > od -x wrfrst_d02_2005-04-02_18_00_00 |
> head
> 0000000  4801 6000 0000 0001 0000 0000 0000 0000
> 0000020  0000 0000 0000 0000 0000 0000 0000 0000

Yeah, that file is definitely scrambled somehow.

> (For comparison, with a file known to work:
> 
> bluevista:/ptmp/michalak/run_hemi 158 > od -x wrfrst_d01_2005-04-03_00_00_00 |
> head
> 0000000  4344 4601 0000 0001 0000 000a 0000 0009

So you can see the 'CDF1' magic bytes (43 44 46 01) at the start of this file.
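
If you want to check a file programmatically instead of eyeballing od
output, here's a quick throwaway sketch (my own, not anything shipped
with pnetcdf; the file name comes from the command line) that reads the
first four bytes and looks for the CDF-1/CDF-2 magic numbers:

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    unsigned char magic[4];
    FILE *fp;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    fp = fopen(argv[1], "rb");
    if (fp == NULL || fread(magic, 1, 4, fp) != 4) {
        perror(argv[1]);
        return 1;
    }
    fclose(fp);

    /* classic netCDF files start with 'C' 'D' 'F' and a version byte:
     * 0x01 for CDF-1, 0x02 for CDF-2 (64-bit offset) */
    if (memcmp(magic, "CDF", 3) == 0 && magic[3] == 1)
        printf("CDF-1 (classic) file\n");
    else if (memcmp(magic, "CDF", 3) == 0 && magic[3] == 2)
        printf("CDF-2 (64-bit offset) file\n");
    else
        printf("unrecognized header: %02x %02x %02x %02x\n",
               magic[0], magic[1], magic[2], magic[3]);
    return 0;
}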
 
> /* The number of bytes in a off_t */
> #define SIZEOF_OFF_T 8
> 
> /* The number of bytes in an MPI_Offset */
> #define SIZEOF_MPI_OFFSET 8
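
Those look sane.  If you want to double-check what the build actually
sees at run time, a throwaway test like this (my own sketch, not part
of pnetcdf or WRF) will print the sizes:

#include <stdio.h>
#include <sys/types.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    /* both should print 8 on a 64-bit, large-file-capable build */
    printf("sizeof(off_t) = %zu, sizeof(MPI_Offset) = %zu\n",
           sizeof(off_t), sizeof(MPI_Offset));
    MPI_Finalize();
    return 0;
}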

Maybe we have a problem in the way we implement the CDF-2 file format
on 64-bit systems?  Can you re-run this program without the
NF_64BIT_OFFSET flag and report what happens?  The CDF-1 file format
can be bigger than 2 GB, as long as the variables start somewhere in
the first 2 GB.
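
To make it concrete, here is a minimal sketch of the create call I have
in mind (C interface shown with a placeholder file name, not WRF code;
the Fortran interface takes NF_64BIT_OFFSET in the same spot):

#include <stdio.h>
#include <mpi.h>
#include <pnetcdf.h>

int main(int argc, char **argv)
{
    int err, ncid;

    MPI_Init(&argc, &argv);

    /* CDF-2 (64-bit offset) output, the case that looks broken here:
     * err = ncmpi_create(MPI_COMM_WORLD, "test.nc",
     *                    NC_CLOBBER | NC_64BIT_OFFSET, MPI_INFO_NULL, &ncid);
     */

    /* CDF-1 (classic) output: just drop the 64-bit offset flag.  The
     * file can still grow past 2 GB as long as every variable starts
     * within the first 2 GB. */
    err = ncmpi_create(MPI_COMM_WORLD, "test.nc",
                       NC_CLOBBER, MPI_INFO_NULL, &ncid);
    if (err != NC_NOERR) {
        fprintf(stderr, "ncmpi_create: %s\n", ncmpi_strerror(err));
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* ... define dimensions and variables here ... */

    ncmpi_enddef(ncid);
    ncmpi_close(ncid);
    MPI_Finalize();
    return 0;
}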

==rob

-- 
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B



