collective write with 1 dimension being global

Rob Latham robl at mcs.anl.gov
Wed Mar 9 21:16:06 CST 2011


On Sun, Mar 06, 2011 at 01:47:27PM +0300, Mark Cheeseman wrote:
> Hello,
> 
> I have a 4D variable inside a NetCDF file that I wish to distribute over a
> number of MPI tasks.  The variable will be decomposed over the first 3
> dimensions but not the fourth (i.e. the fourth dimension is kept global for
> all MPI tasks). In other words:
> 
>     GLOBAL_FIELD[nx,ny,nz,nv]  ==>  LOCAL_FIELD[nx_local,ny_local,nz_local,nv]
> 
> I am trying to achieve this via an nfmpi_get_vara_double_all call, but the
> data keeps getting corrupted.  I am sure that my offsets and local domain
> sizes are correct.  If I modify my code to read only a single 3D slice (i.e.
> along one point in the fourth dimension), the data are read correctly, so
> the code and input file appear to be fine.
> 
> Can parallel-netcdf handle a local dimension being equal to a global
> dimension?  Or should I be using another call?
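
For reference, a minimal sketch of the start/count setup the question
describes, assuming the Fortran-order shape (nx, ny, nz, nv).  The 0-based
per-rank offsets x_off, y_off, z_off are placeholder names, and ncid/varid
are assumed to come from earlier nfmpi_open / nfmpi_inq_varid calls:

      include 'mpif.h'
      include 'pnetcdf.inc'
      integer ncid, varid, err
      integer(kind=MPI_OFFSET_KIND) start(4), count(4)
      double precision local_field(nx_local, ny_local, nz_local, nv)

      ! pnetcdf's Fortran interface takes 1-based, Fortran-ordered starts
      start(1) = x_off + 1
      start(2) = y_off + 1
      start(3) = z_off + 1
      start(4) = 1                      ! fourth dimension: start at 1 ...
      count(1) = nx_local
      count(2) = ny_local
      count(3) = nz_local
      count(4) = nv                     ! ... and read its full global extent

      err = nfmpi_get_vara_double_all(ncid, varid, start, count,
     &                                 local_field)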

Hi: sorry for the delay.  Several of us are on travel this week. 

I think what you are trying to do is legal.  

Do you have a test case you could share?  Does writing exhibit the
same bug?  Does the C interface exhibit it (either reading or writing)?
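
A write-side test along those lines would reuse the start/count arrays from
the read sketch above; again a sketch with the same assumed names:

      ! hypothetical write-side check using the identical decomposition
      err = nfmpi_put_vara_double_all(ncid, varid, start, count,
     &                                 local_field)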

==rob

-- 
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA

