Write a 3D array in Fortran using PnetCDF

Wei-keng Liao wkliao at ece.northwestern.edu
Fri Jan 25 12:31:07 CST 2013


Hi, MBR,
Glad it helped. There is no such thing as a dumb question!
Thanks to your report, I have added a check to PnetCDF, so this
error is now caught inside PnetCDF before it reaches MPI-IO.
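
For reference, here is a minimal sketch of checking the status returned by a PnetCDF call from Fortran, so such a mistake shows up as a PnetCDF error string instead of an MPI abort (the put call and variable names are taken from the code below):

  ierr = nfmpi_put_vara_double_all(ncid, varid, start, count, data)
  if (ierr .ne. NF_NOERR) then
     print *, 'PnetCDF error: ', trim(nfmpi_strerror(ierr))
  endif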

Wei-keng

On Jan 25, 2013, at 12:08 PM, MBR MBR wrote:

> Hi Wei-keng,
> 
> The problem was indeed the start: xpos, ypos & zpos started at 0, because I re-used what I did in parallel HDF5, where the "start" indices begin at 0, as in C.
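> 
> For the record, a minimal sketch of the corrected index computation with PnetCDF's 1-based Fortran convention (keeping xpos, ypos and zpos as the 0-based block coordinates used in the code below):
> 
>   ! PnetCDF's Fortran API expects 1-based start indices,
>   ! so shift the 0-based block offsets by one
>   start = (/ xpos, ypos, zpos /)*dims + 1
>   count = dims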
> 
> Thanks, and sorry for the dumb question!
> 
> Best,
> 
> MBR
> 
> 
> On Fri, Jan 25, 2013 at 5:07 PM, MBR MBR <mbr.joos at gmail.com> wrote:
> Dear all,
> 
> I'm trying to use PnetCDF to store the variables (3D arrays) of my algorithm. The good news is that it seems much simpler than PHDF5 (which I have already implemented); the bad news is that I get an MPI error at runtime that I cannot understand.
> Please find below both the error I get and the piece of code I wrote. It seems to me that it should be quite simple, but I cannot figure out what the problem is.
> 
> Any help would be greatly appreciated!
> 
> Best,
> 
> MBR
> 
> The error I get is the following:
> [mypc:18412] *** An error occurred in MPI_Type_create_subarray
> [mypc:18412] *** on communicator MPI_COMM_WORLD
> [mypc:18412] *** MPI_ERR_ARG: invalid argument of some other kind
> [mypc:18412] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
> 
> Here is a simplified version of the subroutine I wrote, which compiles without problems:
> 
> subroutine output(data,x,y,z,nout,myrank)
>   use pnetcdf
>   use mpi_var ! contains useful variables (idim, ni, etc. see below)
> 
>   implicit none
>   include 'mpif.h'
> 
>   ! idim,jdim,kdim are the number of elements in each dimension
>   real(dp),dimension(idim,jdim,kdim) :: data
>   real(dp), dimension(idim) :: x
>   real(dp), dimension(jdim) :: y
>   real(dp), dimension(kdim) :: z
>   integer :: nout, myrank
>   character(LEN=80) :: filename  ! local buffer (fixed length assumed)
> 
>   ! PnetCDF variables
>   integer(kind=MPI_OFFSET_KIND) :: nxtot, nytot, nztot
>   integer :: ierr, ncid, xdimid, ydimid, zdimid
>   integer, dimension(3) :: sdimid
>   integer :: varid
>   integer(kind=MPI_OFFSET_KIND), dimension(3) :: dims, start, count
> 
>   ! Dimensions; ni,nj,nk are the number of MPI processes per direction
>   nxtot = nx*ni
>   nytot = ny*nj
>   nztot = nz*nk
> 
>   dims = (/ nx, ny, nz /)
>   
>   ! Get filename; nout is the number of the current output
>   call get_filename(nout, filename)
> 
>   ! Create the file
>   ierr = nfmpi_create(MPI_COMM_WORLD, filename, NF_CLOBBER, MPI_INFO_NULL, ncid)
> 
>   ! Define dimensions
>   ierr = nfmpi_def_dim(ncid, "x", nxtot, xdimid)
>   ierr = nfmpi_def_dim(ncid, "y", nytot, ydimid)
>   ierr = nfmpi_def_dim(ncid, "z", nztot, zdimid)
>   sdimid = (/ xdimid, ydimid, zdimid /)
> 
>   ! Create variable
>   ierr = nfmpi_def_var(ncid, "var", NF_DOUBLE, 3, sdimid, varid)
> 
>   ! End of definitions
>   ierr = nfmpi_enddef(ncid)
> 
>   ! {x,y,z}pos correspond to the position of the MPI process in a 3D cartesian grid
>   start = (/ xpos, ypos, zpos /)*dims
>   count = dims
> 
>   ! Write data
>   ierr = nfmpi_put_vara_double_all(ncid, varid, start, count, data)
> 
>   ! Close file
>   ierr = nfmpi_close(ncid)
> 
> end subroutine output
> 


