[MPICH] ROMIO MPI_FILE_WRITE_ALL
Stephen Scott
stephen.scott at imperial.ac.uk
Tue Jun 28 15:16:31 CDT 2005
Dear list again,
I may have found the problem. If I change USE mpi to INCLUDE
'mpif.h', the program compiles and runs without errors. Does
anyone have an explanation for this?
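
My best guess so far: USE mpi gives the compiler explicit interfaces
for the MPI routines, and the MPICH-1.2.6 Fortran 90 bindings appear
to provide specific subroutines only for a limited set of buffer ranks
and types, so a rank-4 complex buffer finds no matching specific.
INCLUDE 'mpif.h' declares only constants, with no interfaces, so the
compiler cannot check the call at all. If that is right, a workaround
that keeps USE mpi might be to pass the first element of the array, so
the actual argument matches a scalar specific. A minimal, untested
sketch (it assumes fluid_uk is stored contiguously):

  ! Pass the first element instead of the whole rank-4 array, so the
  ! actual argument matches a scalar specific in the f90 bindings.
  CALL MPI_FILE_WRITE_ALL(PANDORA_RESTART_FILE, fluid_uk(1,1,1,1), &
                          local_size, MPI_DOUBLE_COMPLEX, status, ierr)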
Cheers,
Steve
On 28 Jun 2005, at 19:52, Stephen Scott wrote:
> Dear list,
>
> I have a question/problem related to the use of MPI_FILE_WRITE_ALL
> from the ROMIO package. I want to write a distributed 4D Fortran
> array to file using the ROMIO library. The array is divided along the
> third dimension. A section of the code is listed below.
>
> The purpose of the subroutine is to write the 4D fluid_uk(:,:,:,:)
> distributed array to file on node 0. However, I get a compile-time
> error for the call to MPI_FILE_WRITE_ALL: 'There is no matching
> specific subroutine for this generic subroutine call.
> [MPI_FILE_WRITE_ALL]'
>
> I presumed that MPI_FILE_WRITE_ALL would accept an array of any
> dimension, but it appears I am wrong. I would be most grateful for
> any feedback or suggestions from the list!
>
> Thanks in advance!
>
> Steve
>
> mpich-1.2.6
> intel compilers 8.0
> Red Hat Enterprise Linux WS release 3 (Taroon Update 1)
>
>
> SUBROUTINE fluid_restart_write(time,ierr)
>
> USE precision
> USE fluid_arrays, ONLY : fluid_uk
> USE domain_params, ONLY : ni,nj,nk
> USE mpi
>
> IMPLICIT NONE
>
> INTEGER,INTENT(INOUT) :: ierr
> INTEGER,INTENT(IN) :: time
> INTEGER :: myrank
> INTEGER :: nprocs
> INTEGER(KIND=MPI_OFFSET_KIND) :: disp = 0  ! displacement for MPI_FILE_SET_VIEW
> CHARACTER(len=100) :: tstep
> CHARACTER(len=10) :: execute_date
> CHARACTER(len=10) :: execute_time
> INTEGER,PARAMETER :: MASTER = 0
>
> INTEGER :: gsizes(4)
> INTEGER :: distribs(4)
> INTEGER :: dargs(4)
> INTEGER :: psizes(4)
> INTEGER :: local_size
> INTEGER :: PANDORA_RESTART_TYPE
> INTEGER :: PANDORA_RESTART_FILE
> INTEGER :: PANDORA_COMM
> INTEGER :: status(MPI_STATUS_SIZE)
>
> CALL MPI_COMM_RANK(MPI_COMM_WORLD,myrank,ierr)
> CALL MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ierr)
>
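> ! Describe the decomposition: block-distribute only the third
> ! dimension across nprocs processes (psizes is 1 elsewhere).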
> gsizes = (/ni,nj,nk,3/)
> distribs = MPI_DISTRIBUTE_BLOCK
> dargs = MPI_DISTRIBUTE_DFLT_DARG
> psizes = (/1,1,nprocs,1/)
>
> CALL MPI_TYPE_CREATE_DARRAY(nprocs,myrank,4,gsizes,distribs,dargs,psizes, &
>      MPI_ORDER_FORTRAN,MPI_DOUBLE_COMPLEX,PANDORA_RESTART_TYPE,ierr)
>
> CALL MPI_TYPE_COMMIT(PANDORA_RESTART_TYPE,ierr)
>
> ! fname_frestart defined earlier in module
>
> CALL MPI_FILE_OPEN(MPI_COMM_WORLD,fname_frestart, &
>      MPI_MODE_WRONLY+MPI_MODE_CREATE,MPI_INFO_NULL,PANDORA_RESTART_FILE,ierr)
>
> CALL MPI_FILE_SET_VIEW(PANDORA_RESTART_FILE,disp,MPI_DOUBLE_COMPLEX, &
>      PANDORA_RESTART_TYPE,"native",MPI_INFO_NULL,ierr)
>
> ! fluid_uk(ni,nj,nk/nprocs,3)
> local_size = ni*nj*(nk/nprocs)*3
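> ! Note: this count assumes nk is exactly divisible by nprocs.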
>
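> ! This is the call rejected at compile time under USE mpi.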
> CALL MPI_FILE_WRITE_ALL(PANDORA_RESTART_FILE,fluid_uk,local_size, &
>      MPI_DOUBLE_COMPLEX,status,ierr)
>
> CALL MPI_FILE_CLOSE(PANDORA_RESTART_FILE,ierr)
>
> END SUBROUTINE fluid_restart_write