<html>
<body>
At 03:16 PM 6/28/2005, Stephen Scott wrote:<br>
<blockquote type=cite class=cite cite="">Dear list again, <br><br>
I may have found the problem. If I change 'USE mpi' to INCLUDE
'mpif.h', the program appears to compile and run without errors.
Does anyone have an explanation for this? </blockquote><br>
The problem is that it is very difficult to write Fortran 90 modules that
allow an arbitrary number of dimensions and types; in MPICH-1, we handle
arrays of up to 3 dimensions for the standard, predefined types. Even
so, this produces a Fortran 90 module file, independent of the code that
implements MPI, that is already larger than the entire MPI library.
This is a limitation in the implementation of the Fortran 90 module, but
it is one that is very difficult to overcome, given the design of Fortran
90. <br><br>
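If you want to keep 'USE mpi', a workaround that is often used (sketched
below, not tested against the MPICH-1 module, and relying on the usual
Fortran argument-passing convention rather than on anything the standard
guarantees) is to pass the first element of the array as the buffer; the
call then matches a specific interface in the module, while the count and
datatype still describe the whole local block: <br><br>
<font face="Courier, Courier">! Sketch of a possible workaround with USE mpi: pass the first element <br>
! so the generic MPI_FILE_WRITE_ALL call has a matching specific; <br>
! local_size and MPI_DOUBLE_COMPLEX still describe the full local block. <br>
CALL MPI_FILE_WRITE_ALL(PANDORA_RESTART_FILE,fluid_uk(1,1,1,1), & <br>
local_size,MPI_DOUBLE_COMPLEX,status,ierr) </font><br><br>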
Bill<br><br>
<br>
<blockquote type=cite class=cite cite="">Cheers, <br><br>
Steve <br><br>
On 28 Jun 2005, at 19:52, Stephen Scott wrote: <br><br>
<blockquote type=cite class=cite cite="">Dear list, <br><br>
I have a question/problem related to the use of MPI_FILE_WRITE_ALL from
the ROMIO package. I want to write a distributed 4D Fortran array
to file using the ROMIO library. The array is divided along the
third dimension. A section of the code is listed below.
<br><br>
The purpose of the subroutine is to write the 4D fluid_uk(:,:,:,:)
distributed array to file on node 0. However, I get a compile-time
error for the call to MPI_FILE_WRITE_ALL - 'There is no matching
specific subroutine for this generic subroutine call.
[MPI_FILE_WRITE_ALL]' <br><br>
I presumed that MPI_FILE_WRITE_ALL would accept an array of any dimension,
but it appears I am wrong. I would be most grateful for any
feedback or suggestions from the list! <br><br>
Thanks in advance! <br><br>
Steve <br><br>
mpich-1.2.6 <br>
intel compilers 8.0 <br>
Red Hat Enterprise Linux WS release 3 (Taroon Update 1) <br><br>
<br>
<font face="Courier, Courier">SUBROUTINE fluid_restart_write(time,ierr)
<br><br>
USE precision <br>
USE fluid_arrays, ONLY : fluid_uk <br>
USE domain_params, ONLY : ni,nj,nk <br>
USE mpi <br><br>
IMPLICIT NONE <br><br>
INTEGER,INTENT(INOUT)
:: ierr <br>
INTEGER,INTENT(IN)
:: time <br>
INTEGER
:: myrank <br>
INTEGER
:: nprocs <br>
INTEGER*8
:: disp=0 <br>
CHARACTER(len=100)
:: tstep <br>
CHARACTER(len=10)
:: execute_date <br>
CHARACTER(len=10)
:: execute_time <br>
INTEGER,PARAMETER
:: MASTER = 0 <br><br>
INTEGER
:: gsizes(4) <br>
INTEGER
:: distribs(4) <br>
INTEGER
:: dargs(4) <br>
INTEGER
:: psizes(4) <br>
INTEGER
:: local_size <br>
INTEGER
:: PANDORA_RESTART_TYPE <br>
INTEGER
:: PANDORA_RESTART_FILE <br>
INTEGER
:: PANDORA_COMM <br>
INTEGER
:: status(MPI_STATUS_SIZE) <br><br>
CALL MPI_COMM_RANK(MPI_COMM_WORLD,myrank,ierr) <br>
CALL MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ierr) <br><br>
gsizes = (/ni,nj,nk,3/) <br>
distribs = MPI_DISTRIBUTE_BLOCK <br>
dargs = MPI_DISTRIBUTE_DFLT_DARG <br>
psizes = (/1,1,nprocs,1/) <br><br>
CALL
MPI_TYPE_CREATE_DARRAY(nprocs,myrank,4,gsizes,distribs,dargs,psizes,
& <br>
MPI_ORDER_FORTRAN,MPI_DOUBLE_COMPLEX,PANDORA_RESTART_TYPE,ierr) <br><br>
CALL MPI_TYPE_COMMIT(PANDORA_RESTART_TYPE,ierr) <br><br>
! fname_frestart defined earlier in module <br><br>
CALL
MPI_FILE_OPEN(MPI_COMM_WORLD,fname_frestart,MPI_MODE_WRONLY+MPI_MODE_CREATE,
& <br>
MPI_INFO_NULL,PANDORA_RESTART_FILE,ierr) <br><br>
CALL
MPI_FILE_SET_VIEW(PANDORA_RESTART_FILE,disp,MPI_DOUBLE_COMPLEX,PANDORA_RESTART_TYPE,
& <br>
"native",MPI_INFO_NULL,ierr) <br><br>
! fluid_uk(ni,nj,nk/nprocs,3) <br>
local_size = ni*nj*(nk/nprocs)*3 <br>
<br>
CALL
MPI_FILE_WRITE_ALL(PANDORA_RESTART_FILE,fluid_uk,local_size,MPI_DOUBLE_COMPLEX,status,ierr)
<br><br>
CALL MPI_FILE_CLOSE(PANDORA_RESTART_FILE,ierr) <br><br>
END SUBROUTINE </font></blockquote></blockquote></x-html>
<p>
William Gropp<br>
<a href="http://www.mcs.anl.gov/~gropp" eudora="autourl">
http://www.mcs.anl.gov/~gropp</a></body>
</html>