<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META http-equiv=Content-Type content="text/html; charset=us-ascii">
<META content="MSHTML 6.00.2900.2668" name=GENERATOR></HEAD>
<BODY>
<DIV dir=ltr align=left><SPAN class=316312120-28062005><FONT face=Arial
color=#0000ff size=2>MPICH-1.2.6 may not have the Fortran 90 module mpi.mod.
MPICH2 does.</FONT></SPAN></DIV>
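A likely explanation for the behaviour in the thread below, sketched under an assumption: an mpi module (where present) supplies explicit interfaces that the compiler checks, while mpif.h supplies only constants with no interface checking, so a rank-4 choice buffer compiles unchecked. The subroutine name and arguments here are illustrative, not taken from Steve's code:

```fortran
! Illustrative fallback: use the F77-style include file instead of the
! F90 module when mpi.mod is missing or rejects higher-rank buffers.
SUBROUTINE write_4d(fh, buf, count, ierr)
  IMPLICIT NONE
  INCLUDE 'mpif.h'                 ! constants only, no interface checking
  INTEGER, INTENT(IN)  :: fh, count
  DOUBLE COMPLEX, INTENT(IN) :: buf(:,:,:,:)   ! rank-4 choice buffer
  INTEGER, INTENT(OUT) :: ierr
  INTEGER :: status(MPI_STATUS_SIZE)
  ! With no explicit interface in scope, the call compiles for any rank.
  CALL MPI_FILE_WRITE_ALL(fh, buf, count, MPI_DOUBLE_COMPLEX, status, ierr)
END SUBROUTINE write_4d
```

The trade-off is that the include file also disables all argument checking for every other MPI call in the scope.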
<DIV dir=ltr align=left><SPAN class=316312120-28062005><FONT face=Arial
color=#0000ff size=2></FONT></SPAN> </DIV>
<DIV dir=ltr align=left><SPAN class=316312120-28062005><FONT face=Arial
color=#0000ff size=2>Rajeev</FONT></SPAN></DIV>
<DIV dir=ltr align=left><SPAN class=316312120-28062005><FONT face=Arial
color=#0000ff size=2></FONT></SPAN> </DIV><BR>
<BLOCKQUOTE dir=ltr
style="PADDING-LEFT: 5px; MARGIN-LEFT: 5px; BORDER-LEFT: #0000ff 2px solid; MARGIN-RIGHT: 0px">
<DIV class=OutlookMessageHeader lang=en-us dir=ltr align=left>
<HR tabIndex=-1>
<FONT face=Tahoma size=2><B>From:</B> owner-mpich-discuss@mcs.anl.gov
[mailto:owner-mpich-discuss@mcs.anl.gov] <B>On Behalf Of </B>Stephen
Scott<BR><B>Sent:</B> Tuesday, June 28, 2005 3:17 PM<BR><B>To:</B> Stephen
Scott<BR><B>Cc:</B> mpich-discuss@mcs.anl.gov<BR><B>Subject:</B> Re: [MPICH]
ROMIO MPI_FILE_WRITE ALL<BR></FONT><BR></DIV>
<DIV></DIV>Dear list again,<BR><BR>I may have found the problem. If I change
'USE mpi' to INCLUDE 'mpif.h', the program appears to compile and run without
errors. Does anyone have an explanation for
this?<BR><BR>Cheers,<BR><BR>Steve<BR><BR>On 28 Jun 2005, at 19:52, Stephen
Scott wrote:<BR><BR>
<BLOCKQUOTE>Dear list,<BR><BR>I have a question/problem related to the use
of MPI_FILE_WRITE_ALL from the ROMIO package. I want to write a distributed
4D Fortran array to file using the ROMIO library. The array is divided along
the third dimension. A section of the code is listed below. <BR><BR>The
purpose of the subroutine is to write the 4D fluid_uk(:,:,:,:) distributed
array to file on node 0. However, I get a compile-time error for the call to
MPI_FILE_WRITE_ALL - 'There is no matching specific subroutine for this
generic subroutine call. [MPI_FILE_WRITE_ALL]'<BR><BR>I presumed that
MPI_FILE_WRITE_ALL would accept an array of any dimension, but it appears I
am wrong. I would be most grateful for any feedback or suggestions from the
list!<BR><BR>Thanks in advance!<BR><BR>Steve<BR><BR>mpich-1.2.6<BR>Intel
compilers 8.0<BR>Red Hat Enterprise Linux WS release 3 (Taroon Update
1)<BR><BR><BR><PRE>
SUBROUTINE fluid_restart_write(time,ierr)

  USE precision
  USE fluid_arrays,  ONLY : fluid_uk
  USE domain_params, ONLY : ni,nj,nk
  USE mpi

  IMPLICIT NONE

  INTEGER,INTENT(INOUT) :: ierr
  INTEGER,INTENT(IN)    :: time
  INTEGER :: myrank
  INTEGER :: nprocs
  INTEGER*8 :: disp=0
  CHARACTER(len=100) :: tstep
  CHARACTER(len=10)  :: execute_date
  CHARACTER(len=10)  :: execute_time
  INTEGER,PARAMETER :: MASTER = 0

  INTEGER :: gsizes(4)
  INTEGER :: distribs(4)
  INTEGER :: dargs(4)
  INTEGER :: psizes(4)
  INTEGER :: local_size
  INTEGER :: PANDORA_RESTART_TYPE
  INTEGER :: PANDORA_RESTART_FILE
  INTEGER :: PANDORA_COMM
  INTEGER :: status(MPI_STATUS_SIZE)

  CALL MPI_COMM_RANK(MPI_COMM_WORLD,myrank,ierr)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD,nprocs,ierr)

  gsizes   = (/ni,nj,nk,3/)
  distribs = MPI_DISTRIBUTE_BLOCK
  dargs    = MPI_DISTRIBUTE_DFLT_DARG
  psizes   = (/1,1,nprocs,1/)

  CALL MPI_TYPE_CREATE_DARRAY(nprocs,myrank,4,gsizes,distribs,dargs,psizes, &amp;
       MPI_ORDER_FORTRAN,MPI_DOUBLE_COMPLEX,PANDORA_RESTART_TYPE,ierr)

  CALL MPI_TYPE_COMMIT(PANDORA_RESTART_TYPE,ierr)

  ! fname_frestart defined earlier in module

  CALL MPI_FILE_OPEN(MPI_COMM_WORLD,fname_frestart,MPI_MODE_WRONLY+MPI_MODE_CREATE, &amp;
       MPI_INFO_NULL,PANDORA_RESTART_FILE,ierr)

  CALL MPI_FILE_SET_VIEW(PANDORA_RESTART_FILE,disp,MPI_DOUBLE_COMPLEX,PANDORA_RESTART_TYPE, &amp;
       "native",MPI_INFO_NULL,ierr)

  ! fluid_uk(ni,nj,nk/nprocs,3)
  local_size = ni*nj*(nk/nprocs)*3

  CALL MPI_FILE_WRITE_ALL(PANDORA_RESTART_FILE,fluid_uk,local_size,MPI_DOUBLE_COMPLEX,status,ierr)

  CALL MPI_FILE_CLOSE(PANDORA_RESTART_FILE,ierr)

END SUBROUTINE
</PRE></BLOCKQUOTE></BLOCKQUOTE></BODY></HTML>