Hi,

I found the problem here -- the variable I was passing as the 'status' parameter to mpi_recv was an integer, and not an array of integers of length MPI_STATUS_SIZE. Thanks to anyone who thought about this!
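In case it's useful to anyone else, the fix is just a one-line declaration change. A sketch is below ('tag', 'mpierr', and the use of MPI_DOUBLE_COMPLEX for the complex*16 component are my shorthand here, not the exact code from main3.f90):

    ! Wrong: a scalar. MPI_RECV writes MPI_STATUS_SIZE integers of status
    ! information into this argument, which clobbers whatever happens to sit
    ! next to it in memory -- presumably how 'iam' was getting overwritten.
    !   integer :: status
    !
    ! Right: an array of length MPI_STATUS_SIZE.
    integer :: status(MPI_STATUS_SIZE)

    call MPI_RECV(s%H, 1, MPI_DOUBLE_COMPLEX, 1, tag, MPI_COMM_WORLD, &
                  status, mpierr)
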
Chris

On Wed, Nov 19, 2008 at 1:14 PM, Christopher Gilbreth <cngilbreth@gmail.com> wrote:
> Hi,
>
> I'm having a strange problem with derived types and mpich2 using certain
> compiler configurations. The attached Fortran 90 sample program (main3.f90)
> defines a type
>
>     type sample
>        sequence
>        complex*16 :: H
>        complex*16 :: rho(MAX_R_VALS)
>     end type sample
>
>     type(sample) :: s
>
> and then just tries to send s%H (not the entire struct, just the one
> component) from process 1 to process 0. This works, but it seems that the
> variable which I use to store the rank of the process (I call this 'iam'),
>
>     integer :: iam
>     ! ...
>     call MPI_COMM_RANK(MPI_COMM_WORLD, iam, mpierr)
>
> is modified during mpi_recv in process 0, depending on the compiler that I
> use and the options I pass. On my machine, with mpich2-1.0.8, iam is 0
> before mpi_recv and 1 afterward. I actually tried this with openmpi as well
> on a cluster and found a value of 16 afterward. *However*, if I comment out
> the line
>
>     complex*16 :: rho(MAX_R_VALS)
>
> in type sample above, then the problem goes away on both machines.
>
> The compilers I've been trying are:
>
> - GNU Fortran (Ubuntu 4.3.2-1ubuntu11) 4.3.2
> - ifort (IFORT) 10.1 20080801
>
> gfortran gives the strange behavior with any compiler flags I've tried,
> including all -O flags.
>
> ifort gives the strange behavior with -O2 and above, but not -O0 or -O1.
>
> To compile and run:
>
>     mpif90 main3.f90 -o test
>     mpiexec -l -n 2 ./test
>
> Does anyone have any insight into this? Is there an error in my usage of
> the MPI calls?
>
> Thanks,
> Chris
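For anyone searching the archives later, here is a minimal self-contained sketch along the lines of the program described above, with the corrected status declaration. The value of MAX_R_VALS, the message tag, and the choice of MPI_DOUBLE_COMPLEX are assumptions for illustration -- this is not the actual main3.f90 attachment:

    program send_component
      use mpi                                 ! or: include 'mpif.h'
      implicit none

      integer, parameter :: MAX_R_VALS = 100  ! assumed value, for illustration

      type sample
         sequence
         complex*16 :: H
         complex*16 :: rho(MAX_R_VALS)
      end type sample

      type(sample) :: s
      integer :: iam, mpierr
      integer :: status(MPI_STATUS_SIZE)      ! the fix: an array, not a scalar

      call MPI_INIT(mpierr)
      call MPI_COMM_RANK(MPI_COMM_WORLD, iam, mpierr)

      if (iam == 1) then
         ! Send only the H component of the derived type.
         s%H = (1.0d0, 2.0d0)
         call MPI_SEND(s%H, 1, MPI_DOUBLE_COMPLEX, 0, 0, MPI_COMM_WORLD, mpierr)
      else if (iam == 0) then
         call MPI_RECV(s%H, 1, MPI_DOUBLE_COMPLEX, 1, 0, MPI_COMM_WORLD, &
                       status, mpierr)
         print *, 'rank', iam, 'received', s%H   ! iam is no longer clobbered
      end if

      call MPI_FINALIZE(mpierr)
    end program send_component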