[MPICH] mpiallgatherv in fortran

Rajeev Thakur thakur at mcs.anl.gov
Thu Jul 13 16:47:15 CDT 2006


The displs array should be 0, 2, 4; then you will get the correct answer. A
displacement is the offset from the starting address of the receive buffer,
in units of the extent of the receive datatype, as shown by the
MPI_Recv(recvbuf + displs[i]*extent(recvtype), ...) in the definition of
MPI_Gatherv.
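
Concretely, that placement rule can be sketched outside MPI. The following
is an editorial Python illustration (not MPICH code; `allgatherv_place` is a
hypothetical helper) applying recvbuf + displs[i]*extent placement to the
three-rank example from the message below:

```python
# Sketch of how MPI_Allgatherv places each rank's contribution into the
# receive buffer, per the rule from the MPI_Gatherv definition:
#   MPI_Recv(recvbuf + displs[i] * extent(recvtype), recvcounts[i], ...)
# Displacements are zero-based and counted in units of the receive datatype.

def allgatherv_place(contributions, recvcounts, displs):
    """Return the receive buffer every rank ends up with."""
    size = max(d + c for d, c in zip(displs, recvcounts))
    recvbuf = [None] * size          # None marks slots never written
    for rank, data in enumerate(contributions):
        for j in range(recvcounts[rank]):
            recvbuf[displs[rank] + j] = data[j]
    return recvbuf

# Each of 3 ranks contributes (rank+5, rank+6), as in the test program below.
contribs = [(r + 5, r + 6) for r in range(3)]

print(allgatherv_place(contribs, [2, 2, 2], [0, 2, 4]))
# -> [5, 6, 6, 7, 7, 8]   zero-based displs: contiguous, in process order

print(allgatherv_place(contribs, [2, 2, 2], [1, 3, 5]))
# -> [None, 5, 6, 6, 7, 7, 8]   one-based displs: everything shifts by one
#    slot and the last value spills past the intended 6-element buffer
```

The second call shows why displs = 1, 3, 5 cannot work: element 0 of the
receive buffer is never written and rank 2's last value lands one element
past the declared array.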
 
Rajeev
 


  _____  

From: owner-mpich-discuss at mcs.anl.gov
[mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Andrew Hakman
Sent: Thursday, July 13, 2006 3:58 PM
To: mpich-discuss at mcs.anl.gov
Subject: [MPICH] mpiallgatherv in fortran


Hi

I'm using MPICH 1.2.5.2 and the Portland Group Fortran compiler.

I'm having some issues with MPI_ALLGATHERV.

The first issue is that I can't seem to find any documentation about the
details of the displacement array: in Fortran, should the first displacement
be 1 or 0? (I suspect it should be 1 from testing, but I can't get the
results I'm expecting from either 1 or 0; see below.)

The second, bigger issue is that I expect the parts gathered from the
various processes to be assembled in process order, and this does not seem
to happen. Probably the best way to illustrate what I'm getting is a small
test program I wrote, its output, and the output I was expecting.

Here's the test program and header file:
//////mpicoms.h///////
include 'mpif.h'
integer m_rank, m_size,status(MPI_STATUS_SIZE),mpi_err
common/MPIBLK/m_rank,m_size,mpi_err

/////mpitestallgatherv.for/////
      PROGRAM mpitest
      implicit none
      include 'mpicoms.h'

      integer mydata(2,1), alldata(2,3)
      integer recvcnts(3)
      integer displs(3)
      recvcnts(1)=2
      recvcnts(2)=2
      recvcnts(3)=2
      displs(1)=1
      displs(2)=3
      displs(3)=5
      CALL MPI_INIT(mpi_err)
      call MPI_COMM_SIZE(MPI_COMM_WORLD, m_size, mpi_err)
      call MPI_COMM_RANK(MPI_COMM_WORLD, m_rank, mpi_err)
      mydata(1,1)=m_rank+5
      mydata(2,1)=m_rank+6
      
      call MPI_ALLGATHERV(mydata, 2, MPI_INTEGER, alldata, recvcnts, displs,
     *MPI_INTEGER, MPI_COMM_WORLD, mpi_err)
      call MPI_FINALIZE(mpi_err)
      write(*,*) "this is node ", m_rank
      write(*,*) "mydata ="
      write(*,*) mydata(1,1)
      write(*,*) mydata(2,1)
      write(*,*) "alldata="
      write(*,*) mydata(1,1),' ',mydata(1,2),' ',mydata(1,3)
      write(*,*) mydata(2,1),' ',mydata(2,2),' ',mydata(2,3)
      stop
      end

The output I receive from running this on 3 processors (for which this
little test is rather hardcoded) is the following (with some rearrangement
and adjusted spacing for readability):

 this is node 0
 mydata =
            5
            6
 alldata=
            5              5              6
            6              6              7

 this is node 1
 mydata =
            6
            7
 alldata=
            6              5              6
            7              6              7

 this is node 2
 mydata =
            7
            8
 alldata=
            7              5              6
            8              6              7


What I'm expecting is that alldata on all 3 processes should look like this:
 alldata=
            5              6              7
            6              7              8

i.e., be in process order (keeping in mind, of course, that arrays are
column-major in Fortran) and not duplicated.
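
That expected layout can be sketched in Python (an editorial illustration of
Fortran's column-major storage, not MPICH code; `to_matrix` is a hypothetical
helper, and it assumes the zero-based displacements 0, 2, 4 so that each
rank's pair fills one column of alldata(2,3)):

```python
# Fortran stores alldata(2,3) column-major: linear offsets 0..5 map to
# (1,1),(2,1),(1,2),(2,2),(1,3),(2,3). With recvcounts = (2,2,2) and
# displs = (0,2,4), rank i's two values occupy column i+1 exactly.

def to_matrix(flat, nrows, ncols):
    """Reshape a flat column-major buffer into a list of rows."""
    return [[flat[col * nrows + row] for col in range(ncols)]
            for row in range(nrows)]

# Linear receive buffer after Allgatherv with ranks sending (5,6),(6,7),(7,8):
flat = [5, 6, 6, 7, 7, 8]
for row in to_matrix(flat, 2, 3):
    print(row)
# prints [5, 6, 7] then [6, 7, 8] -- the process-order result described above
```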

Am I doing something wrong? Is this not the output I should expect from
Allgatherv?

I also know that Allgatherv isn't necessary in this example, since all of
the processes send the same number of entries; but that isn't the case in
the real program I'm working on, so the vector version of the call is
needed.

Thanks
Andrew Hakman

