In other words, can I define a derived data type for two array slices with different data types and different slice sizes?

For example:

 integer A(10)
 double precision B(100)

Slice array A into 10 parts: A(1), A(2), A(3), A(4), ..., A(10).
Slice array B into 10 parts: B(1:10), B(11:20), B(21:30), B(31:40), ..., B(91:100).

Can I define a derived data type to describe each part?

 C(1): A(1), B(1:10)
 C(2): A(2), B(11:20)
 ...

Thank you for any comments.
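P.S. To make the question concrete, here is the kind of thing I am imagining for one part C(I), using MPI_GET_ADDRESS and MPI_TYPE_CREATE_STRUCT with absolute displacements and MPI_BOTTOM. This is only a sketch (untested), and ID_DEST and the message tag are placeholders:

[CODE]
C Sketch: one struct type for part I, pairing the integer A(I) with
C the ten doubles B(10*I-9:10*I). Assumes INCLUDE 'mpif.h'.
      INTEGER IBLK(2),ITYP(2),CTYPE,IERR
      INTEGER(KIND=MPI_ADDRESS_KIND) IDSP(2)

      IBLK(1)=1
      ITYP(1)=MPI_INTEGER
      CALL MPI_GET_ADDRESS(A(I),IDSP(1),IERR)

      IBLK(2)=10
      ITYP(2)=MPI_DOUBLE_PRECISION
      CALL MPI_GET_ADDRESS(B(10*(I-1)+1),IDSP(2),IERR)

      CALL MPI_TYPE_CREATE_STRUCT(2,IBLK,IDSP,ITYP,CTYPE,IERR)
      CALL MPI_TYPE_COMMIT(CTYPE,IERR)

C The displacements are absolute addresses, so the buffer is MPI_BOTTOM
      CALL MPI_SEND(MPI_BOTTOM,1,CTYPE,ID_DEST,99,MPI_COMM_WORLD,IERR)
[/CODE]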
2010/5/7 Jilong Yin <yinjilong@gmail.com>:
Hello,

I am trying to define an MPI derived data type to simplify my program, but I cannot get it working properly.

First, the basics of the test program below. There are many layers, each consisting of many nodes. Each node has a position (x,y,z) (real data) and a boundary condition type in each of the x, y, and z directions (integer data).

I want to send the node data of the i-th layer in a so-called buffer space to the j-th layer in a so-called computation space. I first define two MPI derived data types describing a layer of node data, and then use the new types to exchange data between the two spaces.
Below is my Fortran code. It compiles and runs, but the result is wrong.

Can anyone help me out?

Thank you.

JILONG YIN

2010-05-07

[CODE]
C This program tests MPI derived data types for a parallel finite
C element program.
C Node data in the buffer space will be sent to the computation space.
      PROGRAM TEST

      IMPLICIT NONE

      INCLUDE 'mpif.h'

C Number of nodes on a cross-section
      INTEGER,PARAMETER::N_NOD_CROSEC=10
C Number of nodes along the longitudinal direction
      INTEGER,PARAMETER::N_NOD_STREAM=13
C Total number of nodes
      INTEGER,PARAMETER::N_NOD=N_NOD_CROSEC*N_NOD_STREAM

C Layer data (one integer per layer) in the buffer space
      INTEGER NOD_LAY_DATA_BF(N_NOD_STREAM)
C Node positions in the buffer space
      DOUBLE PRECISION POSITION_BF(3,N_NOD)
C Node boundary condition types in the buffer space
      INTEGER IFLAG_BD_COND_BF(3,N_NOD)

C Layer data (one integer per layer) in the computation space
      INTEGER NOD_LAY_DATA(N_NOD_STREAM)
C Node positions in the computation space
      DOUBLE PRECISION POSITION(3,N_NOD)
C Node boundary condition types in the computation space
      INTEGER IFLAG_BD_COND(3,N_NOD)

C Variables for defining the MPI derived data types
      INTEGER IBLOCK(99),IDISP(99),ITYPE(99)
      INTEGER NBLOCK

C Node layer data type handles (buffer space and computation space)
      INTEGER NOD_LAY_DATA_BF_TYPE
      INTEGER NOD_LAY_DATA_TYPE

C MPI status and error return
      INTEGER::ISTATUS(MPI_STATUS_SIZE),IERR
C My rank
      INTEGER MYID

C Source and destination rank IDs
      INTEGER ID_SRC,ID_DEST
C Layer numbers for sending and receiving
      INTEGER ISNS,IRNS

      INTEGER I,J,NOD1,NOD2

C Initialize the MPI environment
      CALL MPI_INIT(IERR)

C Get the rank ID
      CALL MPI_COMM_RANK(MPI_COMM_WORLD,MYID,IERR)

C----------------------------------------------------------
C Define the node layer derived data type for the buffer space

C Layer number
      NBLOCK=0
      NBLOCK=NBLOCK+1
      IBLOCK(NBLOCK)=1
      ITYPE(NBLOCK)=MPI_INTEGER
      CALL MPI_ADDRESS(NOD_LAY_DATA_BF,IDISP(NBLOCK),IERR)

C Node positions
      NBLOCK=NBLOCK+1
      IBLOCK(NBLOCK)=3*N_NOD_CROSEC
      ITYPE(NBLOCK)=MPI_DOUBLE_PRECISION
      CALL MPI_ADDRESS(POSITION_BF,IDISP(NBLOCK),IERR)

C Node boundary condition types
      NBLOCK=NBLOCK+1
      IBLOCK(NBLOCK)=3*N_NOD_CROSEC
      ITYPE(NBLOCK)=MPI_INTEGER
      CALL MPI_ADDRESS(IFLAG_BD_COND_BF,IDISP(NBLOCK),IERR)

C Convert absolute addresses to displacements relative to the first block
      DO I=NBLOCK,1,-1
        IDISP(I)=IDISP(I)-IDISP(1)
      END DO

C Build and commit the new derived data type
      CALL MPI_TYPE_STRUCT(NBLOCK,IBLOCK,IDISP,ITYPE,
     & NOD_LAY_DATA_BF_TYPE,IERR)
      CALL MPI_TYPE_COMMIT(NOD_LAY_DATA_BF_TYPE,IERR)
C---------------------------------------------------------
C Define the node layer derived data type for the computation space

C Layer number
      NBLOCK=0
      NBLOCK=NBLOCK+1
      IBLOCK(NBLOCK)=1
      ITYPE(NBLOCK)=MPI_INTEGER
      CALL MPI_ADDRESS(NOD_LAY_DATA,IDISP(NBLOCK),IERR)

C Node positions
      NBLOCK=NBLOCK+1
      IBLOCK(NBLOCK)=3*N_NOD_CROSEC
      ITYPE(NBLOCK)=MPI_DOUBLE_PRECISION
      CALL MPI_ADDRESS(POSITION,IDISP(NBLOCK),IERR)

C Node boundary condition types
      NBLOCK=NBLOCK+1
      IBLOCK(NBLOCK)=3*N_NOD_CROSEC
      ITYPE(NBLOCK)=MPI_INTEGER
      CALL MPI_ADDRESS(IFLAG_BD_COND(1,1),IDISP(NBLOCK),IERR)

C Convert absolute addresses to displacements relative to the first block
      DO I=NBLOCK,1,-1
        IDISP(I)=IDISP(I)-IDISP(1)
      END DO

C Build and commit the new derived data type
      CALL MPI_TYPE_STRUCT(NBLOCK,IBLOCK,IDISP,ITYPE,
     & NOD_LAY_DATA_TYPE,IERR)
      CALL MPI_TYPE_COMMIT(NOD_LAY_DATA_TYPE,IERR)
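
C Note: each committed type is a fixed pattern of byte offsets
C measured from the buffer address passed to MPI_SEND/MPI_RECV:
C offset 0 for the layer number, then the distance from
C NOD_LAY_DATA_BF(1) (resp. NOD_LAY_DATA(1)) to the start of the
C position and flag arrays.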
C---------------------------------------------------------
C Initialize node data in the computation space
      NOD_LAY_DATA(:)=0
      POSITION(:,:)=0.0D0
      IFLAG_BD_COND(:,:)=-1

C Prepare test data in the buffer space
      DO I=1,N_NOD_STREAM
        NOD_LAY_DATA_BF(I)=I
      END DO

      DO I=1,N_NOD
        DO J=1,3
          POSITION_BF(J,I)=J+I*10.0D0
          IFLAG_BD_COND_BF(J,I)=J+I*10+90000000
        END DO
      END DO

C Send the ISNS-th layer of the buffer space to the IRNS-th layer of
C the computation space
      ISNS=1
      IRNS=2

C Source and destination rank IDs
      ID_SRC=0
      ID_DEST=1

C Send a node layer using derived data type 1
      IF(MYID.EQ.ID_SRC) THEN
        CALL MPI_SEND(NOD_LAY_DATA_BF(ISNS),1,NOD_LAY_DATA_BF_TYPE,
     &                ID_DEST,123,MPI_COMM_WORLD,IERR)
      END IF

C Receive a node layer using derived data type 2
      IF(MYID.EQ.ID_DEST) THEN
        CALL MPI_RECV(NOD_LAY_DATA(IRNS),1,NOD_LAY_DATA_TYPE,
     &                ID_SRC,123,MPI_COMM_WORLD,ISTATUS,IERR)
      END IF

      PRINT*,'MYID=',MYID,'IERR=',IERR
C Output the received data to verify it
      IF(MYID.EQ.ID_DEST) THEN

      PRINT*,ID_SRC,NOD_LAY_DATA_BF(ISNS),
     &       ID_DEST,NOD_LAY_DATA(IRNS)

      DO I=1,N_NOD_CROSEC
      NOD1=I+(ISNS-1)*N_NOD_CROSEC
      NOD2=I+(IRNS-1)*N_NOD_CROSEC

      PRINT*,NOD1,POSITION_BF(1,NOD1),NOD2,POSITION(1,NOD2)
      PRINT*,NOD1,POSITION_BF(2,NOD1),NOD2,POSITION(2,NOD2)
      PRINT*,NOD1,POSITION_BF(3,NOD1),NOD2,POSITION(3,NOD2)

      PRINT*,NOD1,IFLAG_BD_COND_BF(1,NOD1),NOD2,IFLAG_BD_COND(1,NOD2)
      PRINT*,NOD1,IFLAG_BD_COND_BF(2,NOD1),NOD2,IFLAG_BD_COND(2,NOD2)
      PRINT*,NOD1,IFLAG_BD_COND_BF(3,NOD1),NOD2,IFLAG_BD_COND(3,NOD2)

      END DO

      END IF

C Free the derived data types
      CALL MPI_TYPE_FREE(NOD_LAY_DATA_BF_TYPE,IERR)
      CALL MPI_TYPE_FREE(NOD_LAY_DATA_TYPE,IERR)
      CALL MPI_FINALIZE(IERR)

      END
[/CODE]
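For reference, the derived type is meant to replace three plain sends per layer, roughly like this sketch (untested; the receive side would mirror it). Note that each field advances by a different number of bytes from one layer to the next:

[CODE]
C Sketch: layer ISNS sent field by field, without a derived type
      CALL MPI_SEND(NOD_LAY_DATA_BF(ISNS),1,MPI_INTEGER,
     &              ID_DEST,123,MPI_COMM_WORLD,IERR)
      CALL MPI_SEND(POSITION_BF(1,(ISNS-1)*N_NOD_CROSEC+1),
     &              3*N_NOD_CROSEC,MPI_DOUBLE_PRECISION,
     &              ID_DEST,124,MPI_COMM_WORLD,IERR)
      CALL MPI_SEND(IFLAG_BD_COND_BF(1,(ISNS-1)*N_NOD_CROSEC+1),
     &              3*N_NOD_CROSEC,MPI_INTEGER,
     &              ID_DEST,125,MPI_COMM_WORLD,IERR)
[/CODE]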