[mpich-discuss] Is MPI developed for Fortran as well as C?
James Dinan
dinan at mcs.anl.gov
Fri Apr 8 15:31:07 CDT 2011
Hi Hossein,
What's the reason for the columntype? It seems like you should be able
to express your struct directly with a call to MPI_Type_create_struct:
MPI_TYPE_CREATE_STRUCT(INTEGER COUNT, INTEGER ARRAY_OF_BLOCKLENGTHS(*),
INTEGER(KIND=MPI_ADDRESS_KIND) ARRAY_OF_DISPLACEMENTS(*),
INTEGER ARRAY_OF_TYPES(*), INTEGER NEWTYPE, INTEGER IERROR)
count = 4
array_of_blocklengths = { 7, 11, 2, 2*6 }
array_of_displacements = { Particle.IW1, Particle.IMP4, Particle.nomix,
Particle.zj(0) } (use MPI_Get_address())
array_of_types = { MPI_INTEGER4, MPI_INTEGER2, MPI_LOGICAL, MPI_INTEGER4 }
newtype = particletype
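In Fortran that might look like the sketch below. It assumes a Particle definition matching the description in your mail (4 INTEGER4, 11 INTEGER2, 2 LOGICAL, and two INTEGER4 arrays of size 6); the field names here are made up, and the real displacements must come from MPI_Get_address:

```fortran
! Sketch: building an MPI datatype for a derived type with
! MPI_Type_create_struct. Field names (iw1, imp4, nomix, zj) are
! hypothetical; only the field counts follow the description above.
program struct_type_sketch
  implicit none
  include 'mpif.h'
  type particle
     sequence                  ! keep components in declaration order
     integer*4 :: iw1(4)
     integer*2 :: imp4(11)
     logical   :: nomix(2)
     integer*4 :: zj(6,2)      ! two INTEGER4 arrays of size 6
  end type particle
  type(particle) :: sample
  integer :: blocklens(4), types(4), particletype, ierr, k
  integer(kind=MPI_ADDRESS_KIND) :: displs(4), base

  call MPI_Init(ierr)

  blocklens = (/ 4, 11, 2, 12 /)
  types     = (/ MPI_INTEGER4, MPI_INTEGER2, MPI_LOGICAL, MPI_INTEGER4 /)

  ! Measure displacements with MPI_Get_address rather than assuming them,
  ! so any padding the compiler inserts is accounted for.
  call MPI_Get_address(sample,       base,      ierr)
  call MPI_Get_address(sample%iw1,   displs(1), ierr)
  call MPI_Get_address(sample%imp4,  displs(2), ierr)
  call MPI_Get_address(sample%nomix, displs(3), ierr)
  call MPI_Get_address(sample%zj,    displs(4), ierr)
  do k = 1, 4
     displs(k) = displs(k) - base
  end do

  call MPI_Type_create_struct(4, blocklens, displs, types, particletype, ierr)
  call MPI_Type_commit(particletype, ierr)
  ! ... use particletype in sends/receives ...
  call MPI_Type_free(particletype, ierr)
  call MPI_Finalize(ierr)
end program struct_type_sketch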
Best,
~Jim.
On 4/8/11 5:11 AM, Hossein Beiramy wrote:
> Dear Chan,
>
> I use a PC with the following specification:
>
> Windows 7 OS, 64 Bit, 4 GB of RAM, Intel® Core(TM)2Duo CPU 3.34 GHz
>
> In my small project, I want to send and receive an array of a derived
> data type. The project is explained below; the source code is also
> attached.
>
> I defined a Particle data type that contains 4 INTEGER4, 11 INTEGER2,
> 2 LOGICAL and 2 INTEGER4 arrays of size 6. To define an MPI data type
> for the Particle type, I first defined a columntype type for an
> INTEGER4 array of size 6. Then I constructed particletype and
> committed it. I defined two arrays (p, particles) of type
> particletype and allocated them:
>
> ALLOCATE(particles(0:NELEM)); ALLOCATE(p(0:1010)).
>
> I initialized the particles array. To send specific segments of the
> particles array from the server process (rank=0) to the array p on the
> client processes, I defined the indextype MPI data type. I want the
> segments to arrive on the client processes contiguously, without gaps.
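> (One way to build such an indexed type in Fortran is sketched below.
> The segment lengths and offsets are made-up examples, and the base
> type could equally be the committed particletype:)
>
> ```fortran
> ! Sketch: MPI_Type_indexed picks scattered segments of an array so
> ! they can be received contiguously on the other side. The lengths
> ! and displacements here are illustrative only.
> program indexed_sketch
>   implicit none
>   include 'mpif.h'
>   integer :: indextype, ierr
>   integer :: blocklens(3), displs(3)
>
>   call MPI_Init(ierr)
>   blocklens = (/ 2, 3, 1 /)   ! segment lengths, in elements
>   displs    = (/ 0, 5, 10 /)  ! segment start offsets, in elements
>   call MPI_Type_indexed(3, blocklens, displs, MPI_INTEGER, indextype, ierr)
>   call MPI_Type_commit(indextype, ierr)
>   ! Sending one element of indextype transmits the selected segments;
>   ! the receiver can take them as 6 contiguous MPI_INTEGERs.
>   call MPI_Type_free(indextype, ierr)
>   call MPI_Finalize(ierr)
> end program indexed_sketch
> ```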
>
> On the client process the last cell is not received correctly. Even
> when sending a single cell, the result is as follows:
>
> D:\Console1\Debug>mpiexec -n 2 Console1.exe
>
> rank=1p=17818 F
> T-842150451-842150451-842150451-842150451-842150451-842150451
>
> A similar question about sending arrays of structures was raised
> previously; that discussion is also attached.
> --- On Thu, 4/7/11, Anthony Chan <chan at mcs.anl.gov> wrote:
>
>
> From: Anthony Chan <chan at mcs.anl.gov>
> Subject: Re: [mpich-discuss] Is MPI developed for Fortran as well as C?
> To: mpich-discuss at mcs.anl.gov
> Date: Thursday, April 7, 2011, 8:36 PM
>
>
>
> ----- Original Message -----
> > Dear all,
> > Is MPI developed for Fortran as well as C?
> >
>
> MPI standard defines a Fortran binding which is provided by MPICH2.
> Do you have trouble compiling/running the sample code, or trouble
> installing MPICH2?
>
> A.Chan
>
> >
> >
> > I work on a Fortran project and I want to do message passing
> > between nodes with MPI routines. I was able to send and receive
> > simple arrays, but sending and receiving an array of a derived data
> > type did not work. The attached files include my previous Email to
> > the MPICH Discuss mailing list, in which I reported the results of
> > testing an example I got from Prof. zkovacs; others also commented
> > in that thread. Prof. zkovacs wanted to send and receive an array of
> > a data type in the C programming language. I want to do similar work
> > in Fortran.
> >
> > Please give your opinion, or if MPICH's Fortran support is not
> > complete, suggest another MPI implementation for Fortran.
> > Another question: I want to write the Fortran equivalent of the
> > following C code. The C code works correctly but the Fortran code
> > does not. What is the problem?
> > The attached files contain a *.pdf of the previous Email and the
> > following example; the source code files are also attached.
> >
> > #include "mpi.h"
> > #include <stdio.h>
> > #include <stdlib.h>
> >
> > int main (int argc, char *argv[])
> > {
> > int position, i, j, a[2], myrank, num_proc;
> > char buff[1000];
> > MPI_Status stat;
> > MPI_Init(&argc, &argv);
> > MPI_Comm_size(MPI_COMM_WORLD, &num_proc );
> > MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
> > if (myrank == 0)
> > {
> > /* SENDER CODE */
> > i = 1; j = 2; a[0] = 3;
> > printf( "Proc %d: sending %u %u %u.th portion to proc 0.\n", myrank,
> > i, j, a );
> > position = 0;
> > MPI_Pack(&i, 1, MPI_INT, buff, 1000, &position, MPI_COMM_WORLD);
> > MPI_Pack(&j, 1, MPI_INT, buff, 1000, &position, MPI_COMM_WORLD);
> > MPI_Send( buff, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
> > }
> > else /* RECEIVER CODE */
> > {
> > MPI_Recv( a, 2, MPI_INT, 0, 0, MPI_COMM_WORLD,&stat);
> > printf( "Proc %d: receiving %u %u %u.th portion to proc 0.\n", myrank,
> > i, j, a[0] );
> > }
> > MPI_Finalize();
> > return 0;
> > }
> >
> >
> >
> > program main
> > implicit none
> > include 'mpif.h'
> > integer position, i, j, a(0:1), rank, numtasks;
> > character buff(0:1000);
> > integer stat(MPI_STATUS_SIZE)
> > call MPI_INIT(ierr)
> > call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
> > call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
> > if (rank == 0) then
> > !/* SENDER CODE */
> > position = 0;
> > i=1;j=2;
> > CALL MPI_Pack(i, 1, MPI_INTEGER, buff, 1000, position, MPI_COMM_WORLD, ierr);
> > CALL MPI_Pack(j, 1, MPI_INTEGER, buff, 1000, position, MPI_COMM_WORLD, ierr);
> > CALL MPI_Send( buff, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD, ierr);
> > else !/* RECEIVER CODE */
> > CALL MPI_Recv( a, 2, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, ierr);
> > position = 0;
> > CALL MPI_Unpack(a, 2, position, i, 1, MPI_INTEGER, MPI_COMM_WORLD, ierr);
> > CALL MPI_Unpack(a, 2, position, j, 1, MPI_INTEGER, MPI_COMM_WORLD, ierr);
> > Write(*,*) 'i , j = ' , i,j
> > END IF
> > call MPI_FINALIZE(ierr)
> > end program main
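> > (For reference, a sketch of a version whose receiver would unpack
> > correctly: the packed bytes are received into the character buffer
> > as MPI_PACKED, MPI_Recv gets a status argument before ierr, and
> > ierr is declared under IMPLICIT NONE. This assumes the standard
> > mpif.h Fortran binding.)
> >
> > ```fortran
> > ! Sketch of a corrected pack/unpack exchange in Fortran:
> > ! receive the packed bytes into the character buffer, then unpack.
> > program unpack_sketch
> >   implicit none
> >   include 'mpif.h'
> >   integer :: position, i, j, rank, numtasks, ierr
> >   character :: buff(0:1000)
> >   integer :: stat(MPI_STATUS_SIZE)
> >
> >   call MPI_INIT(ierr)
> >   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
> >   call MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)
> >   if (rank == 0) then
> >      ! SENDER: pack two integers, send the packed bytes
> >      position = 0
> >      i = 1; j = 2
> >      call MPI_Pack(i, 1, MPI_INTEGER, buff, 1000, position, MPI_COMM_WORLD, ierr)
> >      call MPI_Pack(j, 1, MPI_INTEGER, buff, 1000, position, MPI_COMM_WORLD, ierr)
> >      call MPI_Send(buff, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD, ierr)
> >   else
> >      ! RECEIVER: receive as MPI_PACKED into the buffer, with a status
> >      call MPI_Recv(buff, 1000, MPI_PACKED, 0, 0, MPI_COMM_WORLD, stat, ierr)
> >      position = 0
> >      call MPI_Unpack(buff, 1000, position, i, 1, MPI_INTEGER, MPI_COMM_WORLD, ierr)
> >      call MPI_Unpack(buff, 1000, position, j, 1, MPI_INTEGER, MPI_COMM_WORLD, ierr)
> >      write(*,*) 'i , j = ', i, j
> >   end if
> >   call MPI_FINALIZE(ierr)
> > end program unpack_sketch
> > ```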
> >
> >
> >
> > Best Regards,
> > --
> > Hossein Beyrami
> >
> > _______________________________________________
> > mpich-discuss mailing list
> > mpich-discuss at mcs.anl.gov
> > https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss