[mpich-discuss] mpich2-1.1.1p1/nemesis error
Bryan Putnam
bfp at purdue.edu
Mon Sep 21 09:04:44 CDT 2009
On Sat, 19 Sep 2009, Rajeev Thakur wrote:
> Thanks. We are looking into this bug.
>
> Rajeev
Rajeev,
Thanks. In case you need them, these are the configure options I used.
$MPI_SRC/configure \
--with-device=ch3:nemesis \
--enable-fast \
--enable-threads \
--enable-debuginfo \
--enable-sharedlibs=gcc \
--enable-f77 \
--enable-f90 \
--enable-cxx \
--enable-romio \
--with-pm=mpd:hydra \
--without-mpe \
--prefix=$MPI_INSTALL/$CVER \
> configure_$CVER.log 2>&1
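
And here is roughly how we build and launch the test program (a minimal
sketch; the source file name, the machinefile, and its host names are
placeholders for our local setup):

$MPI_INSTALL/$CVER/bin/mpicc -o subarray_test subarray_test.c

# both ranks on one node: runs to completion
$MPI_INSTALL/$CVER/bin/mpiexec -n 2 ./subarray_test

# ranks on two separate nodes: hangs or errors with nemesis
$MPI_INSTALL/$CVER/bin/mpiexec -machinefile machines -n 2 ./subarray_test

Bryan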
>
> > -----Original Message-----
> > From: mpich-discuss-bounces at mcs.anl.gov
> > [mailto:mpich-discuss-bounces at mcs.anl.gov] On Behalf Of Bryan Putnam
> > Sent: Monday, September 14, 2009 10:42 AM
> > To: mpich-discuss at mcs.anl.gov
> > Subject: [mpich-discuss] mpich2-1.1.1p1/nemesis error
> >
> > Hi All,
> >
> > We've run across a piece of C code that appears to run
> > successfully with mpich, mvapich2-1.4rc2, and mpich2-1.1.1p1
> > (built with ssm or sockets), but fails with mpich2-1.1.1p1
> > (built with nemesis). The code is meant to be run with two
> > processes, and it either hangs or gives an error message when
> > the two processes are on separate nodes. It appears to work
> > fine when both processes are on the same node. I've included
> > it below in case you'd like to try it out.
> >
> > Thanks!
> > Bryan
> >
> > =======================================================
> >
> > #include <stdio.h>
> > #include <string.h>
> > #include <mpi.h>
> >
> > /* 256 x 256 x 512 doubles = 256 MB, exchanged via subarray datatypes */
> > double array[256][256][512];
> >
> > int main(int argc, char *argv[])
> > {
> > int myrank;
> > MPI_Status stats[2];
> > MPI_Datatype subarray1, subarray2, subarray3, subarray4;
> > int array_size[] = {256, 256, 512};
> > int array_subsize1[] = {128,128,512};
> > int array_subsize2[] = {128,128,512};
> >
> > int array_start1[] = {0, 0, 0};
> > int array_start2[] = {128, 128, 0};
> > int i;
> > double *ptr = (double *)array;
> >
> > MPI_Init(&argc, &argv);
> > MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
> >
> > printf("MPI_Init okay...\n");
> >
> > if(myrank == 0)
> > {
> > /* Create a subarray datatype */
> > MPI_Type_create_subarray(3, array_size,
> > array_subsize1, array_start1, MPI_ORDER_C, MPI_DOUBLE, &subarray1);
> > MPI_Type_commit(&subarray1);
> > MPI_Type_create_subarray(3, array_size,
> > array_subsize2, array_start2, MPI_ORDER_C, MPI_DOUBLE, &subarray2);
> > MPI_Type_commit(&subarray2);
> >
> > for(i = 0; i < 256*256*512; i++)
> > ptr[i] = 0.0;
> >
> > printf("data init ok\n");
> > fflush(stdout);
> >
> > MPI_Send(array, 1, subarray1, 1, 123, MPI_COMM_WORLD);
> >
> > printf("sent ok\n");
> > fflush(stdout);
> >
> > MPI_Recv(array, 1, subarray2, 1, 124,
> > MPI_COMM_WORLD, &stats[1]);
> >
> > printf("received ok\n");
> > printf("array: [0][0][0] = %f, [128][128][0]
> > = %f\n", array[0][0][0], array[128][128][0]);
> > fflush(stdout);
> > }
> > else
> > {
> > /* Create a subarray datatype */
> > MPI_Type_create_subarray(3, array_size,
> > array_subsize1, array_start1, MPI_ORDER_C, MPI_DOUBLE, &subarray3);
> > MPI_Type_commit(&subarray3);
> > MPI_Type_create_subarray(3, array_size,
> > array_subsize2, array_start2, MPI_ORDER_C, MPI_DOUBLE, &subarray4);
> > MPI_Type_commit(&subarray4);
> >
> > for(i = 0; i < 256*256*512; i++)
> > ptr[i] = 1.0;
> >
> > printf("data init ok\n");
> > fflush(stdout);
> >
> > MPI_Recv(array, 1, subarray3, 0, 123,
> > MPI_COMM_WORLD, &stats[0]);
> >
> > printf("received ok\n");
> > printf("array: [0][0][0] = %f, [128][128][0]
> > = %f\n", array[0][0][0], array[128][128][0]);
> > fflush(stdout);
> >
> > MPI_Send(array, 1, subarray4, 0, 124, MPI_COMM_WORLD);
> >
> > printf("sent ok\n");
> >
> > fflush(stdout);
> > }
> >
> > MPI_Finalize();
> > return 0;
> > }
> >
>
>