[MPICH] Problems with mpicc

Jeffrey B. Layton laytonjb at charter.net
Thu Jul 19 15:47:44 CDT 2007


Afternoon,

I built mpich2-1.0.5p4 with g95 and gcc 3.4.3 (see below).
Here's the configuration I used:

./configure --prefix=/home/laytonj/bin/mpich2-1.0.5p4-g95 \
 --enable-f77 --enable-f90 --enable-romio --disable-mpe \
 --with-file-system="nfs+ufs"
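
(I didn't set the compilers explicitly on that line; if I rebuild, I'd
point configure at them directly, something along these lines, since
MPICH2's configure honors the CC/F77/F90 environment variables:

env CC=gcc F77=g95 F90=g95 \
 ./configure --prefix=/home/laytonj/bin/mpich2-1.0.5p4-g95 \
 --enable-f77 --enable-f90 --enable-romio --disable-mpe \
 --with-file-system="nfs+ufs"

just to rule out configure picking up some other compiler on its own.)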

MPICH2 seemed to build and install just fine. Then I tried
compiling one of the non-MPI-IO example codes from the MPI-2
book (see below). When I compile it:

mpicc code.c -o test

it just hangs (I've let it sit for over 20 minutes). The load
stays around 1.0 and memory usage doesn't increase during the
whole time.
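
(One thing that might help anyone trying to reproduce this: mpicc is
just a wrapper script, and

mpicc -show code.c -o test

should print the underlying compiler command without actually running
it. If that returns immediately, the hang is presumably in the real
gcc compile/link step, which could then be run by hand.)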

I know this one is a toughie. Any ideas?

Thanks!

Jeff

gcc configuration:
% gcc -v
Reading specs from /usr/lib/gcc/i386-redhat-linux/3.4.3/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man 
--infodir=/usr/share/info --enable-shared --enable-threads=posix 
--disable-checking --with-system-zlib --enable-__cxa_atexit 
--disable-libunwind-exceptions --enable-java-awt=gtk 
--host=i386-redhat-linux
Thread model: posix
gcc version 3.4.3 20050227 (Red Hat 3.4.3-22.1)

sample code:
/* example of sequential Unix write into a common file */
#include "mpi.h"
#include <stdio.h>
#define BUFSIZE 100

int main(int argc, char *argv[])
{
    int i, myrank, numprocs, buf[BUFSIZE];
    MPI_Status status;
    FILE *myfile;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    /* fill the local buffer with values unique to this rank */
    for (i=0; i<BUFSIZE; i++)
        buf[i] = myrank * BUFSIZE + i;
    if (myrank != 0)
        /* all ranks except 0 send their buffer to rank 0 */
        MPI_Send(buf, BUFSIZE, MPI_INT, 0, 99, MPI_COMM_WORLD);
    else {
        /* rank 0 writes its own data, then receives and appends
           each other rank's data in rank order */
        myfile = fopen("testfile", "w");
        fwrite(buf, sizeof(int), BUFSIZE, myfile);
        for (i=1; i<numprocs; i++) {
            MPI_Recv(buf, BUFSIZE, MPI_INT, i, 99, MPI_COMM_WORLD,
                     &status);
            fwrite(buf, sizeof(int), BUFSIZE, myfile);
        }
        fclose(myfile);
    }
    MPI_Finalize();
    return 0;
}
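
For reference, once it does compile, this is how I'd run it (assuming
an mpd ring is already up, since this is MPICH2 1.0.x):

mpiexec -n 4 ./test
od -t d4 testfile | head

With 4 processes, testfile should contain the integers 0 through 399
in rank order.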
