[MPICH] Problems with mpicc

goodell at mcs.anl.gov
Thu Jul 19 16:59:21 CDT 2007


Hi Jeff,

I don't have any specific ideas about why your mpicc might be hanging, but
when I run into this type of situation with other programs I usually
strace them.  You can try:

% strace mpicc code.c -o test

although if mpicc forks off child processes then you'll probably want to
pass the '-f' option to follow forks.  Alternatively, you can start your
mpicc command as usual and then open another shell.  In that shell, run a
'ps auxwwf' and look for your mpicc and any child processes that might be
hanging off of it.  Take that PID and run:

% strace -p <PID_HERE>

There's a good chance that you'll find your mpicc (or a child thereof) is
blocked on some system call, such as a read against a broken NFS mount. 
Or it might be stuck in a loop trying to access the same resource over and
over again.
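
Putting that together, a minimal sketch might look like this (the trace file
name and the pgrep pattern are just for illustration; adjust them to whatever
'ps' actually shows on your system):

% strace -f -o /tmp/mpicc.trace mpicc code.c -o test
% tail /tmp/mpicc.trace          # the last lines show the call it is blocked in

or, for a run that is already hung, from a second shell:

% pgrep -lf 'mpicc|cc1|ld'       # illustrative: list the wrapper's likely children
% strace -p <PID_HERE>

It might also be worth running 'mpicc -show code.c -o test'; at least with
MPICH2's compiler wrappers that just prints the underlying gcc command without
executing it, so you can tell whether the wrapper script itself or the real
compile is what hangs.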

Good luck,
-Dave

> I can't even seem to compile to create test. I just tried a
> different binary output name with the same result
> (it hung for about 5 minutes).
>
> Jeff
>
>> Just in case this might be the problem, I'll remind you that there is a
>> system program named "test" which does nothing but set the exit code. Are
>> you sure you aren't accidentally running this program instead of the one
>> compiled from code.c but linked as "test"?
>> Rusty
>> On Jul 19, 2007, at 3:47 PM, Jeffrey B. Layton wrote:
>>> Afternoon,
>>> I built mpich2-1.0.5p4 with g95 and gcc 3.4.3 (see below).
>>> Here's the configuration I used:
>>> ./configure  -prefix=/home/laytonj/bin/mpich2-1.0.5p4-g95  \
>>> --enable-f77 --enable-f90 --enable-romio --disable-mpe \
>>> --with-file-system="nfs+ufs"
>>> MPICH2 seemed to build and install just fine. Then I tried
>>> compiling one of the non-MPI-IO codes from the MPI-2
>>> book (see below). When I compile:
>>> mpicc code.c -o test
>>> it just hangs for a long time (I've let it sit for over 20 mins.). The
>>> load stays around 1.0 and the memory usage doesn't increase during the
>>> whole time.
>>> I know this one is a toughy. Any ideas?
>>> Thanks!
>>> Jeff
>>> gcc configuration:
>>> % gcc -v
>>> Reading specs from /usr/lib/gcc/i386-redhat-linux/3.4.3/specs
>>> Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
>>> --infodir=/usr/share/info --enable-shared --enable-threads=posix
>>> --disable-checking --with-system-zlib --enable-__cxa_atexit
>>> --disable-libunwind-exceptions --enable-java-awt=gtk
>>> --host=i386-redhat-linux
>>> Thread model: posix
>>> gcc version 3.4.3 20050227 (Red Hat 3.4.3-22.1)
>>> sample code:
>>> /* example of sequential Unix write into a common file */
>>> #include "mpi.h"
>>> #include <stdio.h>
>>> #define BUFSIZE 100
>>> int main(int argc, char *argv[])
>>> {
>>>    int i, myrank, numprocs, buf[BUFSIZE];
>>>    MPI_Status status;
>>>    FILE *myfile;
>>>    MPI_Init(&argc, &argv);
>>>    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
>>>    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
>>>    for (i=0; i<BUFSIZE; i++)
>>>        buf[i] = myrank * BUFSIZE + i;
>>>    if (myrank != 0)
>>>        MPI_Send(buf, BUFSIZE, MPI_INT, 0, 99, MPI_COMM_WORLD);
>>>    else {
>>>        myfile = fopen("testfile", "w");
>>>        fwrite(buf, sizeof(int), BUFSIZE, myfile);
>>>        for (i=1; i<numprocs; i++) {
>>>            MPI_Recv(buf, BUFSIZE, MPI_INT, i, 99, MPI_COMM_WORLD,
>>>                     &status);
>>>            fwrite(buf, sizeof(int), BUFSIZE, myfile);
>>>        }
>>>        fclose(myfile);
>>>    }
>>>    MPI_Finalize();
>>>    return 0;
>>> }