[MPICH2-dev] running out of fd's?
Rajeev Thakur
thakur at mcs.anl.gov
Wed Jan 30 13:18:18 CST 2008
You are running 100 processes on a single machine, right? What does the
"limit" command return? If it says descriptors=256, try bumping up the
number to 2048.
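To raise it, use "limit descriptors 2048" (csh) or "ulimit -n 2048"
(bash) in the shell that launches mpiexec. For reference, here is a
minimal C sketch (an illustration, not MPICH2 code) of the same
operation from within a program: it reads and raises RLIMIT_NOFILE,
the per-process limit that "descriptors" reports.

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    /* Read the current per-process descriptor limit. */
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    printf("descriptors: soft=%llu hard=%llu\n",
           (unsigned long long) rl.rlim_cur,
           (unsigned long long) rl.rlim_max);

    /* Raise the soft limit toward 2048, capped by the hard limit. */
    rl.rlim_cur = (rl.rlim_max < 2048) ? rl.rlim_max : 2048;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) {
        perror("setrlimit");
        return 1;
    }
    return 0;
}

With 100 processes on one machine, mpiexec typically holds several
pipe or socket descriptors per child, so a 256-descriptor default can
be exhausted, which would explain the select() failure below.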
Rajeev
> -----Original Message-----
> From: owner-mpich2-dev at mcs.anl.gov
> [mailto:owner-mpich2-dev at mcs.anl.gov] On Behalf Of Nicholas Karonis
> Sent: Wednesday, January 30, 2008 1:06 PM
> To: mpich2-dev at mcs.anl.gov
> Cc: Nicholas Karonis; Brian Toonen
> Subject: [MPICH2-dev] running out of fd's?
>
> Hi,
>
> I just installed MPICH2-1.0.6 on Mac OS X 10.5.1 (i.e., Leopard).
> I configured it with the gforker process manager, and it was all
> compiled using the GNU C and C++ compilers that came with the
> developer tools on the Mac OS X disk.
>
> The build seemed to go OK, so I tried testing it with a small
> ring program (source at bottom). When I run the ring with -np 75
> all is OK, but when I increase it to 100 I get an error message:
>
> /* running with 75, all OK */
> mpro% mpiexec -np 75 ring
> nprocs 75 received 75
>
> /* attempting to run with 100, problem :-( */
> mpro% mpiexec -np 100 ring
> Error in system call select: Bad file descriptor
> mpro%
>
> Any suggestions?
>
> Thanks in advance,
> Nick
>
>
> --- app source
> mpro% cat ring.c
> #include "mpi.h"
> #include <stdio.h>
> #include <stdlib.h>
>
> int main(int argc, char *argv[])
> {
>     int nprocs, myid;
>     int val;
>     MPI_Status st;
>
>     MPI_Init(&argc, &argv);
>
>     MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
>
>     if (nprocs > 1)
>     {
>         MPI_Comm_rank(MPI_COMM_WORLD, &myid);
>         if (myid == 0)
>         {
>             /* rank 0 starts the token and waits for it to
>                come back around the ring */
>             val = 1;
>             MPI_Send(&val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
>             MPI_Recv(&val, 1, MPI_INT, nprocs-1, 0,
>                      MPI_COMM_WORLD, &st);
>             printf("nprocs %d received %d\n", nprocs, val);
>         }
>         else
>         {
>             /* every other rank increments the token and
>                passes it along */
>             MPI_Recv(&val, 1, MPI_INT, myid-1, 0, MPI_COMM_WORLD, &st);
>             val++;
>             MPI_Send(&val, 1, MPI_INT, (myid+1)%nprocs, 0,
>                      MPI_COMM_WORLD);
>         } /* endif */
>     } /* endif */
>
>     MPI_Finalize();
>     exit(0);
>
> } /* end main() */
> mpro%
>
>