[MPICH] Getting BLACS/SCALAPACK running with mpich2

Steve Kargl sgk at troutmask.apl.washington.edu
Tue Nov 14 16:09:49 CST 2006


I've installed MPICH2 on a small cluster running
FreeBSD.  The Fortran compiler is gfortran-4.2.0.
In setting up BLACS (to eventually use SCALAPACK),
I've run its test program

node10:kargl[212] mpiexec -n 4 ./xFbtest
BLACS WARNING 'No need to set message ID range due to MPI communicator.'
from {-1,-1}, pnum=0, Contxt=-1, on line 18 of file 'blacs_set_.c'.

BLACS WARNING 'No need to set message ID range due to MPI communicator.'
from {-1,-1}, pnum=1, Contxt=-1, on line 18 of file 'blacs_set_.c'.

BLACS WARNING 'No need to set message ID range due to MPI communicator.'
from {-1,-1}, pnum=2, Contxt=-1, on line 18 of file 'blacs_set_.c'.

BLACS WARNING 'No need to set message ID range due to MPI communicator.'
from {-1,-1}, pnum=3, Contxt=-1, on line 18 of file 'blacs_set_.c'.

{0,2}, pnum=2, Contxt=0, killed other procs, exiting with error #-1.

rank 2 in job 17  node10.cimu.org_60092   caused collective abort of all ranks
  exit status of rank 2: killed by signal 9 
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2

I've searched the web for information and possible patches, but
came up empty.  Anyone have any comments?
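
In case it helps narrow things down, the failure happens with nothing
more elaborate than the usual grid setup.  A minimal sketch of that
setup in Fortran is below (standard BLACS calls only; this is not the
actual xFbtest source):

! Minimal BLACS grid setup (a sketch, not the tester itself): query the
! process info, build a 1 x nprocs grid, report each process's
! coordinates, and shut BLACS down cleanly.
program blacs_hello
  implicit none
  integer :: iam, nprocs, ictxt, nprow, npcol, myrow, mycol

  call blacs_pinfo(iam, nprocs)     ! my process number and total count
  nprow = 1
  npcol = nprocs                    ! trivial 1 x nprocs process grid
  call blacs_get(-1, 0, ictxt)      ! obtain the default system context
  call blacs_gridinit(ictxt, 'Row', nprow, npcol)
  call blacs_gridinfo(ictxt, nprow, npcol, myrow, mycol)
  write (*,*) 'process', iam, 'of', nprocs, 'at grid position', myrow, mycol
  call blacs_gridexit(ictxt)
  call blacs_exit(0)
end program blacs_hello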

PS: MPICH2 itself works on the cluster: an application I wrote
on top of MPICH2 gives the desired results.
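
To be concrete about what I mean by "works": a plain MPI Fortran
program of roughly the following shape (init, rank/size query, one
collective, finalize) is the kind of thing that runs fine here under
mpiexec.  This is only a sketch, not the application in question:

! Plain MPI sanity check (a sketch); nothing BLACS-related is involved.
program mpi_check
  implicit none
  include 'mpif.h'
  integer :: ierr, rank, nprocs, total

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  ! Every rank contributes its own rank number; all print the same sum.
  call MPI_Allreduce(rank, total, 1, MPI_INTEGER, MPI_SUM, &
                     MPI_COMM_WORLD, ierr)
  write (*,*) 'rank', rank, 'of', nprocs, ': sum of ranks =', total
  call MPI_Finalize(ierr)
end program mpi_check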

-- 
Steve
