[MPICH] Getting BLACS/SCALAPACK running with mpich2

Steve Kargl sgk at troutmask.apl.washington.edu
Tue Nov 14 21:20:55 CST 2006


I'll follow up on the scalapack list.  I'll note, however, that
I ignored the warning and installed the BLACS library anyway.  I
then built ScaLAPACK, and its test programs seem to work fine
with MPICH2.  (See my note below the quoted thread for what I
think the warning actually means.)
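
For reference, as far as I can tell plain ScaLAPACK codes only go
through the usual grid setup calls and never touch the message ID
range, so the warning should only show up in the BLACS tester.  A
minimal sanity check along the lines of the sketch below (my own
example, not part of the BLACS test suite; the 1 x NPROCS grid shape
is arbitrary) is enough to confirm that the library links and
initializes under MPICH2:

      PROGRAM BLACSCHK
*     Minimal BLACS check: set up a process grid over all MPI ranks,
*     report its shape, and shut everything down again.
      INTEGER IAM, NPROCS, ICTXT, NPROW, NPCOL, MYROW, MYCOL
*     Number of processes in the job and this process's rank.
      CALL BLACS_PINFO( IAM, NPROCS )
*     Get the default system context and lay the processes out as a
*     1 x NPROCS grid (any factorization of NPROCS would do).
      CALL BLACS_GET( -1, 0, ICTXT )
      NPROW = 1
      NPCOL = NPROCS
      CALL BLACS_GRIDINIT( ICTXT, 'Row-major', NPROW, NPCOL )
      CALL BLACS_GRIDINFO( ICTXT, NPROW, NPCOL, MYROW, MYCOL )
      IF( IAM.EQ.0 ) WRITE( *, * ) 'grid is', NPROW, 'x', NPCOL
*     Release the grid and let BLACS shut MPI down.
      CALL BLACS_GRIDEXIT( ICTXT )
      CALL BLACS_EXIT( 0 )
      END

Build it with mpif77 (or gfortran plus the MPICH2 and BLACS link
flags) and launch it with mpiexec the same way as xFbtest.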

-- 
steve

On Tue, Nov 14, 2006 at 09:16:37PM -0600, Rajeev Thakur wrote:
> I don't know. These warnings are coming from BLACS. You can try contacting
> the BLACS folks.
> 
> Rajeev
> > -----Original Message-----
> > From: owner-mpich-discuss at mcs.anl.gov 
> > [mailto:owner-mpich-discuss at mcs.anl.gov] On Behalf Of Steve Kargl
> > Sent: Tuesday, November 14, 2006 4:10 PM
> > To: mpich-discuss at mcs.anl.gov
> > Subject: [MPICH] Getting BLACS/SCALAPACK running with mpich2
> > 
> > I've installed MPICH2 on a small cluster running
> > FreeBSD.  The Fortran compiler is gfortran-4.2.0.
> > In setting up BLACS (to eventually use SCALAPACK),
> > I've run its test program
> > 
> > node10:kargl[212] mpiexec -n 4 ./xFbtest
> > BLACS WARNING 'No need to set message ID range due to MPI communicator.'
> > from {-1,-1}, pnum=0, Contxt=-1, on line 18 of file 'blacs_set_.c'.
> > 
> > BLACS WARNING 'No need to set message ID range due to MPI communicator.'
> > from {-1,-1}, pnum=1, Contxt=-1, on line 18 of file 'blacs_set_.c'.
> > 
> > BLACS WARNING 'No need to set message ID range due to MPI communicator.'
> > from {-1,-1}, pnum=2, Contxt=-1, on line 18 of file 'blacs_set_.c'.
> > 
> > BLACS WARNING 'No need to set message ID range due to MPI communicator.'
> > from {-1,-1}, pnum=3, Contxt=-1, on line 18 of file 'blacs_set_.c'.
> > 
> > {0,2}, pnum=2, Contxt=0, killed other procs, exiting with error #-1.
> > 
> > rank 2 in job 17  node10.cimu.org_60092   caused collective abort of all ranks
> >   exit status of rank 2: killed by signal 9 
> > [cli_2]: aborting job:
> > application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
> > 
> > I've searched the web for information and possible patches, but 
> > came up empty.  Anyone have any comments?
> > 
> > PS: MPICH2 itself works on the cluster: an application I wrote
> > that uses MPICH2 gives the desired results.
> > 
> > -- 
> > Steve
> > 
> > 
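
If I'm reading the warning right, it is informational: with the
MPI-based BLACS every context is backed by its own MPI communicator,
so there is no message ID range to manage, and blacs_set_.c simply
warns when something asks it to set one.  The Fortran tester appears
to exercise exactly that path.  Below is a rough reconstruction of
the kind of call involved; it is my sketch, not the actual tester
code, and the WHAT = 1 value and the two-element IDRNG array are my
reading of the BLACS_GET/BLACS_SET documentation.

      PROGRAM IDRCHK
*     Sketch of the call that seems to trigger the warning: read the
*     BLACS message ID range and write it straight back.  With the
*     MPI-based BLACS this should be a no-op.
      INTEGER IAM, NPROCS, ICTXT, IDRNG( 2 )
      CALL BLACS_PINFO( IAM, NPROCS )
      CALL BLACS_GET( -1, 0, ICTXT )
      CALL BLACS_GRIDINIT( ICTXT, 'Row-major', 1, NPROCS )
*     WHAT = 1 selects the message ID range; IDRNG holds what I take
*     to be its low and high ends.
      CALL BLACS_GET( ICTXT, 1, IDRNG )
      CALL BLACS_SET( ICTXT, 1, IDRNG )
      CALL BLACS_GRIDEXIT( ICTXT )
      CALL BLACS_EXIT( 0 )
      END

The later collective abort may or may not be related; as noted above,
ignoring the warning and moving on to the ScaLAPACK tests worked here.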

-- 
Steve



