KSP/PC choice

Satish Balay balay at mcs.anl.gov
Tue Jul 24 10:03:44 CDT 2007


On Tue, 24 Jul 2007, owner-petsc-users at mcs.anl.gov wrote:

> Date: Tue, 24 Jul 2007 09:47:37 +0200 (CEST)
> From: Tim Kröger <tim at cevis.uni-bremen.de>
> X-X-Sender: tim at elektrik
> To: petsc-users at mcs.anl.gov
> Subject: BOUNCE petsc-users at mcs.anl.gov:     Message too long (>80000 chars)  

It's best to send installation issues involving configure.log to
petsc-maint at mcs.anl.gov, not to the list.

> Dear Matt,
> 
> On Mon, 23 Jul 2007, Matthew Knepley wrote:
> 
> > 1) Until you run out of memory, I would use sparse direct like MUMPS
> >
> > 2) After that, as long as you have the memory I would increase the
> > GMRES vectors, say to 50 or 100.
> >
> > 3) After that I would try LGMRES which generally converges better on these
> >   problems.
> 
> Thank you very much for your advice.  I tried 2 and 3, but they did 
> not solve the problem.  Using MUMPS fails since I am unable to 
> compile PETSc with MUMPS.  See the attached logfile.  What did I do 
> wrong?
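
[As a side note on 2) and 3) above: those usually amount to the runtime options
-ksp_gmres_restart 50 (or 100) and -ksp_type lgmres, assuming the solver picks up
options from the database via KSPSetFromOptions().]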

--with-cc=gcc --with-fc=gfortran --with-shared=0 --download-mumps=1
--with-mpi-include=/home/tim/archives/packages/mpich-1.2.7/include/
--with-mpi-lib=/home/tim/archives/packages/mpich-1.2.7/lib/libmpich.a
--download-scalapack --download-blacs

The MPI [lib] specification is incomplete. It gives the following errors:

> /home/tim/archives/packages/petsc-2.3.3-p3/conftest.F:4: undefined reference to `mpi_init_'
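
mpi_init_ is the Fortran entry point for MPI_Init, and conftest.F is a Fortran test,
so with only libmpich.a listed the Fortran MPI bindings are not being linked. A
complete explicit spec would need to list the Fortran binding library as well,
roughly like the following (a sketch only, assuming this MPICH-1.2.7 build keeps its
Fortran bindings in a separate libfmpich.a):

--with-mpi-lib=[/home/tim/archives/packages/mpich-1.2.7/lib/libfmpich.a,/home/tim/archives/packages/mpich-1.2.7/lib/libmpich.a]

But it is simpler to let the mpicc/mpif90 wrappers sort this out, as below.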

What do you have for 'mpicc -show' and 'mpif90 -show'?
If they are using gcc and gfortran, then use:

./config/configure.py --with-mpi-dir=/home/tim/archives/packages/mpich-1.2.7
--download-blacs=1 --download-scalapack=1 --download-mumps=1

[This way the mpicc/mpif90 wrappers get used, and they know how to resolve
mpi_init_ correctly.]
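
To check, the wrappers can be run directly; assuming they sit in the bin directory
of the MPICH install used above:

/home/tim/archives/packages/mpich-1.2.7/bin/mpicc -show
/home/tim/archives/packages/mpich-1.2.7/bin/mpif90 -show

Each prints the underlying compiler plus the include and library flags it adds.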

Alternatively, just use the following, so that MPI is also built with
compatible compilers.

--with-cc=gcc --with-fc=gfortran --download-mpich=1
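
For example, combining it with the download flags from before (a sketch only; keep
or drop --with-shared=0 as needed):

./config/configure.py --with-cc=gcc --with-fc=gfortran --with-shared=0
--download-mpich=1 --download-blacs=1 --download-scalapack=1 --download-mumps=1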

Satish



