dmmg_grid_sequence: KSP not functional?

Jed Brown jed at
Thu Apr 23 02:47:58 CDT 2009

On Wed 2009-04-22 21:44, Xuefeng Li wrote:
> It feels like KSP for the coarsest level is either not
> functional or a direct solver, whereas KSP for finer
> levels are iterative solvers. What is the KSP type
> associated with SNES on the coarsest level? Is the above
> observation by design in Petsc?

Yes, this is by design.  The whole point of multigrid is to be able to
propagate information globally in each iteration, while spending the
minimum effort to do it.  Usually this means that the global matrix is
small enough to be easily solved with a direct solver, often
redundantly.  Run with -dmmg_view and look under 'mg_coarse_' to see
what is running, and with '-help | grep mg_coarse' to see what you can
tune on the command line.
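For example (the binary name 'ex5' is just a placeholder for your own
DMMG-based executable), inspecting the coarse solver looks like:

```shell
# Show the assembled solver hierarchy; the coarse-level KSP/PC
# appears under the 'mg_coarse_' prefix in the output.
./ex5 -dmmg_view

# List every runtime option that controls the coarse-level solve.
./ex5 -help | grep mg_coarse
```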

If you do iterations on the coarse level, you waste all your time on the
network because each processor has almost nothing to do.  And if you use
a domain decomposition preconditioner that deteriorates with the number
of subdomains (anything without its *own* coarse-level solve), then you
can't observe optimal scaling.

The PETSc default is to use the 'preonly' KSP and a redundant direct
solve (every process gets the whole coarse-level matrix and solves it
sequentially).  Sometimes it's not feasible to make the coarse level
small enough for such a direct solve to be effective (common with
complex geometry).  In this case, you can use
-mg_coarse_pc_redundant_number and a parallel direct solver (mumps,
superlu_dist, etc) to solve semi-redundantly (e.g. groups of 8
processors work together to factor the coarse-level matrix).
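As a sketch (again with 'ex5' as a stand-in binary, and assuming MUMPS
was built into your PETSc; check '-help | grep mg_coarse' for the exact
option names in your version), a semi-redundant coarse solve over
groups of 8 processes might look like:

```shell
# 64 processes total; the coarse matrix is duplicated across 8 groups,
# and each group of 8 factors it in parallel with MUMPS.
mpiexec -n 64 ./ex5 \
  -mg_coarse_pc_type redundant \
  -mg_coarse_pc_redundant_number 8 \
  -mg_coarse_redundant_pc_factor_mat_solver_package mumps
```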

You are welcome to try an iterative solver on the coarse level,
redundantly or otherwise.
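If you want to experiment with that, something like the following
(option values are illustrative, not a recommendation) replaces the
default 'preonly' coarse KSP with a few iterations of GMRES:

```shell
# Iterative coarse solve: a handful of GMRES iterations with
# block Jacobi, instead of the default redundant direct solve.
./ex5 -mg_coarse_ksp_type gmres \
      -mg_coarse_ksp_max_it 10 \
      -mg_coarse_pc_type bjacobi
```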


More information about the petsc-users mailing list