[petsc-users] ML and -pc_factor_shift_nonzero

Jed Brown jed at 59a2.org
Mon Apr 19 07:12:06 CDT 2010

On Mon, 19 Apr 2010 06:34:08 -0500, Matthew Knepley <knepley at gmail.com> wrote:
> For Schur complement methods, the inner system usually has to be
> solved very accurately.  Are you accelerating a Krylov method for
> A^{-1}, or just using ML itself? I would expect for the same linear
> system tolerance, you get identical convergence for the same system,
> independent of the number of processors.

Matt, run ex48 with ML in parallel and serial: the aggregates are quite
different, and the parallel case doesn't converge with SOR.  Also, from
talking with Ray, Eric Cyr, and John Shadid two weeks ago, they are
currently using ML on coupled Navier-Stokes systems and usually beating
block factorization (i.e. full-space iterations with
approximate-commutator Schur-complement preconditioners (PCD or LSC
variants), which in turn beat full Schur-complement reduction).  They
are using Q1-Q1 with PSPG or Bochev stabilization, and SUPG for
advection.
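For concreteness, a minimal sketch of the options involved in the ex48
comparison above (a PETSc options file; these are the standard names,
but check them against your PETSc version):

```
# Use ML's smoothed aggregation as the preconditioner
-pc_type ml
# SOR smoother on the multigrid levels (the case that stalls in parallel)
-mg_levels_pc_type sor
# Watch the true residual to see the serial/parallel difference
-ksp_monitor_true_residual
```

Running the same options on 1 process and on several should reproduce
the differing aggregates and the parallel convergence failure described
above.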

The trouble is that this method occasionally runs into problems where
convergence completely falls apart, despite the parameter choices not
being extreme.  ML has an "energy minimization" option, which they use
and which is apparently crucial for generating reasonable coarse levels
for these systems.  (PETSc's interface doesn't currently support this;
I'll add it if someone doesn't beat me to it.)
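Until PETSc exposes it, the option is reachable when driving ML directly
through its Trilinos ParameterList interface; the relevant entries look
roughly like this (parameter names from the ML user's guide, so treat
them as an assumption to verify against your ML version):

```
# Enable energy-minimizing prolongator smoothing
energy minimization: enable = true
# Norm used in the minimization (1, 2, or 3 in the ML guide)
energy minimization: type = 2
```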

They always coarsen all the degrees of freedom together, which is not
possible with mixed finite element spaces.  So you have to weigh the
quality of answers produced by a stable approximation (along with the
necessity of making subdomain and coarse-level problems compatible with
inf-sup) against the wiggle room you get with stabilized non-mixed
discretizations, which comes with possible artifacts and significant
divergence error.
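For comparison, the block-factorization alternative mentioned above can
be assembled in PETSc with PCFieldSplit; a rough sketch (option names as
in recent PETSc releases, which may differ from the version current at
the time of this post):

```
# Segregate velocity/pressure blocks and use a Schur-complement factorization
-pc_type fieldsplit
-pc_fieldsplit_type schur
# Precondition the Schur complement with least-squares commutators (LSC)
-fieldsplit_1_pc_type lsc
```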
