[petsc-users] ML and -pc_factor_shift_nonzero

Matthew Knepley knepley at gmail.com
Mon Apr 19 07:23:01 CDT 2010

On Mon, Apr 19, 2010 at 7:12 AM, Jed Brown <jed at 59a2.org> wrote:

> On Mon, 19 Apr 2010 06:34:08 -0500, Matthew Knepley <knepley at gmail.com>
> wrote:
> > For Schur complement methods, the inner system usually has to be
> > solved very accurately.  Are you accelerating a Krylov method for
> > A^{-1}, or just using ML itself? I would expect for the same linear
> > system tolerance, you get identical convergence for the same system,
> > independent of the number of processors.
> Matt, run ex48 with ML in parallel and serial, the aggregates are quite
> different and the parallel case doesn't converge with SOR.  Also, from
> talking with Ray, Eric Cyr, and John Shadid two weeks ago, they are
> currently using ML on coupled Navier-Stokes systems and usually beating
> block factorization (i.e. full-space iterations with
> approximate-commutator Schur-complement preconditioners (PCD or LSC
> variants) which are beating full Schur-complement reduction).  They are
> using Q1-Q1 with PSPG or Bochev stabilization and SUPG for advection.

So, to see if I understand correctly: you are saying that you can get away
with more approximate inner solves if you do not do the full reduction? I know
the theory in the case of Stokes, but can you prove this in a general sense?
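For the archive: the full-reduction versus full-space trade-off can be exercised from the PETSc command line with PCFieldSplit. This is only a sketch; the option names are assumed from recent PETSc releases and may differ in yours:

```
# Sketch (option names assumed from recent PETSc releases):
# full Schur-complement factorization with a tight inner solve on the
# A00 block, using ML to precondition a Krylov method for A^{-1}.
-pc_type fieldsplit
-pc_fieldsplit_type schur
-pc_fieldsplit_schur_fact_type full   # vs. diag/lower/upper variants
-fieldsplit_0_ksp_type gmres
-fieldsplit_0_pc_type ml
-fieldsplit_0_ksp_rtol 1e-10          # full reduction needs accurate inner solves
```

With the `full` factorization the inner tolerance matters much more than with the approximate variants, which is exactly the trade-off under discussion.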

> The trouble is that this method occasionally runs into problems where
> convergence completely falls apart, despite not having extreme parameter
> choices.  ML has an option "energy minimization" which they are using
> (PETSc's interface doesn't currently support this, I'll add it if
> someone doesn't beat me to it) which is apparently crucial for
> generating reasonable coarse levels for these systems.

This sounds like the black magic I expect :)
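For readers of the archive: ML exposes this feature through its Teuchos parameter list as "energy minimization: enable" and "energy minimization: type" (see the ML user's guide). Once wired into PETSc, one would expect an options-database spelling along these lines (hypothetical at the time of this thread; verify against your PETSc version):

```
# Hypothetical sketch of the eventual PETSc spelling; the underlying ML
# parameters are "energy minimization: enable" and
# "energy minimization: type".
-pc_type ml
-pc_ml_EnergyMinimization 2   # minimize prolongator energy in the 2-norm
```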

> They always coarsen all the degrees of freedom together, which is not
> possible with mixed finite element spaces.  So you have to trade the
> quality answers produced by a stable approximation (along with the
> necessity of making subdomain and coarse-level problems compatible with
> inf-sup) against the wiggle-room you get with stabilized non-mixed
> discretizations, with their possible artifacts and significant
> divergence error.

I still maintain that aggregation is a really crappy way to generate coarse
spaces, especially for mixed elements. We should be generating coarse systems
directly and then using a nice (maybe Black-Box) framework for calculating
good interpolation operators.

> Jed

What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener