[petsc-dev] Bordered systems and low-rank corrections
Mark F. Adams
mark.adams at columbia.edu
Sun Nov 6 09:48:04 CST 2011
On Nov 5, 2011, at 6:19 PM, Jed Brown wrote:
> On Sat, Nov 5, 2011 at 15:23, Mark F. Adams <mark.adams at columbia.edu> wrote:
> Uzawa is a great algorithm, very popular, dead simple (two lines), embarrassingly robust and fast.
>
> It's only fast if the Schur complement is well-conditioned. This is true for some popular problems, but I'm less gushing over the algorithm. Bramble et al use the notation
>
> [A B'] [X]   [F]
> [B 0 ] [Y] = [G]
>
> Uzawa is algebraically equivalent to using the factorization
>
> [1        0] [A  B']
> [B A^{-1} 1] [0  -S]
>
> where S = B A^{-1} B' is the Schur complement, and using Richardson iteration (possibly preconditioned) to solve completely with S. This should make it clear why Uzawa only converges quickly if S is well-conditioned (or you have a very good preconditioner). Of course, we observe that Richardson rarely beats a Krylov method for other problems, so it makes sense to use a Krylov method when solving with S. This is sometimes called "accelerated Uzawa", or just Schur Complement Reduction.
>
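The Uzawa/Richardson equivalence above is easy to sketch in a few lines of NumPy. This is a toy dense problem with exact inner solves for A; the matrix sizes, random seed, and step size are illustrative assumptions, not anything from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3

# Toy saddle-point system [A B'; B 0] [x; y] = [f; g]
# with A symmetric positive definite and B full rank.
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)          # SPD primal block
B = rng.standard_normal((m, n))      # full-rank constraint block
f = rng.standard_normal(n)
g = rng.standard_normal(m)

# Reference solution from the assembled block system.
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
ref = np.linalg.solve(K, np.concatenate([f, g]))
x_ref, y_ref = ref[:n], ref[n:]

# Uzawa = Richardson on the Schur complement S = B A^{-1} B'.
S = B @ np.linalg.solve(A, B.T)
omega = 1.0 / np.linalg.norm(S, 2)   # safe step: 0 < omega < 2/lambda_max(S)

y = np.zeros(m)
for _ in range(2000):
    x = np.linalg.solve(A, f - B.T @ y)   # exact inner solve with A
    y = y + omega * (B @ x - g)           # Richardson update on the dual variable

print(np.linalg.norm(x - x_ref), np.linalg.norm(y - y_ref))
```

The two lines inside the loop are the whole algorithm; the convergence rate is 1 - 1/kappa(S), which is why conditioning of S is everything.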
Let me back up and clarify some points:
If Richardson ever beats GMRES then you have a code bug. If your system is very well conditioned (e.g., reducing the residual by an order of magnitude each iteration, as I see with unpreconditioned Uzawa and with multigrid when it's working well), then Krylov does not help much in my experience. Even when it does help a little, I think you would want to look at the error norm to see how much it is really helping, because Krylov concentrates its optimization on the residual.
I see you did ask about "-pc_fieldsplit_type schur and Richardson" in your email. I was not trying to imply that (nontrivial) Krylov is never useful. I've had people ask me this at talks: 'why are you using Richardson when everyone knows that CG is cooler'. It would be nice to just use CG for social reasons if nothing else, and it's cheap compared to the preconditioner for A anyway. And I think the Stokes Schur complement (kind of like the contact problems that I have experience with) is very well conditioned -- Uzawa always converged at about one digit per iteration for me. That is pretty fast for unpreconditioned Richardson, and it's what I've seen on many test problems in solid mechanics contact.
> An alternative is to apply A^{-1} inexactly, perhaps only apply a preconditioner for S (instead of doing inner iterations) and run a Krylov method in the full space.
This seems like a good thing to have in the toolbox. The Bramble paper has this (eq. 2.5) as their ultimate linear algorithm as I recall, and I've never run with just the two preconditioners in the loop (i.e., a very approximate solve for A). In this case the outer iteration would look more like your solve for A -- Krylov is generally cheap compared to the other costs in each iteration, so it's something that you definitely want available.
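For concreteness, here is roughly how that setup looks as PETSc options with PCFIELDSPLIT: an outer Krylov method on the full space with inexact inner applications. The specific inner PC choices (gamg, jacobi) are illustrative assumptions, and exact option spellings may vary by PETSc version:

```shell
# Outer Krylov on the full space; FGMRES tolerates the variable
# preconditioner that comes from inexact inner solves.
-ksp_type fgmres
-pc_type fieldsplit
-pc_fieldsplit_type schur
-pc_fieldsplit_schur_fact_type upper   # drop the lower-triangular factor
# Cheap, approximate application of A^{-1} (one PC application, no inner Krylov):
-fieldsplit_0_ksp_type preonly
-fieldsplit_0_pc_type gamg
# Only a preconditioner for S, no inner iterations:
-fieldsplit_1_ksp_type preonly
-fieldsplit_1_pc_type jacobi
```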
> If you do this, the lower-triangular part of the factorization can usually be dropped because it does not affect the eigenvalues (all eigenvalues are 1, with minimal polynomial of degree 2). This is the philosophy of the "block preconditioners" crowd. It often wins over SCR when the fields are well-scaled, but SCR is more robust to radically different scales for the primal and dual variables.
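The eigenvalue claim quoted above can be checked numerically: preconditioning with the exact upper-triangular factor P leaves P^{-1}K with all eigenvalues 1 and minimal polynomial of degree 2, i.e. (P^{-1}K - I)^2 = 0. A toy NumPy check (random SPD A and full-rank B are illustrative; note that with S = B A^{-1} B' the (2,2) block of the upper factor carries a minus sign):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)           # SPD primal block
B = rng.standard_normal((m, n))       # full-rank constraint block

K = np.block([[A, B.T], [B, np.zeros((m, m))]])
S = B @ np.linalg.solve(A, B.T)       # Schur complement B A^{-1} B'

# Upper block-triangular preconditioner (lower-triangular factor dropped).
P = np.block([[A, B.T], [np.zeros((m, n)), -S]])
T = np.linalg.solve(P, K)             # preconditioned operator P^{-1} K

I = np.eye(n + m)
print(np.max(np.abs(np.linalg.eigvals(T) - 1.0)))  # all eigenvalues near 1
print(np.linalg.norm((T - I) @ (T - I)))           # (T - I)^2 vanishes
```

Because T is defective (size-2 Jordan blocks), the computed eigenvalues wander from 1 at roughly the square root of machine precision, but the (T - I)^2 residual is clean roundoff.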
Humm, I would not like (i.e., hate) a method that is sensitive to scaling. E.g., my ex56 runs the solver multiple times with large rescalings of the problem just to test that the solver is invariant to scaling -- so this property is in my regression tests. I would question whether this sensitivity to scaling (e.g., units) cannot be fixed for "block preconditioners" -- especially if I'm in the "block preconditioners" crowd :)