Preconditioning for saddle point problems

Boyce Griffith griffith at cims.nyu.edu
Tue Apr 29 12:28:34 CDT 2008


Hi, Matt et al. --

Do people ever use standard projection methods as preconditioners for 
these kinds of problems?

I have been playing around with doing this in the context of a staggered 
grid (MAC) finite difference scheme.  It is probably not much of a 
surprise, but for problems where an exact projection method is actually 
an exact Stokes solver (e.g., in the case of periodic boundary 
conditions), one can obtain convergence with a single application of the 
projection preconditioner when it is paired up with FGMRES.  I'm still 
working on implementing physical boundaries and local mesh refinement 
for this formulation, so it isn't clear how well this approach works for 
less trivial situations.
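A minimal sketch of how such a projection sweep might be hooked in as a
preconditioner in PETSc: wrap it in a PCSHELL and pair it with FGMRES, which
tolerates an inexact or variable preconditioner. ApplyProjection() and its
context are hypothetical placeholders for the MAC projection solve, error
checking is omitted, and the calling sequences follow current PETSc (some
signatures differed in 2008).

    #include <petscksp.h>

    typedef struct {
      KSP poisson;   /* pressure-Poisson solver used by the projection sweep */
      /* staggered-grid data, scratch vectors, ... (hypothetical) */
    } ProjectionCtx;

    static PetscErrorCode ApplyProjection(PC pc, Vec r, Vec z)
    {
      ProjectionCtx *ctx;
      PCShellGetContext(pc, (void **)&ctx);
      /* z <- approximate Stokes solve of r by one projection sweep:
         provisional velocity solve, pressure-Poisson solve (ctx->poisson),
         then velocity/pressure correction. */
      return 0;
    }

    /* ... in the solver setup, with A (the MAC Stokes operator), b, x ready ... */
    KSP           ksp;
    PC            pc;
    ProjectionCtx ctx;
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetType(ksp, KSPFGMRES);          /* flexible GMRES */
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCSHELL);
    PCShellSetContext(pc, &ctx);
    PCShellSetApply(pc, ApplyProjection);
    KSPSolve(ksp, b, x);

In the periodic case described above, the sweep is an exact Stokes solve, so
FGMRES with this preconditioner should converge in a single iteration.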

Thanks,

-- Boyce

Matthew Knepley wrote:
> 1) I believe the Wathen-Elman-Silvester stuff is the best off-the-shelf
>     option out there. I love the review
> 
>      A.C. de Niet and F.W. Wubs, "Two preconditioners for saddle point
>      problems in fluid flows", Int. J. Numer. Meth. Fluids 54 (2007), 355-377
> 
> 2) Note in there that Augmented Lagrangian preconditioning (a la Axelsson)
>     works even better, but the system is harder to invert. I like these
>     because you only need one field in the formulation, instead of having a
>     mixed system. This is detailed in Brenner & Scott (the Iterated Penalty
>     method).
> 
> 3) If you want a mixed system, there is new code in PETSc (PCFieldSplit) to
>     do exactly what you want, and it works automatically with DAs. If you do
>     it by hand, you explicitly provide the IS for each field (see the sketch
>     below).
> 
>   Matt
> 
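A minimal sketch of the PCFieldSplit route mentioned in 3), assuming the
momentum and continuity index sets is_u and is_p (placeholder names) have
already been built; the split names, the Schur option, and the calling
sequences follow current PETSc and may differ from what was available in 2008.

    /* Sketch: field-split preconditioning driven by user-provided index sets. */
    KSP ksp;
    PC  pc;
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);                  /* A = assembled saddle-point matrix */
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCFIELDSPLIT);
    PCFieldSplitSetIS(pc, "u", is_u);            /* momentum unknowns   */
    PCFieldSplitSetIS(pc, "p", is_p);            /* continuity unknowns */
    PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR); /* or additive/multiplicative */
    KSPSetFromOptions(ksp);
    KSPSolve(ksp, b, x);

With a DA-based discretization the splits can instead be deduced from the DA's
fields, which is the automatic path mentioned above.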
> On Tue, Apr 29, 2008 at 10:14 AM, Jed Brown <jed at 59a2.org> wrote:
>> On Tue 2008-04-29 10:44, Lisandro Dalcin wrote:
>>  > Well, I've worked hard on similar methods, but for incompressible NS
>>  > equations (pressure-convection preconditioners, Elman et al.). I have
>>  > temporarily abandoned this research, as I was not able to get decent
>>  > results. However, for Stokes flow it does seem to work, though I never
>>  > studied this seriously.
>>
>>  My experiments with the Stokes problem show that it takes about four times as
>>  long to solve the indefinite Stokes system as it takes to solve a Poisson
>>  problem with the same number of degrees of freedom.  For instance, in 3D with
>>  half a million degrees of freedom, the Stokes problem takes 2 minutes on my
>>  laptop while the Poisson problem takes 30 seconds (both are using algebraic
>>  multigrid as the preconditioner).  Note that these tests are for a Chebyshev
>>  spectral method where the system matrix (never formed, since it is dense) is
>>  applied via DCT, but a low-order finite difference or finite element
>>  approximation on the collocation nodes is used to obtain a sparse matrix with
>>  equivalent spectral properties, to which AMG is applied.  With a finite
>>  difference discretization (src/ksp/ksp/examples/tutorials/ex22.c) the same-sized
>>  3D Poisson problem takes 13 seconds with AMG and 8 with geometric multigrid.
>>  This is not a surprise since the conditioning of the spectral system is much
>>  worse, O(p^4) versus O(n^2), because the collocation nodes are quadratically
>>  clustered.
>>
>>  I've read Elman et al. 2002 ``Performance and analysis of saddle point
>>  preconditioners for the discrete steady-state Navier-Stokes equations'' but I
>>  haven't implemented anything there since I'm mostly interested in slow flow.
>>  Did your method work well for the Stokes problem, but poorly for NS?  I found
>>  that performance was quite dependent on the number of iterations at each level
>>  and the strength of the viscous preconditioner.  I thought my approach was
>>  completely naïve, but it seems to work reasonably well.  Certainly it is much
>>  faster than SPAI/ParaSails, which is the alternative.
>>
>>
>>  > Let me describe the degree of abstraction I could achieve. In my base
>>  > FEM code, I have a global [F, G; D, C] matrix (I use stabilized
>>  > methods) built from standard linear elements and partitioned across
>>  > processors in a way inherited from the mesh partitioner (METIS). So the
>>  > F, G, D, C entries are all 'interleaved' at each proc.
>>  >
>>  > In order to extract the blocks as parallel matrices from the global
>>  > saddle-point parallel matrix, I used MatGetSubMatrix; for this I
>>  > needed to build two index sets, one for the momentum eqs and one for
>>  > the continuity eqs, local at each proc but in global numbering. Those
>>  > index sets are the only input required (apart from the global matrix)
>>  > to build the preconditioner.
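A minimal sketch of that extraction, assuming the two index sets is_u and is_p
are already built (local rows per process, global numbering, as described
above); the routine is named MatCreateSubMatrix() in current PETSc and was
MatGetSubMatrix() in older releases.

    /* Sketch: pull the F, G, D, C blocks out of the assembled saddle-point
       matrix K = [F, G; D, C] using the momentum/continuity index sets. */
    Mat K, F, G, D, C;
    IS  is_u, is_p;
    MatCreateSubMatrix(K, is_u, is_u, MAT_INITIAL_MATRIX, &F);
    MatCreateSubMatrix(K, is_u, is_p, MAT_INITIAL_MATRIX, &G);
    MatCreateSubMatrix(K, is_p, is_u, MAT_INITIAL_MATRIX, &D);
    MatCreateSubMatrix(K, is_p, is_p, MAT_INITIAL_MATRIX, &C);
    /* F, G, D, C are parallel matrices that can now be handed to the
       block preconditioner. */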
>>
>>  This seems like the right approach.  I am extending my collocation approach to
>>  an hp-element version, so the code you wrote might be very helpful.  How
>>  difficult would it be to extend to the case where the matrices could be
>>  MatShell?  That is, to form the preconditioners, we only need entries for the
>>  approximations S' and F' to S and F, respectively; the rest can be MatShell.
>>  In my case, F' is a finite difference or Q1 finite element discretization on
>>  the collocation nodes and S' is the mass matrix (which is the identity for
>>  collocation).
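A minimal sketch of that split between the applied operator and the
preconditioning matrix: the operator is a MatShell whose action is computed
spectrally, while KSPSetOperators() is given the assembled low-order matrix
(Fprime here, a placeholder name) from which the preconditioner is built.
Calling sequences follow current PETSc; older releases pass an extra
MatStructure argument to KSPSetOperators().

    typedef struct {
      void *plans;   /* DCT plans, variable coefficients, grid info (hypothetical) */
    } StokesCtx;

    static PetscErrorCode ApplySpectralStokes(Mat A, Vec x, Vec y)
    {
      StokesCtx *ctx;
      MatShellGetContext(A, (void **)&ctx);
      /* y <- A*x evaluated via DCTs; the matrix itself is never assembled. */
      return 0;
    }

    /* ... with ksp, Fprime, and the local/global sizes m, n, M, N set elsewhere ... */
    Mat       A;
    StokesCtx ctx;
    MatCreateShell(PETSC_COMM_WORLD, m, n, M, N, &ctx, &A);
    MatShellSetOperation(A, MATOP_MULT, (void (*)(void))ApplySpectralStokes);
    KSPSetOperators(ksp, A, Fprime);   /* Krylov applies A; the PC is built from Fprime */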
>>
>>  Would it be useful for me to strip my code down to make an example?  It's not
>>  parallel since it does DCTs of the entire domain, but it is a spectrally
>>  accurate, fully iterative solver for the 3D Stokes problem with nonlinear
>>  rheology.  I certainly learned a lot about PETSc while writing it, and there
>>  aren't any examples that do something similar.
>>
>>  Jed
>>