KSP/PC choice

Lisandro Dalcin dalcinl at gmail.com
Mon Jul 23 11:30:31 CDT 2007


On 7/23/07, Tim Kröger <tim at cevis.uni-bremen.de> wrote:
> Monolithic.  (I am not quite sure about this word but I assume it
> means that I don't try to decouple the two equations.)

That's the meaning for me (not sure if the word is completely correct).

> > What kind of stabilization for advection and pressure are you using?
>
> Streamline diffusion (for advection).  I am not aware of the
> requirement to stabilize the pressure as well.

Well, you need to stabilize the pressure if your finite element space
is not div-stable, i.e., does not satisfy the inf-sup (LBB) condition;
this happens, for example, when using linear, equal-order
interpolation for both velocity and pressure. Presumably you are using
a stable FE pair, then.

> Would you recommend to use a fractional step method?

No. In my opinion, (traditional) fractional step methods can be
problematic in many ways.

> Would you
> recommend a different stabilization method?  If you have any
> suggestion, please let me know.

I'm not exactly sure what you mean by 'streamline diffusion' (the term
is rather general). But we normally use non div-stable FE pairs
formulated with SUPG/PSPG stabilization (the first for advection, the
second for pressure), as proposed by Tezduyar in many publications.
This method is residual-based (thus consistent). In particular, the
stabilization parameters take into account the time step and the
incompressibility constraint. This not only leads to good solutions,
but also to far better conditioned linear systems.
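
For concreteness, a sketch of what that stabilized weak form looks
like, in the spirit of Tezduyar's papers (the exact definitions of the
tau's vary between publications, so take this as schematic, not as a
quote):

  B(u_h, p_h; v, q)
    + \sum_e \int_e [ \tau_{SUPG} (u_h \cdot \nabla) v
                    + \tau_{PSPG} \nabla q ] \cdot R_M(u_h, p_h)
    + \sum_e \int_e \tau_{LSIC} (\nabla \cdot v)(\nabla \cdot u_h)
    = F(v, q)

Here B and F are the standard Galerkin forms and R_M is the residual
of the momentum equation, while the tau's depend on the element size,
the local velocity, the viscosity and the time step. Since the extra
terms are proportional to residuals, they vanish for the exact
solution, which is what 'consistent' means above.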

However, this will perhaps not solve all your problems. The linear
system has a saddle-point structure, and that is the source of most of
the trouble in solving it. Specialized preconditioners are usually
needed, among them:

* EBE/CEBE: (clustered) element-by-element preconditioners; you can
find references in works by Hughes and Tezduyar. I've never tried
them, as they seem to require working at the element level.

* Block preconditioners, as suggested by Elman, Kay, Loghin, and
Wathen. These methods decouple the equations at the PC step (see the
sketch right after this list). I am still working on this, as
convergence deteriorates with increasing Reynolds number.

* Finally, we have our own approach. It is not completely scalable,
but it usually works. It amounts to iterating only on the degrees of
freedom on the interface between processors. This is implemented in a
completely algebraic manner, but relies on using a matrix of type
MATIS (a rough assembly sketch follows below). The code is included in
the distribution of my Python wrappers for PETSc (petsc4py), but it is
written in C, so it is usable in other scenarios. I've never tried to
get this into PETSc itself, because I am not sure how much it would
help general users (in theory, iterating on the interface Schur
complement should be somewhat similar to ASM).
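
As an aside: newer PETSc releases provide PCFIELDSPLIT for exactly
this family of block preconditioners (note this is the generic PETSc
facility, not the specific Elman/Kay/Wathen variants). A minimal
petsc4py sketch, assuming you already have the coupled operator A,
vectors x and b, and index sets is_u and is_p marking the velocity and
pressure dofs (all of these names are placeholders):

  from petsc4py import PETSc

  ksp = PETSc.KSP().create(PETSc.COMM_WORLD)
  ksp.setOperators(A)
  ksp.setType(PETSc.KSP.Type.FGMRES)  # flexible, since the PC is itself iterative

  pc = ksp.getPC()
  pc.setType(PETSc.PC.Type.FIELDSPLIT)
  pc.setFieldSplitIS(('u', is_u), ('p', is_p))
  # block-factored form built around the pressure Schur complement
  pc.setFieldSplitType(PETSc.PC.CompositeType.SCHUR)

  ksp.setFromOptions()  # allows -fieldsplit_* tuning from the command line
  ksp.solve(b, x)

How well the Schur complement is approximated is precisely where the
Reynolds number enters, which matches the convergence degradation I
mentioned above.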

If you are interested in trying my solution, I can help you. But I
cannot promise you will get good performance, especially on big 3D
problems. Anyway, we are currently using it to simulate the flow
around a racing car (Re = 16e6) with about 600K nodes on 30 processors.
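
To illustrate what the MATIS part means on the PETSc side, here is a
rough petsc4py sketch of assembling such an unassembled,
subdomain-wise matrix (all names are placeholders; preallocation and
error checking omitted):

  from petsc4py import PETSc

  # local-to-global numbering of this process' subdomain dofs
  lgmap = PETSc.LGMap().create(local_to_global, comm=PETSc.COMM_WORLD)

  A = PETSc.Mat().create(PETSc.COMM_WORLD)
  A.setSizes(((n_local, N), (n_local, N)))  # (owned, global) sizes
  A.setType(PETSc.Mat.Type.IS)  # each process keeps only its subdomain matrix
  A.setLGMap(lgmap)
  A.setUp()

  # element matrices are added through the *local* numbering;
  # interface rows are never summed across processes
  for rows, vals in element_contributions:
      A.setValuesLocal(rows, rows, vals, addv=PETSc.InsertMode.ADD_VALUES)
  A.assemblyBegin(); A.assemblyEnd()

The point is that the interface Schur complement then remains
implicitly available, which is what the interface iteration described
above exploits.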

Regards,

-- 
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594



