[petsc-users] Field Split PC for Fully-Coupled 3d stationary incompressible Navier-Stokes Solution Algorithm

Fabian Gabel gabel.fabian at gmail.com
Thu Feb 5 18:15:37 CST 2015


On Thu, 2015-02-05 at 16:45 -0600, Matthew Knepley wrote:
> On Thu, Feb 5, 2015 at 4:15 PM, Fabian Gabel <gabel.fabian at gmail.com>
> wrote:
>         Thank you for your feedback.
>         
>         >         -coupledsolve_pc_type fieldsplit
>         >         -coupledsolve_pc_fieldsplit_0_fields 0,1,2
>         >         -coupledsolve_pc_fieldsplit_1_fields 3
>         >         -coupledsolve_pc_fieldsplit_type schur
>         >         -coupledsolve_pc_fieldsplit_block_size 4
>         >         -coupledsolve_fieldsplit_0_ksp_converged_reason
>         >         -coupledsolve_fieldsplit_1_ksp_converged_reason
>         >         -coupledsolve_fieldsplit_0_ksp_type gmres
>         >         -coupledsolve_fieldsplit_0_pc_type fieldsplit
>         >         -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3
>         >         -coupledsolve_fieldsplit_0_fieldsplit_0_pc_type ml
>         >         -coupledsolve_fieldsplit_0_fieldsplit_1_pc_type ml
>         >         -coupledsolve_fieldsplit_0_fieldsplit_2_pc_type ml
>         >
>         >         Is it normal that I have to explicitly specify the
>         >         block size for each fieldsplit?
>         >
>         >
>         > No. You should be able to just specify
>         >
>         >
>         > -coupledsolve_fieldsplit_ksp_converged_reason
>         > -coupledsolve_fieldsplit_0_fieldsplit_pc_type ml
>         >
>         >
>         > and the same options will be applied to all splits (0,1,2).
>         >
>         > Does this functionality not work?
>         >
>         >
>         It does work indeed, but what I was actually referring to was
>         the use of
>         
>         -coupledsolve_pc_fieldsplit_block_size 4
>         -coupledsolve_fieldsplit_0_pc_fieldsplit_block_size 3
>         
>         Without them, I get the error message
>         
>         [0]PETSC ERROR: PCFieldSplitSetDefaults() line 468
>         in /work/build/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c
>         Unhandled case, must have at least two fields, not 1
>         
>         I thought PETSc would already know what I want to do, since I
>         initialized the fieldsplit with
>         
>         CALL PCFieldSplitSetIS(PRECON,PETSC_NULL_CHARACTER,ISU,IERR)
>         
>         etc.
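
For reference, the "etc." above stands for analogous calls for the other
three fields. A rough C sketch (lowercase counterparts of my Fortran
variables PRECON, ISU, ...; the index sets isu, isv, isw, isp address the
u, v, w and p unknowns and are assumed to have been created already):

  /* Register the four scalar fields of the coupled system by index set;
     precon is the PC obtained from the coupled KSP via KSPGetPC().
     Passing NULL as split name lets PETSc number the splits 0..3 in the
     order they are added. */
  ierr = PCFieldSplitSetIS(precon, NULL, isu);CHKERRQ(ierr); /* u */
  ierr = PCFieldSplitSetIS(precon, NULL, isv);CHKERRQ(ierr); /* v */
  ierr = PCFieldSplitSetIS(precon, NULL, isw);CHKERRQ(ierr); /* w */
  ierr = PCFieldSplitSetIS(precon, NULL, isp);CHKERRQ(ierr); /* p */
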
>         >
>         >
>         
>         >         Are there any guidelines to follow that I could use
>         >         to avoid taking wild guesses?
>         >
>         >
>         > Sure. There are lots of papers published on how to construct
>         > robust block preconditioners for saddle point problems
>         > arising from Navier-Stokes.
>         > I would start by looking at this book:
>         >
>         >
>         >   Finite Elements and Fast Iterative Solvers
>         >
>         >   Howard Elman, David Silvester and Andy Wathen
>         >
>         >   Oxford University Press
>         >
>         >   See chapters 6 and 8.
>         >
>         As a matter of fact, I spent the last few days digging through
>         papers on preconditioners and approximate Schur complements,
>         and the names Elman and Silvester have come up quite often.
>         
>         The problem I experience is that, except for one publication,
>         all the other ones I checked deal with finite element
>         formulations. Only
>         
>         Klaij, C. and Vuik, C. SIMPLE-type preconditioners for
>         cell-centered,
>         colocated finite volume discretization of incompressible
>         Reynolds-averaged Navier–Stokes equations
>         
>         presented an approach for finite volume methods. Furthermore,
>         a lot of the literature deals with saddle point problems,
>         since the linear system from stable finite element
>         formulations comes with a 0 block as pressure matrix. I'm not
>         sure how I can benefit from the work that has already been
>         done for finite element methods, since I neither use finite
>         elements nor am I trying to solve a saddle point problem (?).
> 
> 
> I believe the operator estimates for FV are very similar to first
> order FEM, 

Ok, so you would suggest just discretizing the operators differently
(FVM instead of an FEM discretization)?

> and
> I believe that you do have a saddle-point system in that there are
> both positive 
> and negative eigenvalues.

A first test on a small system in Matlab shows that my system matrix is
positive semi-definite, but I am not sure how this result could be
derived in general from the discretization approach I used.
> 
> 
>   Thanks,
> 
> 
>      Matt
>  
>         >
>         >         > PETSc has some support to generate approximate
>         >         > pressure Schur complements for you, but these
>         >         > will not be as good as the ones specifically
>         >         > constructed for your particular discretization.
>         >
>         >         I came across a tutorial
>         >         (/snes/examples/tutorials/ex70.c), which shows
>         >         two different approaches:
>         >
>         >         1- provide a preconditioner \hat{S}p for the
>         >         approximation of the true Schur complement
>         >
>         >         2- use another matrix (in this case it is the matrix
>         >         used for constructing the preconditioner in the
>         >         former approach) as a new approximation of the
>         >         Schur complement.
>         >
>         >         Speaking in terms of the PETSc manual (p. 87) and
>         >         looking at the factorization of the Schur field
>         >         split preconditioner, approach 1 sets \hat{S}p,
>         >         while approach 2 furthermore sets \hat{S}. Is this
>         >         correct?
>         >
>         >
>         >
>         > No this is not correct.
>         > \hat{S} is always constructed by PETSc as
>         >   \hat{S} = A11 - A10 KSP(A00) A01
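
(For reference, the factorization I had in mind from the manual, written
in the same notation:

   [ A00 A01 ]   [ I            0 ] [ A00 0 ] [ I  inv(A00) A01 ]
   [ A10 A11 ] = [ A10 inv(A00) I ] [ 0   S ] [ 0       I       ]

   with S = A11 - A10 inv(A00) A01.

As far as I understand, \hat{S} approximates S by replacing inv(A00)
with KSP(A00), and \hat{S}p only enters as the matrix from which the
preconditioner for the inner solves with \hat{S} is built.)
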
>         
>         But then what happens in this line from the tutorial
>         /snes/examples/tutorials/ex70.c?
>         
>         ierr = KSPSetOperators(subksp[1], s->myS, s->myS);CHKERRQ(ierr);
>         
>         I think the approximate Schur complement (a matrix of type
>         Schur) gets replaced by an explicitly formed matrix (myS, of
>         type MPIAIJ).
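
If I read ex70 correctly, the pattern behind that line is roughly the
following sketch (pc being the fieldsplit PC and myS the explicitly
assembled approximation of the Schur complement):

  /* After the preconditioner has been set up, fetch the sub-KSPs of
     the Schur fieldsplit and replace the operators of the second one
     (the Schur complement solve) with the assembled matrix myS. */
  KSP      *subksp;
  PetscInt  nsplits;
  ierr = PCFieldSplitGetSubKSP(pc, &nsplits, &subksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(subksp[1], myS, myS);CHKERRQ(ierr);
  ierr = PetscFree(subksp);CHKERRQ(ierr);
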
>         >
>         > You have two choices in how to define the preconditioner
>         > \hat{S_p}:
>         >
>         > [1] Assemble your own matrix (as is done in ex70)
>         >
>         > [2] Let PETSc build one. PETSc does this according to
>         >
>         >   \hat{S_p} = A11 - A10 inv(diag(A00)) A01
>         >
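
If I understand the manual correctly, choice [2] should correspond to
the runtime option (with my prefix)

        -coupledsolve_pc_fieldsplit_schur_precondition selfp

while choice [1] would be selected with "user" instead of "selfp",
together with a call to PCFieldSplitSetSchurPre(), at least in recent
PETSc versions. Please correct me if I got that wrong.
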
>         Regards,
>         Fabian
>         >
>         >
>         
>         
> 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener



