[petsc-users] Scalable Solver for Incompressible Flow

Sebastian Blauth sebastian.blauth at itwm.fraunhofer.de
Tue May 2 12:26:42 CDT 2023



On 02.05.2023 15:12, Matthew Knepley wrote:
> On Tue, May 2, 2023 at 9:07 AM Blauth, Sebastian 
> <sebastian.blauth at itwm.fraunhofer.de> wrote:
> 
>     Hello,
> 
>     I am having a problem using / configuring PETSc to obtain a scalable
>     solver for the incompressible Navier-Stokes equations. I am
>     discretizing the equations with FEM (using the library FEniCS) and
>     the stable P2-P1 Taylor-Hood elements. I have read and tried a lot
>     regarding preconditioners for incompressible Navier-Stokes, and I am
>     aware that this is very much an active research field, but maybe I
>     can get some hints / tips.
> 
>     I am interested in solving large-scale 3D problems, but I cannot
>     even set up a scalable 2D solver for these problems. All of my
>     current approaches use a Schur complement method, but I cannot get
>     a “good” preconditioner for the Schur complement matrix. For the
>     velocity block, I am using the AMG provided by hypre (which seems
>     to work fine and is most likely not the problem).
> 
>     To test the solver, I am using a simple 2D channel flow problem with
>     do-nothing conditions at the outlet.
> 
>     I am facing the following difficulties at the moment:
> 
>     - First, I am having trouble using
>     -pc_fieldsplit_schur_precondition selfp. With this setup, the cost
>     of solving the Schur complement part of the fieldsplit
>     preconditioner (approximately) increases when the mesh is refined.
>     I am using the following options for this setup (note that I use
>     exact solves for the velocity part to debug, but using, e.g., gmres
>     with hypre boomeramg instead reaches a given tolerance in a number
>     of iterations that is independent of the mesh).
> 
> 
> The diagonal of the momentum block is a bad preconditioner for the Schur 
> complement, because S is spectrally equivalent to the mass matrix. You 
> should build the mass matrix and use that as the preconditioning matrix 
> for the Schur part. The FEniCS people can show you how to do that. This 
> will provide mesh-independent convergence (you can see me doing this in 
> SNES ex69).
> 
>    Thanks,
> 
>       Matt

I agree with your comment for the Stokes equations - for these, I have 
already tried and used the pressure mass matrix as part of an (additive) 
block preconditioner, and it gave mesh-independent results.
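
For reference, here is roughly how I supply the pressure mass matrix 
from legacy FEniCS through petsc4py (only a sketch: W and ksp are 
placeholders for my mixed Taylor-Hood space and the outer solver, and 
the dof ordering of the assembled matrix has to match the pressure 
index set of the fieldsplit):

    from dolfin import TrialFunction, TestFunction, assemble, as_backend_type, dx
    from petsc4py import PETSc

    Q = W.sub(1).collapse()                  # pressure subspace (placeholder W)
    p, q = TrialFunction(Q), TestFunction(Q)
    # assemble the pressure mass matrix and extract the underlying PETSc Mat
    Mp = as_backend_type(assemble(p * q * dx)).mat()

    pc = ksp.getPC()                         # ksp: the outer fgmres solver (placeholder)
    # use Mp (instead of selfp) to build the Schur complement preconditioner
    pc.setFieldSplitSchurPreType(PETSc.PC.SchurPreType.USER, Mp)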

However, for the Navier-Stokes equations, is the Schur complement really 
spectrally equivalent to the pressure mass matrix? And even if it is, 
the convergence is only good for small Reynolds numbers; for moderately 
high ones, it really deteriorates. This is why I am trying to make 
-pc_fieldsplit_schur_precondition selfp work better (this is, if I 
understand it correctly, a SIMPLE-type preconditioner).
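
(If I read the PETSc documentation correctly, selfp forms the 
approximation Sp = A10 * inv(diag(A00)) * A01, which is the same Schur 
complement approximation that SIMPLE uses.)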

Best regards,
Sebastian


> 
>          -ksp_type fgmres
>          -ksp_rtol 1e-6
>          -ksp_atol 1e-30
>          -pc_type fieldsplit
>          -pc_fieldsplit_type schur
>          -pc_fieldsplit_schur_fact_type full
>          -pc_fieldsplit_schur_precondition selfp
>          -fieldsplit_0_ksp_type preonly
>          -fieldsplit_0_pc_type lu
>          -fieldsplit_1_ksp_type gmres
>          -fieldsplit_1_ksp_pc_side right
>          -fieldsplit_1_ksp_max_it 1000
>          -fieldsplit_1_ksp_rtol 1e-1
>          -fieldsplit_1_ksp_atol 1e-30
>          -fieldsplit_1_pc_type lu
>          -fieldsplit_1_ksp_converged_reason
>          -ksp_converged_reason
> 
>     Note that I use direct solvers for the subproblems to get an
>     “ideal” convergence. Even if I replace the direct solvers with
>     boomeramg, the behavior is the same and the number of iterations
>     does not change much.
> 
>     In particular, I get the following behavior:
> 
>     For an 8x8 mesh, I need, on average, 25 iterations to solve
>     fieldsplit_1
> 
>     For a 16x16 mesh, I need 40 iterations
> 
>     For a 32x32 mesh, I need 70 iterations
> 
>     For a 64x64 mesh, I need 100 iterations
> 
>     However, the outer fgmres always requires, as expected, the same
>     number of iterations to reach convergence.
> 
>     I do understand that the selfp preconditioner for the Schur
>     complement is expected to deteriorate as the Reynolds number
>     increases and the problem becomes more convective in nature, but I
>     had hoped that I could at least get a preconditioner out of it that
>     is scalable with respect to the mesh size. Are there any tips on
>     how to achieve this?
> 
>     My second problem concerns the LSC preconditioner. When I use it,
>     again either with exact solves of the linear subproblems or with
>     boomeramg, I do not get a solver that is scalable with respect to
>     the mesh size. On the contrary, here the number of iterations
>     required to solve fieldsplit_1 to a fixed relative tolerance seems
>     to grow linearly with the problem size. For this problem, I suspect
>     that the issue lies in the scaling of the LSC preconditioner
>     matrices (in the book of Elman, Silvester and Wathen, the matrices
>     are scaled with the inverse of the diagonal of the velocity mass
>     matrix). Is it possible to achieve this with PETSc? I started
>     experimenting with supplying the velocity mass matrix as the
>     preconditioner matrix and using “use_amat”, but I am not sure
>     where / how to do it this way.
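> 
>     A sketch of what I have been trying via petsc4py (hypothetical:
>     L_scaled and Lp_scaled stand for Laplacian-type matrices that I
>     would assemble myself, scaled by the inverse of the diagonal of
>     the velocity mass matrix; as far as I can tell, PCLSC picks up
>     matrices composed on its preconditioning matrix under the names
>     "LSC_L" and "LSC_Lp"):
> 
>          # after pc.setUp(), grab the Schur preconditioning matrix
>          sub_ksps = ksp.getPC().getFieldSplitSubKSP()
>          S = sub_ksps[1].getOperators()[1]
>          S.compose("LSC_L", L_scaled)    # hypothetical scaled matrices
>          S.compose("LSC_Lp", Lp_scaled)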
> 
>     And finally, more of an observation and question: I noticed that
>     the AMG approximations for the velocity block become worse as the
>     Reynolds number increases when using the default options. However,
>     with -pc_hypre_boomeramg_relax_weight_all 0.0, boomeramg performed
>     much more robustly w.r.t. the Reynolds number. Are there any other
>     ways to improve the AMG performance in this regard?
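> 
>     For completeness, these are the kinds of BoomerAMG knobs I have
>     been varying (the values are just my experiments, not verified
>     recommendations):
> 
>          -pc_hypre_boomeramg_relax_weight_all 0.0
>          -pc_hypre_boomeramg_relax_type_all sequential-Gauss-Seidel
>          -pc_hypre_boomeramg_coarsen_type HMIS
>          -pc_hypre_boomeramg_interp_type ext+i
>          -pc_hypre_boomeramg_strong_threshold 0.5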
> 
>     Thanks a lot in advance, and I look forward to your reply,
> 
>     Sebastian
> 
>     --
>     Dr. Sebastian Blauth
>     Fraunhofer-Institut für
>     Techno- und Wirtschaftsmathematik ITWM
>     Abteilung Transportvorgänge
>     Fraunhofer-Platz 1, 67663 Kaiserslautern
>     Telefon: +49 631 31600-4968
>     sebastian.blauth at itwm.fraunhofer.de
>     https://www.itwm.fraunhofer.de
> 
> -- 
> What most experimenters take for granted before they begin their 
> experiments is infinitely more interesting than any results to which 
> their experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/

-- 
Dr. Sebastian Blauth
Fraunhofer-Institut für
Techno- und Wirtschaftsmathematik ITWM
Abteilung Transportvorgänge
Fraunhofer-Platz 1, 67663 Kaiserslautern
Telefon: +49 631 31600-4968
sebastian.blauth at itwm.fraunhofer.de
www.itwm.fraunhofer.de

