<html xmlns:v="urn:schemas-microsoft-com:vml" xmlns:o="urn:schemas-microsoft-com:office:office" xmlns:w="urn:schemas-microsoft-com:office:word" xmlns:m="http://schemas.microsoft.com/office/2004/12/omml" xmlns="http://www.w3.org/TR/REC-html40"><head><meta http-equiv=Content-Type content="text/html; charset=iso-8859-1"><meta name=Generator content="Microsoft Word 15 (filtered medium)"><style><!--
/* Font Definitions */
@font-face
{font-family:Wingdings;
panose-1:5 0 0 0 0 0 0 0 0 0;}
@font-face
{font-family:"Cambria Math";
panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
{font-family:Calibri;
panose-1:2 15 5 2 2 2 4 3 2 4;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
{margin:0cm;
font-size:11.0pt;
font-family:"Calibri",sans-serif;
mso-fareast-language:EN-US;}
.MsoChpDefault
{mso-style-type:export-only;
font-family:"Calibri",sans-serif;
mso-fareast-language:EN-US;}
@page WordSection1
{size:612.0pt 792.0pt;
margin:70.85pt 70.85pt 2.0cm 70.85pt;}
div.WordSection1
{page:WordSection1;}
/* List Definitions */
@list l0
{mso-list-id:1567689550;
mso-list-type:hybrid;
mso-list-template-ids:1740681278 -51612426 67567619 67567621 67567617 67567619 67567621 67567617 67567619 67567621;}
@list l0:level1
{mso-level-start-at:0;
mso-level-number-format:bullet;
mso-level-text:-;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Calibri",sans-serif;
mso-fareast-font-family:Calibri;}
@list l0:level2
{mso-level-number-format:bullet;
mso-level-text:o;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Courier New";}
@list l0:level3
{mso-level-number-format:bullet;
mso-level-text:\F0A7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Wingdings;}
@list l0:level4
{mso-level-number-format:bullet;
mso-level-text:\F0B7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Symbol;}
@list l0:level5
{mso-level-number-format:bullet;
mso-level-text:o;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Courier New";}
@list l0:level6
{mso-level-number-format:bullet;
mso-level-text:\F0A7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Wingdings;}
@list l0:level7
{mso-level-number-format:bullet;
mso-level-text:\F0B7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Symbol;}
@list l0:level8
{mso-level-number-format:bullet;
mso-level-text:o;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:"Courier New";}
@list l0:level9
{mso-level-number-format:bullet;
mso-level-text:\F0A7;
mso-level-tab-stop:none;
mso-level-number-position:left;
text-indent:-18.0pt;
font-family:Wingdings;}
ol
{margin-bottom:0cm;}
ul
{margin-bottom:0cm;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext="edit" spidmax="1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext="edit">
<o:idmap v:ext="edit" data="1" />
</o:shapelayout></xml><![endif]--></head><body lang=DE link="#0563C1" vlink="#954F72" style='word-wrap:break-word'><div class=WordSection1><p class=MsoNormal>Hello,<o:p></o:p></p><p class=MsoNormal><o:p> </o:p></p><p class=MsoNormal><span lang=EN-US>I am having a problem using / configuring PETSc to obtain a scalable solver for the incompressible Navier Stokes equations. I am discretizing the equations using FEM (with the library fenics) and I am using the stable P2-P1 Taylor-Hood elements. I have read and tried a lot regarding preconditioners for incompressible Navier Stokes and I am aware that this is very much an active research field, but maybe I can get some hints / tips. <o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>I am interested in solving large-scale 3D problems, but I cannot even set up a scaleable 2D solver for the problems. All of my approaches at the moment are trying to use a Schur Complement approach, but I cannot get a “good” preconditioner for the Schur complement matrix. For the velocity block, I am using the AMG provided by hypre (which seems to work fine and is most likely not the problem).<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>To test the solver, I am using a simple 2D channel flow problem with do-nothing conditions at the outlet.<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>I am facing the following difficulties at the moment:<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>- First, I am having trouble with using -pc_fieldsplit_schur_precondition selfp. With this setup, the cost for solving the Schur complement part in the fieldsplit preconditioner (approximately) increase when the mesh is refined. I am using the following options for this setup (note that I am using exact solves for the velocity part to debug, but using, e.g., gmres with hypre boomeramg reaches a given tolerance with a number of iterations that is independent of the mesh)<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US> -ksp_type fgmres<o:p></o:p></span></p><p class=MsoNormal> -ksp_rtol 1e-6<o:p></o:p></p><p class=MsoNormal> <span lang=EN-US>-ksp_atol 1e-30<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -pc_type fieldsplit<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -pc_fieldsplit_type schur<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -pc_fieldsplit_schur_fact_type full<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -pc_fieldsplit_schur_precondition selfp<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -fieldsplit_0_ksp_type preonly<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -fieldsplit_0_pc_type lu<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -fieldsplit_1_ksp_type gmres<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -fieldsplit_1_ksp_pc_side right<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -fieldsplit_1_ksp_max_it 1000<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -fieldsplit_1_ksp_rtol 1e-1<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -fieldsplit_1_ksp_atol 1e-30<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -fieldsplit_1_pc_type lu<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -fieldsplit_1_ksp_converged_reason<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US> -ksp_converged_reason<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>Note, that I use direct solvers for the subproblems to get an “ideal” convergence. Even if I replace the direct solver with boomeramg, the behavior is the same and the number of iterations does not change much. <o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>In particular, I get the following behavior:<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>For a 8x8 mesh, I need, on average, 25 iterations to solve fieldsplit_1<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>For a 16x16 mesh, I need 40 iterations<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>For a 32x32 mesh, I need 70 iterations<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>For a 64x64 mesh, I need 100 iterations<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>However, the outer fgmres requires, as expected, always the same number of iterations to reach convergence (as expected).<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>I do understand that the selfp preconditioner for the Schur complement is expected to deteriorate as the Reynolds number increases and the problem becomes more convective in nature, but I had hoped that I can at least get a scaleable preconditioner with respect to the mesh size out of it. Are there any tips on how to achieve this?<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>My second problem is concerning the LSC preconditioner. When I am using this, again both with exact solves of the linear problems or when using boomeramg, I do not get a scalable solver with respect to the mesh size. On the contrary, here the number of solves required for solving fieldsplit_1 to a fixed relative tolerance seem to behave linearly w.r.t. the problem size. For this problem, I suspect that the issue lies in the scaling of the LSC preconditioner matrices (in the book of Elman, Sylvester and Wathen, the matrices are scaled with the inverse of the diagonal velocity mass matrix). Is it possible to achieve this with PETSc? I started experimenting with supplying the velocity mass matrix as preconditioner matrix and using “use_amat”, but I am not sure where / how to do it this way.<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>And finally, more of an observation and question: I noticed that the AMG approximations for the velocity block became worse with increase of the Reynolds number when using the default options. However, when using -pc_hypre_boomeramg_relax_weight_all 0.0 I noticed that boomeramg performed way more robustly w.r.t. the Reynolds number. Are there any other ways to improve the AMG performance in this regard?<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US>Thanks a lot in advance and I am looking forward to your reply,<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US>Sebastian<o:p></o:p></span></p><p class=MsoNormal><span lang=EN-US><o:p> </o:p></span></p><p class=MsoNormal><span lang=EN-US style='mso-fareast-language:DE'>--<o:p></o:p></span></p><p class=MsoNormal><span style='mso-fareast-language:DE'>Dr. Sebastian Blauth<o:p></o:p></span></p><p class=MsoNormal><span style='mso-fareast-language:DE'>Fraunhofer-Institut für<o:p></o:p></span></p><p class=MsoNormal><span style='mso-fareast-language:DE'>Techno- und Wirtschaftsmathematik ITWM<o:p></o:p></span></p><p class=MsoNormal><span style='mso-fareast-language:DE'>Abteilung Transportvorgänge<o:p></o:p></span></p><p class=MsoNormal><span style='mso-fareast-language:DE'>Fraunhofer-Platz 1, 67663 Kaiserslautern<o:p></o:p></span></p><p class=MsoNormal><span style='mso-fareast-language:DE'>Telefon: +49 631 31600-4968<o:p></o:p></span></p><p class=MsoNormal><span style='mso-fareast-language:DE'>sebastian.blauth@itwm.fraunhofer.de<o:p></o:p></span></p><p class=MsoNormal><span style='mso-fareast-language:DE'>https://www.itwm.fraunhofer.de<o:p></o:p></span></p><p class=MsoNormal><o:p> </o:p></p></div></body></html>