<html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"></head><body dir="auto"><div dir="ltr"></div><div dir="ltr">OpenMP is definitely linked in and appears in the stacktrace but I haven’t asked for any threads (to my knowledge).</div><div dir="ltr"><br><blockquote type="cite">On Apr 13, 2023, at 7:03 PM, Mark Adams <mfadams@lbl.gov> wrote:<br><br></blockquote></div><blockquote type="cite"><div dir="ltr"><div dir="ltr">Are you using OpenMP? ("OMP").<div>If so try without it.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Apr 13, 2023 at 5:07 PM Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com">alexlindsay239@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Here's the result.<div><br></div><div> 0 KSP unpreconditioned resid norm 1.033076851740e+03 true resid norm 1.033076851740e+03 ||r(i)||/||b|| 1.000000000000e+00<br> Residual norms for fieldsplit_u_ solve.<br> 0 KSP Residual norm -nan <br> Residual norms for fieldsplit_p_ solve.<br> 0 KSP Residual norm -nan <br> Residual norms for fieldsplit_u_ solve.<br> 0 KSP Residual norm -nan <br> 1 KSP Residual norm -nan <br> Residual norms for fieldsplit_u_ solve.<br> 0 KSP Residual norm -nan <br> Linear solve did not converge due to DIVERGED_PC_FAILED iterations 0<br> PC failed due to SUBPC_ERROR</div><div><br></div><div>I probably should have read the FAQ on `-fp_trap` before sending my first email. 
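For readers landing on this thread: the `-fp_trap` workflow referenced here is, roughly, to rerun the solve with trapping enabled so PETSc stops at the first floating-point exception instead of letting NaNs propagate through the Krylov solve. A minimal sketch (the executable and input-file names are placeholders, not from this thread):

```shell
# Trap the first floating-point exception (SIGFPE); PETSc then reports
# a stack trace at the offending operation instead of propagating NaN.
mpiexec -n 4 ./app-opt -i input.i -fp_trap

# Alternatively, attach gdb to each rank to inspect the faulting frame:
mpiexec -n 4 ./app-opt -i input.i -start_in_debugger gdb
```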
</div><div><br></div><div>Working with this stack trace</div><div><br></div><div> (gdb) bt<br></div>#0 0x00007fffe83a4286 in hypre_ParMatmul._omp_fn.1 () at par_csr_matop.c:1124<br>#1 0x00007ffff4982a16 in GOMP_parallel () from /lib/x86_64-linux-gnu/libgomp.so.1<br>#2 0x00007fffe83abfd1 in hypre_ParMatmul (A=<optimized out>, B=B@entry=0x55555da2ffa0) at par_csr_matop.c:967<br>#3 0x00007fffe82f09bf in hypre_BoomerAMGSetup (amg_vdata=<optimized out>, A=<optimized out>, f=<optimized out>, <br> u=<optimized out>) at par_amg_setup.c:2790<br>#4 0x00007fffe82d54f0 in HYPRE_BoomerAMGSetup (solver=<optimized out>, A=<optimized out>, b=<optimized out>, <br> x=<optimized out>) at HYPRE_parcsr_amg.c:47<br>#5 0x00007fffe940d33c in PCSetUp_HYPRE (pc=<optimized out>)<br> at /home/lindad/projects/moose/petsc/src/ksp/pc/impls/hypre/hypre.c:418<br>#6 0x00007fffe9413d87 in PCSetUp (pc=0x55555d5ef390)<br> at /home/lindad/projects/moose/petsc/src/ksp/pc/interface/precon.c:1017<br>#7 0x00007fffe94f856b in KSPSetUp (ksp=ksp@entry=0x55555d5eecb0)<br> at /home/lindad/projects/moose/petsc/src/ksp/ksp/interface/itfunc.c:408<br>#8 0x00007fffe94fa6f4 in KSPSolve_Private (ksp=ksp@entry=0x55555d5eecb0, b=0x55555d619730, x=<optimized out>)<br> at /home/lindad/projects/moose/petsc/src/ksp/ksp/interface/itfunc.c:852<br>#9 0x00007fffe94fd8b1 in KSPSolve (ksp=ksp@entry=0x55555d5eecb0, b=<optimized out>, x=<optimized out>)<br> at /home/lindad/projects/moose/petsc/src/ksp/ksp/interface/itfunc.c:1086<br>#10 0x00007fffe93d84a1 in PCApply_FieldSplit_Schur (pc=0x555555bef790, x=0x555556d5a510, y=0x555556d59e30)<br> at /home/lindad/projects/moose/petsc/src/ksp/pc/impls/fieldsplit/fieldsplit.c:1185<br>#11 0x00007fffe9414484 in PCApply (pc=pc@entry=0x555555bef790, x=x@entry=0x555556d5a510, y=y@entry=0x555556d59e30)<br> at /home/lindad/projects/moose/petsc/src/ksp/pc/interface/precon.c:445<br>#12 0x00007fffe9415ad7 in PCApplyBAorAB (pc=0x555555bef790, side=PC_RIGHT, x=0x555556d5a510, <br> 
y=y@entry=0x555556e922a0, work=0x555556d59e30)<br> at /home/lindad/projects/moose/petsc/src/ksp/pc/interface/precon.c:727<br>#13 0x00007fffe9451fcd in KSP_PCApplyBAorAB (w=<optimized out>, y=0x555556e922a0, x=<optimized out>, <br> ksp=0x555556068fc0) at /home/lindad/projects/moose/petsc/include/petsc/private/kspimpl.h:421<br>#14 KSPGMRESCycle (itcount=itcount@entry=0x7fffffffcca0, ksp=ksp@entry=0x555556068fc0)<br> at /home/lindad/projects/moose/petsc/src/ksp/ksp/impls/gmres/gmres.c:162<br>#15 0x00007fffe94536f9 in KSPSolve_GMRES (ksp=0x555556068fc0)<br> at /home/lindad/projects/moose/petsc/src/ksp/ksp/impls/gmres/gmres.c:247<br>#16 0x00007fffe94fb1c4 in KSPSolve_Private (ksp=0x555556068fc0, b=b@entry=0x55555568e510, x=<optimized out>, <br> x@entry=0x55555607cce0) at /home/lindad/projects/moose/petsc/src/ksp/ksp/interface/itfunc.c:914<br>#17 0x00007fffe94fd8b1 in KSPSolve (ksp=<optimized out>, b=b@entry=0x55555568e510, x=x@entry=0x55555607cce0)<br> at /home/lindad/projects/moose/petsc/src/ksp/ksp/interface/itfunc.c:1086<br>#18 0x00007fffe9582850 in SNESSolve_NEWTONLS (snes=0x555556065610)<br> at /home/lindad/projects/moose/petsc/src/snes/impls/ls/ls.c:225<br>#19 0x00007fffe959c7ee in SNESSolve (snes=0x555556065610, b=0x0, x=<optimized out>)<br> at /home/lindad/projects/moose/petsc/src/snes/interface/snes.c:4809<br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Apr 13, 2023 at 1:54 PM Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div><br></div> It would be useful to see the convergences inside the linear solve so perhaps start with <div><br></div><div>-ksp_monitor_true_residual </div><div><br></div><div>-fieldsplit_u_ksp_type richardson (this is to allow the monitor below to work)</div><div>-fieldsplit_u_ksp_max_its 1 
</div><div>-fieldsplit_u_ksp_monitor</div><div><br></div><div>Perhaps others, Matt/Jed/Pierre/Stefano likely know better off the cuff than me.</div><div><br><div>We should have a convenience option like -pc_fieldsplit_schur_monitor similar to the -pc_fieldsplit_gkb_monitor</div><div><br></div><div><br></div><div><br><blockquote type="cite"><div>On Apr 13, 2023, at 4:33 PM, Alexander Lindsay <<a href="mailto:alexlindsay239@gmail.com" target="_blank">alexlindsay239@gmail.com</a>> wrote:</div><br><div><div dir="ltr">Hi, I'm trying to solve steady Navier-Stokes for different Reynolds numbers. My options table<div><br></div><div>-dm_moose_fieldsplit_names u,p<br>-dm_moose_nfieldsplits 2<br>-fieldsplit_p_dm_moose_vars pressure<br>-fieldsplit_p_ksp_type preonly<br>-fieldsplit_p_pc_type jacobi<br>-fieldsplit_u_dm_moose_vars vel_x,vel_y<br>-fieldsplit_u_ksp_type preonly<br>-fieldsplit_u_pc_hypre_type boomeramg<br>-fieldsplit_u_pc_type hypre<br>-pc_fieldsplit_schur_fact_type full<br>-pc_fieldsplit_schur_precondition selfp<br>-pc_fieldsplit_type schur<br>-pc_type fieldsplit<br></div><div><br></div><div>works wonderfully for a low Reynolds number of 2.2. The solver performance crushes LU as I scale up the problem. However, not surprisingly this options table struggles when I bump the Reynolds number to 220. I've read that use of AIR (approximate ideal restriction) can improve performance for advection dominated problems. I've tried setting -pc_hypre_boomeramg_restriction_type 1 for a simple diffusion problem and the option works fine. 
However, when applying it to my field-split preconditioned Navier-Stokes system, I get immediate non-convergence:</div><div><br></div><blockquote style="margin:0px 0px 0px 40px;border:none;padding:0px"><div> 0 Nonlinear |R| = 1.033077e+03</div><div> 0 Linear |R| = 1.033077e+03</div><div> Linear solve did not converge due to DIVERGED_NANORINF iterations 0</div><div>Nonlinear solve did not converge due to DIVERGED_LINEAR_SOLVE iterations 0</div><div><br></div></blockquote>Does anyone have an idea as to why this might be happening? If not, I'd take a suggestion on where to set a breakpoint to start my own investigation. Alternatively, I welcome other preconditioning suggestions for an advection dominated problem.<div><br></div><div>Alex</div></div>
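For context on the options table above: with `-pc_fieldsplit_type schur` PETSc views the Jacobian in 2x2 block form, and `-pc_fieldsplit_schur_precondition selfp` asks it to assemble an explicit, diagonal-based approximation to the Schur complement rather than the exact one (a sketch of the standard relations; see the `PCFieldSplitSetSchurPre` documentation):

```latex
J = \begin{pmatrix} A_{00} & A_{01} \\ A_{10} & A_{11} \end{pmatrix},
\qquad
S = A_{11} - A_{10} A_{00}^{-1} A_{01},
\qquad
S_p = A_{11} - A_{10}\,\operatorname{diag}(A_{00})^{-1} A_{01},
```

where $A_{00}$ is the velocity-velocity block (the one handed to BoomerAMG via `fieldsplit_u_` here), $S$ is never assembled, and $S_p$ is the matrix the `fieldsplit_p_` solver actually preconditions with.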
</div></blockquote></div><br></div></div></blockquote></div>
</blockquote></div>
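A note on the OpenMP question in this thread: a hypre build configured with OpenMP parallelizes kernels such as `hypre_ParMatmul` through `GOMP_parallel` (visible in the backtrace above) whether or not the application ever requests threads, and `OMP_NUM_THREADS` typically defaults to the full core count when unset. A quick way to rule OpenMP out without rebuilding (the executable and input names are placeholders):

```shell
# Force hypre's OpenMP-parallel kernels onto a single thread per MPI
# rank, effectively taking OpenMP out of the picture for this test:
export OMP_NUM_THREADS=1
mpiexec -n 4 ./app-opt -i input.i
```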
</div></blockquote></body></html>