[petsc-users] Some hypre settings for 3D fully coupled incompressible NS
Edoardo alinovi
edoardo.alinovi at gmail.com
Fri Nov 8 11:45:55 CST 2024
Hello petsc friends,
I have been trying to find a good setup for my coupled solver for a while
now. Recently, I ran a scan with Dakota (more than 1k simulations) on the
Windsor body case with 7 million cells, on 36 cores on my small home server
(Dell R730 with 2x 2496 v4 Xeon). I thought it would be a good idea to
share my results with the community!
Here is a summary of my findings:
1) Multiplicative is faster than Schur: I have found that the Schur
preconditioner is rarely faster than multiplicative, despite the fact that
Schur keeps the number of iterations lower. I think there is a lot of room
for improvement as far as FV matrices are concerned. A custom Shat is
probably the way to go, but it is not easy to find a good one! So far,
"selfp" looks to be the only good, ready-to-go choice.
2) Vanilla fbcgs is faster than vanilla fgmres: maybe the gmres restart
could be tuned here; I have not tried this systematically.
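If someone wants to revisit fgmres, the restart length is the obvious knob to scan (the PETSc default is 30; the value below is illustrative, not a tuned recommendation):

```
# fgmres with a larger restart, the knob left unexplored above
-ksp_type fgmres
-ksp_gmres_restart 50
```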
3) Stick with preonly: using bcgs/cg as the inner (preconditioner) KSP
lowers the number of outer iterations, but it adds a lot of overhead (even
with few inner iterations or mild tolerances).
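For completeness, the inner-KSP variant I found too costly looks roughly like this (option names assume the usual fieldsplit prefixes; the iteration cap and tolerance here are examples, not the exact values I scanned):

```
# Inner Krylov solve on the pressure block instead of preonly:
# fewer outer iterations, but more time per iteration overall
-fieldsplit_p_ksp_type bcgs
-fieldsplit_p_ksp_max_it 5
-fieldsplit_p_ksp_rtol 1e-2
```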
4) Staging is a good idea: beyond raw iteration performance, I think that
for steady-state problems it is worth capping the number of outer
iterations in fieldsplit, since the early outer iterations cost a lot and
you will be far from convergence at that stage anyway, so pushing hard on
them is a poor investment.
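In PETSc-option terms, that cap is just a max-iteration limit on the outer solver (the value below is illustrative; what works will depend on the case):

```
# Cap the outer Krylov iterations per steady-state step
-ksp_max_it 20
-ksp_converged_reason
```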
5) Here are my best settings so far:
# Outer solver settings
"solver": "fbcgs",
"preconditioner": "fieldsplit",
"absTol": 1e-6,
"relTol": 0.01,
# Field split KSP and PC
"fieldsplit_u_pc_type": "bjacobi",
"fieldsplit_p_pc_type": "hypre",
"fieldsplit_u_ksp_type": "preonly",
"fieldsplit_p_ksp_type": "preonly",
# HYPRE PC options
"fieldsplit_p_pc_hypre_boomeramg_strong_threshold": 0.05,
"fieldsplit_p_pc_hypre_boomeramg_coarsen_type": "PMIS",
"fieldsplit_p_pc_hypre_boomeramg_truncfactor": 0.3,
"fieldsplit_p_pc_hypre_boomeramg_no_cf": 0,
"fieldsplit_p_pc_hypre_boomeramg_agg_nl": 1,
"fieldsplit_p_pc_hypre_boomeramg_agg_num_paths": 1,
"fieldsplit_p_pc_hypre_boomeramg_P_max": 0,
"fieldsplit_p_pc_hypre_boomeramg_max_levels": 30,
"fieldsplit_p_pc_hypre_boomeramg_relax_type_all": "backward-SOR/Jacobi",
"fieldsplit_p_pc_hypre_boomeramg_interp_type": "ext+i",
"fieldsplit_p_pc_hypre_boomeramg_grid_sweeps_down": 0,
"fieldsplit_p_pc_hypre_boomeramg_grid_sweeps_up": 2,
"fieldsplit_p_pc_hypre_boomeramg_cycle_type": "v"
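For anyone driving PETSc directly rather than through my solver's config dictionary, the settings above translate into a PETSc options file roughly as follows (my reading of the keys: absTol/relTol map to -ksp_atol/-ksp_rtol, and I am assuming multiplicative fieldsplit as per point 1; the no_cf entry with value 0 is the default, so I have omitted it):

```
# Outer solver
-ksp_type fbcgs
-ksp_atol 1e-6
-ksp_rtol 0.01
-pc_type fieldsplit
-pc_fieldsplit_type multiplicative
# Per-field KSP and PC
-fieldsplit_u_ksp_type preonly
-fieldsplit_u_pc_type bjacobi
-fieldsplit_p_ksp_type preonly
-fieldsplit_p_pc_type hypre
-fieldsplit_p_pc_hypre_type boomeramg
# BoomerAMG options for the pressure block
-fieldsplit_p_pc_hypre_boomeramg_strong_threshold 0.05
-fieldsplit_p_pc_hypre_boomeramg_coarsen_type PMIS
-fieldsplit_p_pc_hypre_boomeramg_truncfactor 0.3
-fieldsplit_p_pc_hypre_boomeramg_agg_nl 1
-fieldsplit_p_pc_hypre_boomeramg_agg_num_paths 1
-fieldsplit_p_pc_hypre_boomeramg_P_max 0
-fieldsplit_p_pc_hypre_boomeramg_max_levels 30
-fieldsplit_p_pc_hypre_boomeramg_relax_type_all backward-SOR/Jacobi
-fieldsplit_p_pc_hypre_boomeramg_interp_type ext+i
-fieldsplit_p_pc_hypre_boomeramg_grid_sweeps_down 0
-fieldsplit_p_pc_hypre_boomeramg_grid_sweeps_up 2
-fieldsplit_p_pc_hypre_boomeramg_cycle_type v
```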
I have a question for Barry/Jed/Matt. I have noticed that most commercial
solvers use what I would describe as "SAMG with an ILU smoother". I am
wondering if there is a way to reproduce this in PETSc. I have tried
PCPATCH to test Vanka, but I am not really able to use that PC since I am
not using DMPlex. With this recipe I am not miles away from Fluent on the
same problem; yet, I am wondering why commercial solvers do not use
fieldsplit.
I hope this is helpful, and of course I am happy to collaborate on this
topic if someone out there is willing!
Cheers,
Edoardo