[petsc-users] Some hypre settings for 3D fully coupled incompressible NS
Jed Brown
jed at jedbrown.org
Tue Nov 12 22:26:14 CST 2024
To answer your question, I think the commercial solvers are not using "SAMG with ILU smoother" on the velocity-pressure coupled system, but rather a splitting technique or Schur-like reduction, with AMG applied to the pressure system and AMG with ILU smoothers (or straight ILU) applied to the momentum system. ILU smoothing is often chosen when there is strong anisotropy (usually from boundary layers) that is not captured in the coarsening, or when the transport-dominated part prevents effective smoothing with more typical smoothers. ILU theory and smoothing properties are not very nice, but it is still often pragmatic.
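For concreteness, a rough sketch of that arrangement in PETSc options (the split names u/p, the factorization type, and selfp here are assumptions chosen to illustrate the shape, not a tuned recommendation):

  -pc_type fieldsplit -pc_fieldsplit_type schur
  -pc_fieldsplit_schur_fact_type lower
  -pc_fieldsplit_schur_precondition selfp
  -fieldsplit_p_pc_type hypre -fieldsplit_p_pc_hypre_type boomeramg
  -fieldsplit_u_pc_type bjacobi -fieldsplit_u_sub_pc_type ilu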
See -pc_hypre_boomeramg_smooth_type ilu (which activates more sub-options) if you want to try that out while sticking with hypre.
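Something along these lines, as a sketch (smooth_num_levels is the one sub-option I am fairly sure of; run with -help after enabling the smoother to see what else hypre exposes, and prepend your fieldsplit prefix if hypre sits on a sub-block):

  -pc_type hypre -pc_hypre_type boomeramg
  -pc_hypre_boomeramg_smooth_type ilu
  -pc_hypre_boomeramg_smooth_num_levels 25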
It's worth checking whether making the `u` solver stronger has any significant impact on convergence, and distinguishing the accuracy impact of using multiplicative/selfp (even with an accurate solve of that subsystem) separately from the approximation incurred by using `preonly` (which you'll almost always want to do). You may have already sorted this out in your empirical study.
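One way to run that experiment from the options database (tolerances are placeholders; this is for diagnosis, not production): temporarily solve the u-block tightly and watch whether the outer iteration count changes.

  -fieldsplit_u_ksp_type gmres -fieldsplit_u_ksp_rtol 1e-8
  -fieldsplit_u_ksp_converged_reason
  -ksp_monitor -ksp_converged_reason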
Edoardo alinovi <edoardo.alinovi at gmail.com> writes:
> Hello petsc friends,
>
> It's been a while since I am trying to find a good setup for my coupled
> solver.
>
> Recently, I have run a parameter scan with Dakota (more than 1k simulations) on the
> Windsor body case with 7 million cells, using 36 cores on my small home server (Dell
> R730 with 2x 2496 v4 Xeon). I thought it was a good idea to share my results
> with the community!
>
> Here is a summary of my findings:
>
> 1) Multiplicative is faster than Schur: I have found that the Schur
> preconditioner is rarely faster than multiplicative, even though Schur
> keeps the iteration count lower. I think there is a lot of room for
> improvement as far as FV matrices are concerned. A custom Shat is probably
> the way to go, but it is not easy to find a good one! Up to now, "selfp" looks
> to be the only good, ready-to-go choice (options sketch below).
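> For reference, the two configurations I am comparing are roughly (a sketch only,
> with the rest of the fieldsplit setup unchanged):
>
>   -pc_fieldsplit_type multiplicative
>
> versus
>
>   -pc_fieldsplit_type schur
>   -pc_fieldsplit_schur_precondition selfp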
>
> 2) Vanilla fbcgs is faster than vanilla fgmres: maybe the GMRES restart could
> be tuned here, but I have not tried this systematically (sketch below).
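> As a sketch, the comparison is between something like
>
>   -ksp_type fbcgs
>
> and
>
>   -ksp_type fgmres -ksp_gmres_restart 100
>
> where the restart value is only illustrative, not something I have tested.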
>
> 3) Stick with preonly: using bcgs/cg as the inner (preconditioner) KSP lowers
> the number of outer iterations, but it adds a lot of overhead, even with few
> inner iterations or mild tolerances (sketch below).
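> In option form, the comparison is roughly (the inner iteration count is just an
> example, not a tuned value):
>
>   -fieldsplit_u_ksp_type preonly
>
> versus
>
>   -fieldsplit_u_ksp_type bcgs -fieldsplit_u_ksp_max_it 3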
>
> 4) Staging is a good idea: beyond raw iteration performance, I think that
> for steady-state problems it is worth capping the outer iterations of the
> fieldsplit-preconditioned solve, since the early iterations cost a lot and you
> will be far from convergence at that stage anyway, so pushing hard on them is
> not a good investment (sketch below).
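> Concretely, that just means capping the coupled (outer) KSP during the first
> steady-state sweeps, e.g. something like
>
>   -ksp_max_it 20
>
> where the cap value is only an example and is relaxed once the solution is
> closer to convergence.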
>
> 5) Here are my best settings so far:
>
> # Outer solver settings
> "solver": "fbcgs",
> "preconditioner": "fieldsplit",
> "absTol": 1e-6,
> "relTol": 0.01,
>
> # Field split KSP and PC
> "fieldsplit_u_pc_type": "bjacobi",
> "fieldsplit_p_pc_type": "hypre",
> "fieldsplit_u_ksp_type": "preonly",
> "fieldsplit_p_ksp_type": "preonly",
>
> ! HYPRE PC options
> "fieldsplit_p_pc_hypre_boomeramg_strong_threshold": 0.05,
> "fieldsplit_p_pc_hypre_boomeramg_coarsen_type": "PMIS",
> "fieldsplit_p_pc_hypre_boomeramg_truncfactor": 0.3,
> "fieldsplit_p_pc_hypre_boomeramg_no_cf": 0,
> "fieldsplit_p_pc_hypre_boomeramg_agg_nl": 1,
> "fieldsplit_p_pc_hypre_boomeramg_agg_num_paths": 1,
> "fieldsplit_p_pc_hypre_boomeramg_P_max": 0,
> "fieldsplit_p_pc_hypre_boomeramg_max_levels": 30,
> "fieldsplit_p_pc_hypre_boomeramg_relax_type_all":
> "backward-SOR/Jacobi",
> "fieldsplit_p_pc_hypre_boomeramg_interp_type": "ext+i",
> "fieldsplit_p_pc_hypre_boomeramg_grid_sweeps_down": 0,
> "fieldsplit_p_pc_hypre_boomeramg_grid_sweeps_up": 2,
> "fieldsplit_p_pc_hypre_boomeramg_cycle_type": "v"
>
> I have a question for Barry/Jed/Matt. I have noticed that most of the
> commercial solvers use what I would call "SAMG with an ILU smoother". I am
> wondering if there is a way to reproduce this in PETSc. I have tried
> PCPATCH to test Vanka, but I am not really able to use that PC since I am not
> using DMPlex. With this recipe I am not miles away from Fluent on the same
> problem. Still, I am wondering why commercial solvers do not use fieldsplit.
>
> Hope this can be helpful, and of course I am happy to collaborate on this
> topic if someone out there is willing to!
>
> Cheers,
>
> Edoardo