[petsc-users] Expected weak scaling behaviour for AMG libraries?
Matthew Knepley
knepley at gmail.com
Wed Oct 30 22:02:14 CDT 2024
On Wed, Oct 30, 2024 at 4:13 PM Khurana, Parv <p.khurana22 at imperial.ac.uk>
wrote:
> Hello PETSc Community,
> I am trying to understand the scaling behaviour of AMG methods in PETSc
> (Hypre for now) and how many DOFs/Rank are needed for a performant AMG
> solve.
> I’m currently conducting weak scaling tests using
> src/snes/tutorials/ex12.c in 3D, applying Dirichlet BCs with FEM at P=1.
> The tests keep DOFs per processor constant while increasing the mesh size
> and processor count, specifically:
>
> - *20000 and 80000 DOF/RANK* configurations.
> - Running SNES twice, using GMRES with a tolerance of 1e-5 and
> preconditioning with Hypre-BoomerAMG.
>
> A couple of quick points to make sure there is no confusion:
1) Partitioner type "simple" is for the CI. It produces a very bad
partition and should not be used for timing. The default is ParMetis,
which should be good enough.
2) You start out with 6^3 = 216 elements, distribute that, and then refine
it. This gives _really_ bad load balance for any process count that is not
a divisor of 216. You usually want to start out with something bigger at
the larger scales. You can use -dm_refine_pre to refine before
distribution (see the sketch below).
3) It is not clear that you are timing just the solver (SNESSolve); it
could be that extraneous things are taking the time. When asking questions
like this, please always send the output of -log_view for timing, and at
least -ksp_monitor_true_residual for convergence.
4) SNES ex56 is the example we use for GAMG scalability testing.
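For concreteness, here is an untested sketch of how your options might
change to follow 1)-3). The split between -dm_refine_pre and -dm_refine is
only illustrative; choose it so the pre-distribution mesh has enough
elements to balance across your largest process count:

  #Mesh and partitioning (points 1 and 2)
  -petscpartitioner_type parmetis #or simply drop -petscpartitioner_type simple
  -dm_plex_box_faces 6,6,6
  -dm_refine_pre 2 #refine before distribution (illustrative value)
  -dm_refine 3 #remaining refinement after distribution

  #Timing and convergence output (point 3)
  -log_view
  -ksp_monitor_true_residual
  -ksp_converged_reason

Then compare the SNESSolve and KSPSolve events in the -log_view table
across process counts, rather than the total runtime.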
Thanks,
Matt
> Unfortunately, parallel efficiency degrades noticeably with increased
> processor counts. Are there any insights or rules of thumb for using AMG
> more effectively? I have been looking at this issue for a while
> now and would love to discuss it further. Please find below the
> weak scaling results and the options I use to run the tests.
> *#Run type*
> -run_type full
> -petscpartitioner_type simple
>
> *#Mesh settings*
> -dm_plex_dim 3
> -dm_plex_simplex 1
> -dm_refine 5 #Varied this
> -dm_plex_box_faces 6,6,6
>
> *#BCs and FEM space*
> -bc_type dirichlet
> -petscspace_degree 1
>
> *#Solver settings*
> -snes_max_it 2
> -ksp_type gmres
> -ksp_rtol 1.0e-5
> #Same settings as what we use for LOR
> -pc_type hypre
> -pc_hypre_type boomeramg
> -pc_hypre_boomeramg_coarsen_type hmis
> -pc_hypre_boomeramg_relax_type_all symmetric-sor/jacobi
> -pc_hypre_boomeramg_strong_threshold 0.7
> -pc_hypre_boomeramg_interp_type ext+i
> -pc_hypre_boomeramg_P_max 2
> -pc_hypre_boomeramg_truncfactor 0.3
>
> Best,
> Parv
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/
[Attachment: image.png (119488 bytes), presumably the weak scaling results:
http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20241030/4ee72cd1/attachment-0001.png]