[petsc-users] GAMG preconditioning

Milan Pelletier milan.pelletier at protonmail.com
Mon Apr 12 10:34:23 CDT 2021

Dear all,

I am currently trying to use PETSc with the CG solver and the GAMG preconditioner.
I have started with the following set of parameters:
-ksp_type cg
-pc_type gamg
-pc_gamg_agg_nsmooths 1
-pc_gamg_threshold 0.02
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type sor
-mg_levels_ksp_max_it 2
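For completeness, the same configuration can also be set programmatically. Below is a minimal sketch of how I set things up (the function name `solve_with_gamg` is just for illustration, and it assumes the Mat `A` and Vecs `b`, `x` are created and assembled elsewhere; error handling follows the `ierr`/`CHKERRQ` convention of this PETSc release):

```c
#include <petscksp.h>

/* Illustrative sketch only: configure CG + GAMG as in the option
   list above. Assumes Mat A and Vec b, x are assembled elsewhere;
   this is not a complete program. */
PetscErrorCode solve_with_gamg(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);

  /* Remaining options are pushed into the options database,
     mirroring the command line given above. */
  ierr = PetscOptionsSetValue(NULL, "-pc_gamg_agg_nsmooths", "1");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-pc_gamg_threshold", "0.02");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-mg_levels_ksp_type", "chebyshev");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-mg_levels_pc_type", "sor");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue(NULL, "-mg_levels_ksp_max_it", "2");CHKERRQ(ierr);

  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* pick up the options set above */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
```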

Unfortunately, the preconditioner setup runs extremely slowly. I tried varying the parameters to see whether it made any difference, but observed no significant change.
As a comparison, the KSPSetUp call with the GAMG PC takes more than 10 times longer than completing the whole computation (preconditioning + ~400 KSP iterations to convergence) of the same case using the following parameters:
-ksp_type cg
-pc_type ilu
-pc_factor_levels 0

The matrix size for my case is ~1,850,000 x 1,850,000, with ~38,000,000 non-zero entries (i.e. ~20 per row). For both the ILU and AMG cases I use MATSEQAIJ/VECSEQ storage (as a first step, I am working with a single MPI process).

Is there something wrong with the parameter set I have been using?
I understand that the preconditioning overhead with AMG is higher than with ILU, but I would still expect CG/GAMG to be competitive with CG/ILU, especially given the relatively large problem size.

For reference, I am using PETSc built from commit 6840fe907c1a3d26068082d180636158471d79a2 (release branch, April 7, 2021).

Any clue or idea would be greatly appreciated!
Thanks for your help,

Best regards,
Milan Pelletier