[petsc-users] PETSc 3.14 to PETSc 3.20: Different (slower) convergence for classical AMG (sequential and especially in parallel)

Mark Adams mfadams at lbl.gov
Thu Dec 14 10:50:59 CST 2023


On Thu, Dec 14, 2023 at 10:15 AM LEDAC Pierre <Pierre.LEDAC at cea.fr> wrote:

> Hello Mark,
>
>
> Thanks for your answer. Indeed, I had missed the information that
> classical AMG is not really supported:
>
>
>  -solver2_pc_gamg_type <now classical : formerly agg>: Type of AMG method (only
> 'agg' supported and useful) (one of) classical geo agg (PCGAMGSetType)
>
> We very recently switched from GAMG("agg") to GAMG("classical") for a weak
> scaling test up to 32000 cores, where we saw very good scalability with
> GAMG("classical") compared to GAMG("agg"). But that was with PETSc 3.14...
>

AMG is sensitive to parameters.
What PDE and discretization are you solving?
For example, I recently optimized the Q2 Laplacian benchmark and found good
scaling with
-pc_gamg_threshold 0.04 -pc_gamg_threshold_scale .25
Hypre scaled well without optimization (see below).
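
On the command line, those GAMG settings might look like this (a minimal
sketch; the executable name and process count are placeholders, not from
this thread):

  mpiexec -n 8 ./my_app -ksp_type cg -pc_type gamg \
    -pc_gamg_threshold 0.04 -pc_gamg_threshold_scale .25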


>
> So today, we are going to upgrade to 3.20 and focus on GAMG("agg") or
> Hypre classical AMG. We will see how they compare.
>

You might want to update to v3.20.2, which has some of my recent GAMG
updates.


> May I ask your point of view on the current state of the GPU versions of
> GAMG("agg") versus Hypre classical AMG?
>

Hypre is well supported, with several developers working on it over decades,
whereas I essentially just maintain GAMG, plus I add some things like the
anisotropy support I am working on currently.
That said, GAMG is built on PETSc's sparse linear algebra, which, like
hypre's, is well supported, and we have several good people working on that.

TL;DR:
Both run the solve and matrix setup phases on the GPU.
Hypre also puts the graph setup phase on the GPU, but this phase is 1) not
well suited to GPUs and 2) amortized in most applications (it is done only
once).
GAMG is easier to deal with because it is built in, and in my experience the
interface to hypre can be fragile with respect to GPUs (e.g., if you use
'-mat_type hypre').
If performance is critical and you have the time to put into it, hypre is a
good option, and GAMG can be a backup.
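
If you try hypre from PETSc, the usual route is through the PC interface
rather than '-mat_type hypre' (a minimal sketch; only the two hypre options
are the point here):

  -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg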


>
> In fact, the reason for our move from 3.14 to 3.20 is to take advantage of
> all the progress in PETSc and Hypre on accelerated solvers/preconditioners
> over the last two years.
>
>
And I can give you advice on GAMG parameters if you send me the output from
'-info :pc' (filtered with 'grep GAMG').
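
Something like the following (a minimal sketch; './my_app' is a placeholder
for your executable):

  ./my_app -ksp_type cg -pc_type gamg -info :pc | grep GAMG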

Thanks,
Mark


> Greatly appreciate your help,
>
> Pierre LEDAC
> Commissariat à l’énergie atomique et aux énergies alternatives
> Centre de SACLAY
> DES/ISAS/DM2S/SGLS/LCAN
> Bâtiment 451 – point courrier n°43
> F-91191 Gif-sur-Yvette
> +33 1 69 08 04 03
> +33 6 83 42 05 79
> ------------------------------
> *From:* Mark Adams <mfadams at lbl.gov>
> *Sent:* Wednesday, December 13, 2023 20:54:17
> *To:* LEDAC Pierre
> *Cc:* petsc-users at mcs.anl.gov; BRUNETON Adrien
> *Subject:* Re: [petsc-users] PETSc 3.14 to PETSc 3.20: Different (slower)
> convergence for classical AMG (sequential and especially in parallel)
>
> Hi Pierre,
>
> Sorry, I missed this post; your issues were brought to my attention
> today.
>
> First, the classical version is not well supported. The postdoc who wrote
> the code is long gone and I don't know the code at all.
> It is really a reference implementation that someone could build on; it is
> not meant for production.
> In 10 years you are the first user who has contacted us about it.
>
> The hypre package is a very good AMG solver and it uses classical AMG as
> the main solver.
> I wrote GAMG ("agg"), which is a smoothed aggregation AMG solver and is
> very different from classical AMG.
> I would suggest you move to hypre or '-pc_gamg_type agg'.
>
> The coarsening code saw a lot of churn in this time frame: a new strategy
> for aggressive coarsening did not work well for some users, so I had to add
> the old method back in and then made it the default (again).
> This change missed v3.20, but you can get the old aggressive strategy with
> '-pc_gamg_aggressive_square_graph'.
> Run with -options_left to verify that the option is being used.
>
> As for your output (nice formatting, thank you), the coarse grid is
> smaller in the new code:
>             rows=41, cols=41   |          rows=30, cols=30
> The "square graph" option should fix this.
>
> You can also try turning off aggressive coarsening entirely with
> '-pc_gamg_aggressive_coarsening 0', as in the sketch below.
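>
> For example (a minimal sketch; combine either line with your existing
> solver options):
>
>   -ksp_type cg -pc_type gamg -pc_gamg_aggressive_square_graph -options_left
>   -ksp_type cg -pc_type gamg -pc_gamg_aggressive_coarsening 0 -options_left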
>
> Let me know how it goes and let's try to get you into a more sustainable
> state ... I really try not to change this code but sometimes need to.
>
> Thanks,
> Mark
>
>
>
>
>
> On Mon, Oct 9, 2023 at 10:43 AM LEDAC Pierre <Pierre.LEDAC at cea.fr> wrote:
>
>> Hello all,
>>
>>
>> I am struggling to get the same convergence (iteration counts) when using
>> classical algebraic multigrid in my code with PETSc 3.20 compared to
>> PETSc 3.14.
>>
>>
>> To solve a Poisson system, I am using:
>>
>> -ksp_type cg -pc_type gamg -pc_gamg_type classical
>>
>>
>> I read the release notes for the versions between 3.15 and 3.20:
>>
>> https://petsc.org/release/changes/317
>>
>> https://petsc.org/main/manualpages/PC/PCGAMGSetThreshold/
>>
>>
>> and had a look at the mailing list archive (especially this thread:
>> https://www.mail-archive.com/petsc-users@mcs.anl.gov/msg46688.html),
>>
>> so I added some other options to try to get the same behaviour as with
>> PETSc 3.14:
>>
>> -ksp_type cg -pc_type gamg -pc_gamg_type classical -mg_levels_pc_type
>> sor -pc_gamg_threshold 0.
>>
>>
>> This improves the convergence, but it still differs (26 vs 18 iterations).
>>
>> On another of my test cases, the number of levels is also different (e.g.
>> 6 vs 4); there the convergence is the same, but with a different coarsening
>> according to the output of the -ksp_view option.
>>
>> The main point is that the convergence degrades dramatically in parallel
>> on a third test case, so unfortunately I cannot upgrade to PETSc 3.20 for
>> now.
>>
>> I am sending you the partial report (petsc_314_vs_petsc_320.ksp_view) with
>> the -ksp_view output (left: PETSc 3.14, right: PETSc 3.20) and the
>> configure/command line options used (in the petsc_XXX_petsc.TU files).
>>
>>
>> Could my issue be related to the following 3.18 changes? I have not tried
>> the first one.
>>
>>
>>    - Remove PCGAMGSetSymGraph() and -pc_gamg_sym_graph. The user should
>>      now indicate symmetry and structural symmetry using MatSetOption()
>>      <https://petsc.org/release/manualpages/Mat/MatSetOption/>, and GAMG
>>      will symmetrize the graph if a symmetry option is not set (see the
>>      sketch after this list).
>>    - Change the -pc_gamg_reuse_interpolation default from false to true.
>>
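>> A minimal C sketch of the first change (an illustration, assuming A is
>> your assembled system matrix; MAT_SYMMETRIC and MAT_SYMMETRY_ETERNAL are
>> standard MatOption values):
>>
>>   #include <petscmat.h>
>>   /* ... after assembling the system matrix A ... */
>>   /* declare the operator symmetric so GAMG need not symmetrize the graph */
>>   PetscCall(MatSetOption(A, MAT_SYMMETRIC, PETSC_TRUE));
>>   /* promise that the symmetry persists across later assemblies */
>>   PetscCall(MatSetOption(A, MAT_SYMMETRY_ETERNAL, PETSC_TRUE));
>>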
>>
>> Any advice would be greatly appreciated,
>>
>>
>> Pierre LEDAC
>> Commissariat à l’énergie atomique et aux énergies alternatives
>> Centre de SACLAY
>> DES/ISAS/DM2S/SGLS/LCAN
>> Bâtiment 451 – point courrier n°43
>> F-91191 Gif-sur-Yvette
>> +33 1 69 08 04 03
>> +33 6 83 42 05 79
>>
>