[petsc-dev] Bad scaling of GAMG in FieldSplit

Mark Adams mfadams at lbl.gov
Fri Jul 27 07:56:22 CDT 2018


This is a complex shifted Laplacian (shifted into the parabolic, definite
Helmholtz regime, I assume).
GAMG's default parameters can do strange things on a mass matrix (the limit
of this shift, except for the complex part).

Please run with -info and send me the output (big), or just the lines that
match GAMG (small). This will give details of the grid metadata.
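
For example (a sketch; the executable name, process count, and existing
options are placeholders for your actual run):

# ./your_app, -n 512, and <your usual options> are placeholders
mpiexec -n 512 ./your_app <your usual options> -info 2>&1 | grep GAMG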

GAMG should work for complex matrices, but the defaults are probably not
good. GAMG (and ML) use eigenvalue estimates (for the Chebyshev smoothers
and for the prolongator smoothing in smoothed aggregation), which are not
great with complex problems. You can avoid one eigenvalue estimate, the one
for the Chebyshev smoothers, with:

-st_fieldsplit_pressure_sub_1_ksp_mg_levels_ksp_type richardson
-st_fieldsplit_pressure_sub_1_ksp_mg_levels_pc_type sor

And use -options_left to make sure this crazy parameter syntax is correct.

You can get rid of the other eigenvalue estimate, the one used to smooth
the prolongator, and get a simple, robust, but non-optimal solver with:

-st_fieldsplit_pressure_sub_1_ksp_pc_gamg_agg_nsmooths 0

This is plain (unsmoothed) aggregation.
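
Putting the two suggestions together, the pressure sub-solve part of your
command line would look something like this (just a sketch; the prefix is
copied from the options you posted and everything else stays as is):

-st_fieldsplit_pressure_sub_1_ksp_mg_levels_ksp_type richardson
-st_fieldsplit_pressure_sub_1_ksp_mg_levels_pc_type sor
-st_fieldsplit_pressure_sub_1_ksp_pc_gamg_agg_nsmooths 0
-options_left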


On Thu, Jul 26, 2018 at 9:39 AM Pierre Jolivet <pierre.jolivet at enseeiht.fr>
wrote:

> Hello,
> I’m using GAMG on a shifted Laplacian with these options:
> -st_fieldsplit_pressure_ksp_type preonly
> -st_fieldsplit_pressure_pc_composite_type additive
> -st_fieldsplit_pressure_pc_type composite
> -st_fieldsplit_pressure_sub_0_ksp_pc_type jacobi
> -st_fieldsplit_pressure_sub_0_pc_type ksp
> -st_fieldsplit_pressure_sub_1_ksp_pc_gamg_square_graph 10
> -st_fieldsplit_pressure_sub_1_ksp_pc_type gamg
> -st_fieldsplit_pressure_sub_1_pc_type ksp
>
> and I end up with the following logs on 512 (top) and 2048 (bottom)
> processes:
> MatMult          1577790 1.0 3.1967e+03 1.2 4.48e+12 1.6 7.6e+09 5.6e+03
> 0.0e+00  7 71 75 63  0   7 71 75 63  0 650501
> MatMultAdd        204786 1.0 1.3412e+02 5.5 1.50e+10 1.7 5.5e+08 2.7e+02
> 0.0e+00  0  0  5  0  0   0  0  5  0  0 50762
> MatMultTranspose  204786 1.0 4.6790e+01 4.3 1.50e+10 1.7 5.5e+08 2.7e+02
> 0.0e+00  0  0  5  0  0   0  0  5  0  0 145505
> [..]
> KSPSolve_FS_3       7286 1.0 7.5506e+02 1.0 9.14e+11 1.8 7.3e+09 1.5e+03
> 2.6e+05  2 14 71 16 34   2 14 71 16 34 539009
>
> MatMult          1778795 1.0 3.5511e+03 4.1 1.46e+12 1.9 4.0e+10 2.4e+03
> 0.0e+00  7 66 75 61  0   7 66 75 61  0 728371
> MatMultAdd        222360 1.0 2.5904e+03 48.0 4.31e+09 1.9 2.4e+09 1.3e+02
> 0.0e+00 14  0  4  0  0  14  0  4  0  0  2872
> MatMultTranspose  222360 1.0 1.8736e+03 421.8 4.31e+09 1.9 2.4e+09 1.3e+02
> 0.0e+00  0  0  4  0  0   0  0  4  0  0  3970
> [..]
> KSPSolve_FS_3       7412 1.0 2.8939e+03 1.0 2.66e+11 2.1 3.5e+10 6.1e+02
> 2.7e+05 17 11 67 14 28  17 11 67 14 28 148175
>
> MatMultAdd and MatMultTranspose (performed by GAMG) somehow ruin the
> scalability of the overall solver. The pressure space “only” has 3M
> unknowns, so I’m guessing that’s why GAMG is having a hard time strong
> scaling. For the other fields, the matrix is already distributed nicely,
> i.e., I don’t want to change the overall distribution of the matrix.
> Do you have any suggestions to improve the performance of GAMG in that
> scenario? I have two ideas in mind, but please correct me if I’m wrong or
> if they are not doable:
> 1) before setting up GAMG, first use PCTELESCOPE to avoid having too many
> processes work on this small problem (see the sketch after 2) below)
> 2) have the sub_0_ and the sub_1_ work on two different nonoverlapping
> communicators of size PETSC_COMM_WORLD/2, do the solve concurrently, and
> then sum the solutions (only worth doing because of -pc_composite_type
> additive). I have no idea if this is easily doable with PETSc command-line
> arguments.
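> Roughly what I have in mind for 1), assuming PCTELESCOPE’s usual option
> names and an arbitrary reduction factor (untested, so just a sketch):
> -st_fieldsplit_pressure_sub_1_ksp_pc_type telescope
> -st_fieldsplit_pressure_sub_1_ksp_pc_telescope_reduction_factor 4
> -st_fieldsplit_pressure_sub_1_ksp_telescope_pc_type gamg
> -st_fieldsplit_pressure_sub_1_ksp_telescope_pc_gamg_square_graph 10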
>
> Thanks in advance for your guidance,
> Pierre