[petsc-users] Effect of -pc_gamg_threshold vs PETSc version
Mark Adams
mfadams at lbl.gov
Sat Apr 15 07:05:10 CDT 2023
On Fri, Apr 14, 2023 at 8:54 AM Jeremy Theler <jeremy at seamplex.com> wrote:
> Hi Mark. So glad you answered.
>
> > 0) what is your test problem? e.g., 3D Laplacian with Q1 finite
> > elements.
>
> I said in my first email it was linear elasticity (and I gave a link
> where you can see the geometry, BCs, etc.) but I did not specify
> further details.
>
OK, this is fine. Nice 3D tet mesh with P2 (P1 will lock). I assume you
have a benign Poisson ratio.
> It is linear elasticity with displacement-based FEM formulation using
> unstructured curved 10-noded tetrahedra.
>
> The matrix is marked as SPD with MatSetOption() and the solver is
> indeed CG and not the default GMRES.
>
>
good
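For anyone else reading along, a minimal sketch of what that setup looks like
on the PETSc side (A, coords, and ksp are placeholder names here, not Jeremy's
actual code):

    MatNullSpace nearnull;
    PetscCall(MatSetOption(A, MAT_SPD, PETSC_TRUE));            /* mark the operator SPD so CG is valid */
    PetscCall(MatNullSpaceCreateRigidBody(coords, &nearnull));  /* 6 rigid-body modes from nodal coordinates */
    PetscCall(MatSetNearNullSpace(A, nearnull));                /* hand the near nullspace to GAMG */
    PetscCall(MatNullSpaceDestroy(&nearnull));
    PetscCall(KSPSetOperators(ksp, A, A));
    PetscCall(KSPSetType(ksp, KSPCG));                          /* CG instead of the default GMRES */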
> > First, you can get GAMG diagnostics by running with '-info :pc' and
> > grep on GAMG.
>
> Great advice. Now I have a lot more information but I'm not sure how
> to analyze it. Find attached, for each combination of threshold and
> PETSc version, the output of -info :pc -ksp_monitor -ksp_view
>
>
(new_py-env) 07:43 ~/Downloads/log 2$ grep "grid complexity" *
....
infopc-0.02-17.log:[0] PCSetUp_GAMG(): (null): 7 levels, grid complexity = 1.28914
infopc-0.02-17.log: Complexity: grid = 1.04361 operator = 1.28914
infopc-0.02-18.log: Complexity: grid = 1.05658 operator = 1.64555
infopc-0.02-19.log: Complexity: grid = 1.05658 operator = 1.64555
I was using "grid complexity" and changed to the accepted term "operator"
complexity.
You can see that the new coarsening method is pretty different on this
problem.
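(Operator complexity here is the total number of nonzeros summed over all
levels divided by the nonzeros of the fine-grid matrix, so ~1.29 vs ~1.65
means the newer coarsening carries noticeably more work on the coarse levels.)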
For an isotropic problem like this a zero threshold is a good place to
start, but you can use it to tune.
> In general it looks like 3.18 and 3.19 need fewer KSP iterations than
> 3.17, but the overall time is larger.
>
We need to see the solve times.
Run with -log_view and grep for KSPSolve.
We can look at the setup time separately.
In practice the setup time is amortized unless you use a full Newton
nonlinear solver.
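For example (executable name and log file name are just placeholders):

    $ ./my_solver <usual options> -log_view > run-0.0001-19.log
    $ grep KSPSolve run-0.0001-19.log    # time spent in the solve itself
    $ grep PCSetUp  run-0.0001-19.log    # GAMG setup time, reported separately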
Your iteration counts are reasonable. They go up a little with Jacobi.
Here are your grid sizes:
(new_py-env) 07:55 ~/Downloads/log 2$ grep N= infopc-0.0001-19.log
[0] <pc> PCSetUp_GAMG(): (null): level 0) N=568386, n data rows=3, n data cols=6, nnz/row (ave)=82, np=1
[0] <pc> PCSetUp_GAMG(): (null): 1) N=22206, n data cols=6, nnz/row (ave)=445, 1 active pes
[0] <pc> PCSetUp_GAMG(): (null): 2) N=2628, n data cols=6, nnz/row (ave)=1082, 1 active pes
[0] <pc> PCSetUp_GAMG(): (null): 3) N=180, n data cols=6, nnz/row (ave)=180, 1 active pes
[0] <pc> PCSetUp_GAMG(): (null): 4) N=18, n data cols=6, nnz/row (ave)=18, 1 active pes
[0] <pc> PCSetUp_GAMG(): (null): PCSetUp_GAMG: call KSPChebyshevSetEigenvalues on level 3 (N=180) with emax = 2.0785 emin = 0.0468896
[0] <pc> PCSetUp_GAMG(): (null): PCSetUp_GAMG: call KSPChebyshevSetEigenvalues on level 2 (N=2628) with emax = 1.78977 emin = 5.91431e-08
[0] <pc> PCSetUp_GAMG(): (null): PCSetUp_GAMG: call KSPChebyshevSetEigenvalues on level 1 (N=22206) with emax = 2.12541 emin = 4.27581e-09
[0] <pc> PCSetUp_GAMG(): (null): PCSetUp_GAMG: call KSPChebyshevSetEigenvalues on level 0 (N=568386) with emax = 3.42185 emin = 0.0914406
(new_py-env) 07:56 ~/Downloads/log 2$ grep N= infopc-0.0001-15.log
[0] PCSetUp_GAMG(): level 0) N=568386, n data rows=3, n data cols=6, nnz/row (ave)=82, np=1
[0] PCSetUp_GAMG(): 1) N=15642, n data cols=6, nnz/row (ave)=368, 1 active pes
[0] PCSetUp_GAMG(): 2) N=1266, n data cols=6, nnz/row (ave)=468, 1 active pes
[0] PCSetUp_GAMG(): 3) N=108, n data cols=6, nnz/row (ave)=108, 1 active pes
[0] PCSetUp_GAMG(): 4) N=12, n data cols=6, nnz/row (ave)=12, 1 active pes
The new version coarsens slower.
BTW, use something like -pc_gamg_coarse_eq_limit 1000; your coarse grids
are too small.
You can grep on MatLUFactor to check that the coarse-grid solve/factor is
under control, but 1000 in 3D is pretty conservative.
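E.g., in the same -log_view output as above:

    $ grep MatLUFactor run-0.0001-19.log   # time spent factoring the coarse grid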
I am sure you are going to want more aggressive coarsening (newer
versions): -pc_gamg_aggressive_coarsening <1>
Just try 10 (i.e., all levels) to start.
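Something along these lines would be a reasonable starting point to experiment
with (the values are just a first guess, tune from there):

    -ksp_type cg -pc_type gamg -pc_gamg_threshold 0.0 \
        -pc_gamg_coarse_eq_limit 1000 -pc_gamg_aggressive_coarsening 10 \
        -info :pc -log_view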
Mark
> > Anyway, sorry for the changes.
> > I hate changing GAMG for this reason and I hate AMG for this reason!
>
> No need to apologize, I just want to better understand how to
> exploit your code!
>
> Thanks
> --
> jeremy
>
> >
> > Thanks,
> > Mark
> >
> >
> >
> > On Thu, Apr 13, 2023 at 8:17 AM Jeremy Theler <jeremy at seamplex.com>
> > wrote:
> > > When using GAMG+cg for linear elasticity and providing the near
> > > nullspace computed by MatNullSpaceCreateRigidBody(), I used to find
> > > "experimentally" that a small value of -pc_gamg_threshold in the
> > > order
> > > of 0.0001 would slightly decrease the solve time.
> > >
> > > Starting with 3.18, I started seeing that any positive value for
> > > the
> > > threshold would increase the solve time. I did a quick parametric
> > > (serial) run solving an elastic problem with a matrix size of
> > > approx
> > > 570k x 570k for different values of GAMG threshold and different
> > > PETSc
> > > versions (compiled with the same compiler, options and flags).
> > >
> > > I noted that
> > >
> > > 1. starting from 3.18, a threshold of 0.0001 that used to improve
> > > the
> > > speed now worsens it.
> > > 2. PETSc 3.17 looks like a "sweet spot" of speed
> > >
> > > I would like to hear any comments you might have.
> > >
> > > The wall time shown includes the time needed to read the mesh and
> > > assemble the stiffness matrix. It is a refined version of the
> > > NAFEMS
> > > LE10 benchmark described here:
> > > https://seamplex.com/feenox/examples/mechanical.html#nafems-le10-thick-plate-pressure-benchmark
> > >
> > > If you want, I could dump the matrix, rhs and near nullspace
> > > vectors
> > > and share them.
> > >
> > > --
> > > jeremy theler
> > >
>
>