<div dir="ltr"><div>Hi Jeremy,</div><div><br></div><div>We did make some changes for performance reasons that we could not avoid, but I have never seen anything like this, so let's dig into it.</div><div><br></div><div>0) what is your test problem? eg, 3D Lapacian with Q1 finite elements.</div><div><br></div><div>First, you can get GAMG diagnostics by running with '-info :pc' and grep on GAMG.</div><div><br></div><div>Second, you are going to want to look at <b>iteration count</b> and <b>solve times, </b>and<b> </b>you want to separate the solve time (KSPSolve) and the GAMG setup time. </div><div>If you have your own timer dig into -log_view data and get the "KSPSolve" time (solve time) and "RAP" or "P'AP" for the setup time.</div><div>You could run one warm up solve and time a second one separately. That is what I do.</div><div><div><br></div></div><div><u>Iteration count:</u></div><div>You want to look at the eigen estimates for chebyshev. </div><div>If you have an SPD problem then you want to use CG and not the default GMRES.</div><div>If the eigen estimates are low GAMG convergence can suffer, but this is ussually catastrphic.</div><div><i>If your interation counts increase dramatically then this could be the issue.</i></div><div><br></div><div><u>Time / iteration and setup time:</u></div><div>You can also see the grid sizes and number of nnz/row (ave). This will effect time/iteration and setip time</div><div><br></div><div>3.17) In looking at the change logs for 3.17 (<a href="https://petsc.org/main/changes/317/#:~:text=maximum%20of%20ten-,PCMG,-%3A">https://petsc.org/main/changes/317/#:~:text=maximum%20of%20ten-,PCMG,-%3A</a>) we made few changes:</div><div>* moved default smoothing to Jacobi from SOR because Jacobi works on GPUs</div><div>* Some eigen estimate changes that you should look at. You should add the MatOptions if your matrix is SPD especially.</div><div><br></div><div>SOR ussually converges faster, but is slower per interation.</div><div><i>Maybe Jacobi runs a lot faster for you.</i></div><div><br></div><div>+ Check iteration counts<br></div><div>+ Check the eigen estimates did not change. If they did then we can dig into that<br></div><div><br></div><div>3.18) big chagnes: <a href="https://petsc.org/main/changes/318/#:~:text=based%20aggregation%20algorithm-,PC,-%3A">https://petsc.org/main/changes/318/#:~:text=based%20aggregation%20algorithm-,PC,-%3A</a></div>* Some small things but <i>the <b>-pc_gamg_sym_graph </b>bullet might be (one of) your problem(s)</i>. Related to the MatOptions bullet above<div>* The "aggressive" coarsening stratagy (use to be called "square_graph" but the old syntax is supported) is different because the old was was very slow.</div><div> I have noticed that the rate of coarsening changes a little with the new method, but not much.</div><div><i> But the way threshould works with the new method is a bit different so that could explain some of this.</i></div><div> (new method calls MIS twice; old method call MIS on A'A) </div><div><br></div><div>** There are two things that you want to check:</div><div>1) Eigen estimates. If Eigen estimates are too small interation counts can increase a lot or ussually the solver just fails.</div><div><b> See if there are any changes in the eigen estimates for chebyshev</b></div><div>2) Rate of coarening, which effects the number of NNZ per row. 
<div><br></div><div>Anyway, sorry for the changes.</div><div>I hate changing GAMG for this reason, and I hate AMG for this reason!</div><div><br></div><div>Thanks,</div><div>Mark</div><div><br></div><div><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Thu, Apr 13, 2023 at 8:17 AM Jeremy Theler <<a href="mailto:jeremy@seamplex.com">jeremy@seamplex.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">When using GAMG+cg for linear elasticity and providing the near<br>
nullspace computed by MatNullSpaceCreateRigidBody(), I used to find<br>
"experimentally" that a small value of -pc_gamg_threshold in the order<br>
of 0.0001 would slightly decrease the solve time.<br>
<br>
Starting with 3.18, I started seeing that any positive value for the<br>
threshold would increase the solve time. I did a quick parametric<br>
(serial) run solving an elastic problem with a matrix size of approx<br>
570k x 570k for different values of GAMG threshold and different PETSc<br>
versions (compiled with the same compiler, options and flags).<br>
<br>
I noted that<br>
<br>
1. starting from 3.18, a threshold of 0.0001 that used to improve the<br>
speed now worsens it. <br>
2. PETSc 3.17 looks like a "sweet spot" of speed<br>
<br>
I would like to hear any comments you might have.<br>
<br>
The wall time shown includes the time needed to read the mesh and<br>
assemble the stiffness matrix. It is a refined version of the NAFEMS<br>
LE10 benchmark described here:<br>
<a href="https://seamplex.com/feenox/examples/mechanical.html#nafems-le10-thick-plate-pressure-benchmark" rel="noreferrer" target="_blank">https://seamplex.com/feenox/examples/mechanical.html#nafems-le10-thick-plate-pressure-benchmark</a><br>
<br>
If you want, I could dump the matrix, rhs and near nullspace vectors<br>
and share them.<br>
<br>
--<br>
jeremy theler<br>
<br>
</blockquote></div></div>