<div dir="ltr"><div dir="ltr">Hi Daniel, [this is a PETSc users list question so let me move it there]</div><div dir="ltr"><br></div><div dir="ltr">The behavior that you are seeing is a bit odd but not surprising.</div><div dir="ltr"><br></div><div dir="ltr">First, you should start with simple problems and get AMG (you might want to try this exercise with hypre as well: --download-hypre and use -pc_type hypre, or BDDC, see below).</div><div dir="ltr"><br></div><div dir="ltr">There are, alas, a lot of tuning parameters in AMG/DD and I recommend a homotopy process: you can start with issues that deal with your discretization on a simple cube, linear elasticity, cube elements, modest Posson ratio, etc., and first get "textbook multigrid efficiency" (TME), which for elasticity and a V(2,2) cycle in GAMG is about one digit of error reduction per iteration and perfectly monotonic until it hits floating point precision.</div><div dir="ltr"><br></div><div>I would set this problem up and I would hope it runs OK, but the problems that you want to do are probably pretty hard (high order FE, plasticity, incompressibility) so there will be more work to do.</div><div><br></div><div>That said, PETSc has nice domain decomposition solvers that are more optimized and maintained for elasticity. Now that I think about it, you should probably look at these (<a href="https://petsc.org/release/docs/manualpages/PC/PCBDDC.html">https://petsc.org/release/docs/manualpages/PC/PCBDDC.html</a> <a href="https://petsc.org/release/docs/manual/ksp/#balancing-domain-decomposition-by-constraints">https://petsc.org/release/docs/manual/ksp/#balancing-domain-decomposition-by-constraints</a>). I think they prefer, but do not require, that you do not assemble your element matrices, but let them do it. The docs will make that clear. </div><div><br></div><div>BSSC is great but it is not magic, and it is no less complex, so I would still recommend the same process of getting TME and then moving to the problems that you want to solve.</div><div><br></div><div>Good luck,</div><div>Mark</div><div dir="ltr"><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Oct 16, 2021 at 10:50 PM Daniel N Pickard <<a href="mailto:pickard@mit.edu">pickard@mit.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr" style="font-size:12pt;color:rgb(0,0,0);background-color:rgb(255,255,255);font-family:Calibri,Arial,Helvetica,sans-serif">
> Hi Dr. Adams,
>
> I am using GAMG in PETSc to solve some elasticity problems for modeling bones. I am new to profiling with PETSc, but I am observing that around a thousand iterations my norm has gone down 3 orders of magnitude, but the solver slows down and progress sort of stalls. The norm also doesn't decrease monotonically, but jumps around a bit. I also notice that if I request only 1 multigrid level, the preconditioner is much cheaper and not as powerful, so the code takes more iterations but runs 2-3x faster. Is it expected that large models require lots of iterations and convergence slows down as we get more accurate? What exactly should I be looking for when I am profiling to try to understand how to run faster? I see that a lot of my ratios are 2.7, but I think that is because my mesh partitioner is not doing a great job making equal domains. What are the giveaways in the log_view that tell you that PETSc could be optimized more?
>
> Also, when I look at the solution with just 4 orders of magnitude of convergence, I can see that the solver has not made much progress in the interior of the domain, but seems to have smoothed out the boundary where forces were applied very well. Does this mean I should use a larger threshold to get more coarse grids that can fix the low frequency error?
>
> Thanks,
> Daniel Pickard