[petsc-users] gamg student questions

Mark Adams mfadams at lbl.gov
Sun Oct 17 08:04:33 CDT 2021


Hi Daniel, [this is a PETSc users list question so let me move it there]

The behavior that you are seeing is a bit odd but not surprising.

First, you should start with simple problems and get AMG working well on
them (you might want to try this exercise with hypre as well: configure
with --download-hypre and use -pc_type hypre, as sketched below, or with
BDDC, see below).
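
For example, a minimal sketch ("./my_app" is a placeholder for your
executable; the option names are standard PETSc/hypre options):

  # configure PETSc with hypre
  ./configure --download-hypre

  # run with BoomerAMG as the preconditioner and watch convergence
  ./my_app -ksp_type cg -pc_type hypre -pc_hypre_type boomeramg \
      -ksp_monitor_true_residual -ksp_converged_reason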

There are, alas, a lot of tuning parameters in AMG/DD, and I recommend a
homotopy process: start with your discretization on a simple problem
(linear elasticity on a cube, cube elements, a modest Poisson ratio,
etc.) and first get "textbook multigrid efficiency" (TME), which for
elasticity and a V(2,2) cycle in GAMG is about one digit of error
reduction per iteration, perfectly monotonic until it hits floating
point precision.
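
For the elasticity test, note that GAMG needs the near-null space (the
rigid body modes) to coarsen well. A minimal sketch, assuming 'A' is
your assembled stiffness matrix and 'coords' is a Vec holding the nodal
coordinates:

  /* Give GAMG the rigid body modes as the near-null space. */
  MatNullSpace   nearnull;
  PetscErrorCode ierr;
  /* block size = dofs per node (3 in 3D); set before assembly */
  ierr = MatSetBlockSize(A, 3);CHKERRQ(ierr);
  ierr = MatNullSpaceCreateRigidBody(coords, &nearnull);CHKERRQ(ierr);
  ierr = MatSetNearNullSpace(A, nearnull);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nearnull);CHKERRQ(ierr);

and run with something like (a V(2,2)-style setup):

  -ksp_type cg -pc_type gamg -pc_gamg_agg_nsmooths 1 \
      -mg_levels_ksp_type chebyshev -mg_levels_pc_type jacobi \
      -mg_levels_ksp_max_it 2 -ksp_monitor_true_residual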

I would set this model problem up first, and I would hope it runs OK.
The problems that you actually want to solve are probably pretty hard
(high-order FE, plasticity, incompressibility), so there will be more
work to do after that.

That said, PETSc has nice domain decomposition solvers that are better
optimized and maintained for elasticity. Now that I think about it, you
should probably look at these:
https://petsc.org/release/docs/manualpages/PC/PCBDDC.html
https://petsc.org/release/docs/manual/ksp/#balancing-domain-decomposition-by-constraints
I think they prefer, but do not require, that you not assemble your
element matrices globally, and instead let the solver do it. The docs
will make that clear; a sketch of the unassembled setup follows.
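
A minimal sketch of that unassembled (MATIS) setup, assuming 'map' is an
ISLocalToGlobalMapping from subdomain-local to global dofs and 'elemMat'
/ 'idx' are one element's matrix and its local indices:

  /* PCBDDC wants the operator in MATIS format: each rank keeps its
     own local subdomain matrix instead of a globally assembled one. */
  Mat            A;
  PetscErrorCode ierr;
  ierr = MatCreateIS(PETSC_COMM_WORLD, 3, PETSC_DECIDE, PETSC_DECIDE,
                     M, N, map, map, &A);CHKERRQ(ierr);
  /* insert element matrices with local indices, as in normal assembly */
  ierr = MatSetValuesLocal(A, nen, idx, nen, idx, elemMat,
                           ADD_VALUES);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  /* then run with: -ksp_type cg -pc_type bddc */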

BDDC is great but it is not magic, and it is no less complex than AMG,
so I would still recommend the same process: get TME first, then move on
to the problems that you want to solve.

Good luck,
Mark



On Sat, Oct 16, 2021 at 10:50 PM Daniel N Pickard <pickard at mit.edu> wrote:

> Hi Dr Adams,
>
>
> I am using the gamg in petsc to solve some elasticity problems for
> modeling bones. I am new to profiling with petsc, but I am observing
> that after around a thousand iterations my norm has gone down 3 orders
> of magnitude, but then the solver slows down and progress sort of
> stalls. The norm also doesn't decrease monotonically, but jumps around
> a bit. I also notice that if I request only 1 multigrid level, the
> preconditioner is much cheaper and not as powerful, so the code takes
> more iterations but runs 2-3x faster. Is it expected that large models
> require lots of iterations and that convergence slows down as we get
> more accurate? What exactly should I be looking for when profiling to
> try to understand how to run faster? I see that a lot of my ratios are
> 2.7, but I think that is because my mesh partitioner is not doing a
> great job making equal domains. What are the giveaways in the -log_view
> output that tell you that petsc could be optimized more?
>
>
> Also, when I look at the solution with just 4 orders of magnitude of
> convergence, I can see that the solver has not made much progress in
> the interior of the domain, but seems to have smoothed out the boundary
> where forces were applied very well. Does this mean I should use a
> larger threshold to get coarser grids that can fix the low-frequency
> error?
>
>
> Thanks,
>
> Daniel Pickard
>

