[petsc-users] Euclid or Boomeramg vs ILU: questions.

Mark Adams mfadams at lbl.gov
Fri Aug 20 13:21:29 CDT 2021


Constraints are a pain with scalable/iterative solvers. If you order the
constraints last, then ILU should work as well as it can, but AMG gets
confused by the constraint equations.
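One way around that, if the multipliers stay in the system, is to split them
into their own block so that AMG only sees the displacement equations. A
rough sketch of the options (the zero-diagonal multiplier rows are detected
automatically; these particular Schur settings are only a starting point and
will need tuning for your problem):

  -ksp_type fgmres
  -pc_type fieldsplit -pc_fieldsplit_detect_saddle_point
  -pc_fieldsplit_type schur -pc_fieldsplit_schur_fact_type full
  -pc_fieldsplit_schur_precondition selfp
  -fieldsplit_0_pc_type hypre -fieldsplit_0_pc_hypre_type boomeramg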
You could look at PETSc's Stokes solvers, but it would be best if you could
remove the constrained equations from your system if they are just simple
pointwise BCs.
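For simple pointwise Dirichlet conditions that elimination can be done on the
already assembled system instead of adding multiplier equations. A minimal
sketch in C (nbc, bc_rows, and bc_vals are placeholder names for your list of
constrained dofs and their prescribed values; A, x, and b are the assembled
matrix, solution vector, and right-hand side):

  /* Put the prescribed values into the solution vector. */
  VecSetValues(x, nbc, bc_rows, bc_vals, INSERT_VALUES);
  VecAssemblyBegin(x);
  VecAssemblyEnd(x);
  /* Zero the constrained rows and columns symmetrically, put 1.0 on the
     diagonal, and move the known values to the right-hand side. */
  MatZeroRowsColumns(A, nbc, bc_rows, 1.0, x, b);

After that there are no constraint equations left, and the remaining
elasticity operator is what AMG expects to see.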
Mark

On Fri, Aug 20, 2021 at 8:53 AM Наздрачёв Виктор <numbersixvs at gmail.com>
wrote:

> *Hello, dear PETSc team!*
>
>
>
> I have a 3D elasticity problem with heterogeneous properties. The grid is
> unstructured, with aspect ratios varying from 4 to 25. Dirichlet BCs (zero
> displacements at the bottom) are imposed via linear constraint equations
> using Lagrange multipliers. Neumann (traction) BCs are imposed on the side
> edges of the mesh, and the gravity load is also accounted for.
>
> I can solve this problem with the *dgmres solver* and *ILU* as a
> *preconditioner*. But ILU doesn't support parallel computing, so I decided
> to use Euclid or BoomerAMG as the preconditioner instead. The issue is
> slow convergence and memory consumption much higher than for ILU.
>
> E.g., for a source matrix of size 2.14 GB, memory consumption with *ILU-0
> preconditioning* is about 5.9 GB and the solver converges in 767
> iterations, while with *Euclid-0 preconditioning* memory consumption is
> about 8.7 GB and the solver converges in 1732 iterations.
>
> One of the following preconditioners is currently in use: *ILU-0, ILU-1,
> Hypre (Euclid), Hypre (BoomerAMG)*.
>
> Based on these computations *(logs and memory logs are attached)*, the
> following was established for the preconditioners:
>
> 1. *ILU-0*: does not always provide convergence (or converges, but slowly);
> uses an acceptable amount of RAM; does not support parallel computing.
>
> 2. *ILU-1*: stable; memory consumption is much higher than that of ILU-0;
> does not support parallel computing.
>
> 3. *Euclid*: converges very slowly, and calculations are several times
> slower than with ILU-0; memory consumption greatly exceeds that of both
> ILU-0 and ILU-1; supports parallel computing. Also, the “drop tolerance”
> option doesn’t provide enough accuracy in some cells, so I don’t use it.
>
> 4. *BoomerAMG*: converges very slowly, and calculations are several times
> slower than with ILU-0; memory consumption greatly exceeds that of both
> ILU-0 and ILU-1; supports parallel computing.
>
>
>
> In this regard, the following questions arose:
>
> 1. Is this behavior expected for HYPRE in computations with 1 MPI process?
> If not, could the problem be related to *PETSc* or to *HYPRE*?
>
> 2. Hypre (Euclid) has far fewer parameters than ILU. Among them is the
> factorization level *"-pc_hypre_euclid_level <now -2: formerly -2>:
> Factorization levels (None)"*, whose default value looks very strange;
> moreover, it doesn’t matter whether -2, -1, or 0 is chosen. Could it be
> that this parameter is confused with the column pivot tolerance of ILU, *"-pc_factor_column_pivot
> <-2.: -2.>: Column pivot tolerance (used only for some factorization)
> (PCFactorSetColumnPivot)"*?
>
> 3. What preconditioner would you recommend to optimize *convergence* and
> *memory* consumption while adding *parallel computing*?
>
> 4. How can we theoretically estimate the memory costs of *ILU, Euclid, and
> BoomerAMG*?
>
> 5. At what stage are memory leaks most likely?
>
>
>
> In any case, thank you so much for your attention! I will be grateful for
> any response.
>
> Kind regards,
> Viktor Nazdrachev
> R&D senior researcher
> Geosteering Technologies LLC
>

