[petsc-users] Euclid or Boomeramg vs ILU: questions.

Наздрачёв Виктор numbersixvs at gmail.com
Fri Aug 20 03:02:22 CDT 2021


*Hello, dear PETSc team!*



I have a 3D elasticity problem with heterogeneous material properties. The
unstructured grid has element aspect ratios varying from 4 to 25. Dirichlet BCs
(zero displacements at the bottom) are imposed via linear constraint equations
using Lagrange multipliers, and Neumann (traction) BCs are imposed on the side
edges of the mesh. A gravity load is also accounted for.

I can solve this problem with the *dgmres* solver and *ILU* as a
*preconditioner*. But ILU doesn't support parallel computing, so I decided
to use Euclid or BoomerAMG as a preconditioner instead. The issue is slow
convergence and memory consumption much higher than with ILU.

E.g., for a source matrix of size 2.14 GB, with *ILU-0 preconditioning* memory
consumption is about 5.9 GB and the process converges in 767 iterations, while
with *Euclid-0 preconditioning* memory consumption is about 8.7 GB and the
process converges in 1732 iterations.
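
For reference, the KSP and PC are set up in our code roughly as in the sketch
below (simplified: the routine name, the matrix A and the vectors b and x are
placeholders, and the assembly and BC code is omitted):

#include <petscksp.h>

PetscErrorCode SolveSketch(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPDGMRES);CHKERRQ(ierr);   /* dgmres solver */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCILU);CHKERRQ(ierr);         /* serial ILU(k) */
  /* for the hypre runs these two calls are used instead of PCSetType(pc, PCILU):
     ierr = PCSetType(pc, PCHYPRE);CHKERRQ(ierr);
     ierr = PCHYPRESetType(pc, "euclid");CHKERRQ(ierr);   (or "boomeramg") */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);       /* honour runtime -ksp_... and -pc_... options */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  return 0;
}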

One of the following preconditioners is used in each run: *ILU-0, ILU-1,
Hypre (Euclid), Hypre (BoomerAMG)*.
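
These four are switched at run time with option sets of roughly the following
form (auxiliary tolerances and monitoring options are omitted here; the dgmres
settings themselves are kept identical between runs):

-ksp_type dgmres -pc_type ilu -pc_factor_levels 0
-ksp_type dgmres -pc_type ilu -pc_factor_levels 1
-ksp_type dgmres -pc_type hypre -pc_hypre_type euclid
-ksp_type dgmres -pc_type hypre -pc_hypre_type boomeramg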

As a result of these computations *(solver logs and memory logs are attached)*,
the following was established for each preconditioner:

1. *ILU-0*: does not always converge (or converges, but slowly); uses an
acceptable amount of RAM; does not support parallel computing.

2. *ILU-1*: stable; memory consumption is much higher than that of ILU-0;
does not support parallel computing.

3. *Euclid*: converges very slowly, and the calculations take several times
longer than with ILU-0; memory consumption greatly exceeds that of both ILU-0
and ILU-1; supports parallel computing. Also, the “drop tolerance” option does
not provide enough accuracy in some cells, so I do not use it.

4. *BoomerAMG*: converges very slowly, and the calculations take several times
longer than with ILU-0; memory consumption greatly exceeds that of both ILU-0
and ILU-1; supports parallel computing.



In this regard, the following questions arose:

1. Is this behavior expected for HYPRE in computations with 1 MPI process?
If not, could the problem be related to *PETSc* or to *HYPRE*?

2. Hypre (Euclid) has far fewer parameters than ILU. Among them is the
factorization level *"-pc_hypre_euclid_level <now -2: formerly -2>:
Factorization levels (None)"*. Its default value looks very strange, and
moreover it does not seem to matter whether -2, -1 or 0 is chosen. Could this
parameter be confused with the column pivot tolerance in ILU,
*"-pc_factor_column_pivot
<-2.: -2.>: Column pivot tolerance (used only for some factorization)
(PCFactorSetColumnPivot)"*? (The way we pass these options is sketched after
the questions, with illustrative values.)

3. What preconditioner would you recommend to optimize *convergence* and
*memory* consumption while adding *parallel computing*?

4. How can we theoretically estimate the memory costs of *ILU, Euclid, and
BoomerAMG*?

5. At what stage are memory leaks most likely?
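
Regarding question 2, this is roughly how the two options are passed in our
tests (the numeric values here are illustrative only):

-pc_type hypre -pc_hypre_type euclid -pc_hypre_euclid_level 1
-pc_type ilu -pc_factor_levels 1 -pc_factor_column_pivot 0.1

Our understanding is that -pc_hypre_euclid_level should set the fill level k
of Euclid's ILU(k), while -pc_factor_column_pivot belongs to the native PETSc
factorizations, so the identical -2 defaults are what made us suspect a mix-up.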



In any case, thank you very much for your attention! I will be grateful for
any response.

Kind regards,
Viktor Nazdrachev
R&D senior researcher
Geosteering Technologies LLC

