[petsc-users] Euclid or Boomeramg vs ILU: questions.

Ed Bueler elbueler at alaska.edu
Fri Aug 20 14:11:36 CDT 2021


Viktor --

As a basic comment, note that ILU can be used in parallel, namely as the
solver on each processor block, either with non-overlapping domain
decomposition (block Jacobi):

-pc_type bjacobi -sub_pc_type ilu

or with overlap:

-pc_type asm -sub_pc_type ilu

See the discussion of block Jacobi and ASM at

https://petsc.org/release/docs/manual/ksp/#block-jacobi-and-overlapping-additive-schwarz-preconditioners
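
For example (the executable name and process count below are just
placeholders), a parallel run with overlapping subdomains and ILU(1) on each
block might look like

mpiexec -n 4 ./your_app -ksp_type gmres -pc_type asm -pc_asm_overlap 1 \
  -sub_pc_type ilu -sub_pc_factor_levels 1 \
  -ksp_monitor -ksp_converged_reason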

Of course, no application of ILU will deliver optimal performance,
but it looks like you are not yet getting that from AMG either.
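
To see where the time and memory actually go, it may help to add a few
standard diagnostic options to the runs being compared, e.g.

-log_view -memory_view -ksp_converged_reason -ksp_monitor_true_residual

and, for the BoomerAMG runs, to print its setup statistics and try raising
the strong threshold, which is commonly set to about 0.5 for 3D problems:

-pc_hypre_boomeramg_print_statistics -pc_hypre_boomeramg_strong_threshold 0.5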

Ed


On Fri, Aug 20, 2021 at 8:53 AM Viktor Nazdrachev <numbersixvs at gmail.com>
wrote:

> *Hello, dear PETSc team!*
>
>
>
> I have a 3D elasticity problem with heterogeneous material properties. The
> grid is unstructured, with element aspect ratios varying from 4 to 25.
> Dirichlet BCs (zero displacements at the bottom) are imposed via linear
> constraint equations using Lagrange multipliers. Neumann (traction) BCs are
> imposed on the side edges of the mesh, and a gravity load is also accounted
> for.
>
> I can solve this problem with the *dgmres solver* and *ILU* as the
> *preconditioner*. But ILU doesn't support parallel computing, so I decided
> to use Euclid or BoomerAMG as the preconditioner instead. The issue is slow
> convergence and memory consumption much higher than for ILU.
>
> E.g., for a source matrix of size 2.14 GB, with *ILU-0 preconditioning*
> memory consumption is about 5.9 GB and convergence takes 767 iterations,
> while with *Euclid-0 preconditioning* memory consumption is about 8.7 GB
> and convergence takes 1732 iterations.
>
> One of the following preconditioners is used in each run: *ILU-0, ILU-1,
> Hypre (Euclid), Hypre (BoomerAMG)*.
>
> As a result of computations *(logs and memory logs are attached)*, the
> following is established for preconditioners:
>
> 1. *ILU-0*: does not always provide convergence (and when it does, it is
> slow); uses an acceptable amount of RAM; does not support parallel computing.
>
> 2. *ILU-1*: stable; memory consumption is much higher than that of ILU-0;
> does not support parallel computing.
>
> 3. *Euclid*: convergence is very slow and calculations take several times
> longer than with ILU-0; memory consumption greatly exceeds both ILU-0 and
> ILU-1; supports parallel computing. Also, the "drop tolerance" option does
> not provide enough accuracy in some cells, so I don't use it.
>
> 4. *BoomerAMG*: convergence is very slow and calculations take several
> times longer than with ILU-0; memory consumption greatly exceeds both
> ILU-0 and ILU-1; supports parallel computing.
>
>
>
> In this regard, the following questions arose:
>
> 1. Is this behavior expected for HYPRE in computations with 1 MPI process?
> If not, could the problem be related to *PETSc* or *HYPRE*?
>
> 2. Hypre (Euclid) has far fewer parameters than ILU. Among them is the
> factorization level *"-pc_hypre_euclid_level <now -2: formerly -2>:
> Factorization levels (None)"*, whose default value looks very strange;
> moreover, it doesn't matter whether -2, -1, or 0 is chosen. Could it be
> that this parameter is confused with the column pivot tolerance in ILU,
> *"-pc_factor_column_pivot <-2.: -2.>: Column pivot tolerance (used only
> for some factorization) (PCFactorSetColumnPivot)"*?
>
> 3. What preconditioner would you recommend to optimize *convergence* and
> *memory* consumption while supporting *parallel computing*?
>
> 4. How can we theoretically estimate memory costs for *ILU, Euclid, and
> BoomerAMG*?
>
> 5. At what stage are memory leaks most likely?
>
>
>
> In any case, thank you so much for your attention! I will be grateful for
> any response.
>
> Kind regards,
> Viktor Nazdrachev
> R&D senior researcher
> Geosteering Technologies LLC

-- 
Ed Bueler
Dept of Mathematics and Statistics
University of Alaska Fairbanks
Fairbanks, AK 99775-6660
306C Chapman

