[petsc-users] Slow convergence while parallel computations.
Pierre Jolivet
pierre at joliv.et
Wed Sep 1 04:01:26 CDT 2021
Dear Viktor,
> On 1 Sep 2021, at 10:42 AM, Наздрачёв Виктор <numbersixvs at gmail.com> wrote:
>
> Dear all,
>
> I have a 3D elasticity problem with heterogeneous properties. The unstructured grid has cell aspect ratios varying from 4 to 25. Zero Dirichlet BCs are imposed on the bottom face of the mesh, Neumann (traction) BCs are imposed on the side faces, and a gravity load is also accounted for. The grid I use consists of 500k cells (approximately 1.6M DOFs).
>
> The best performance and memory usage for a single MPI process were obtained with the HPDDM (BFBCG) solver
>
Block Krylov solvers are (most often) only useful if you have multiple right-hand sides, e.g., in the context of elasticity, multiple loadings.
Is that really the case? If not, you may as well stick to “standard” CG instead of the breakdown-free block (BFB) variant.
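If there is indeed only a single right-hand side, one possibility (just a rough sketch, reusing the subdomain preconditioner you describe and making the ICC fill level explicit) would be to compare the two at runtime:

  -ksp_type cg -pc_type bjacobi -sub_pc_type icc -sub_pc_factor_levels 1

versus the block variant you are currently using:

  -ksp_type hpddm -ksp_hpddm_type bfbcg -pc_type bjacobi -sub_pc_type icc -sub_pc_factor_levels 1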
> and block Jacobi (bjacobi) with ICC(1) on the subdomains as the preconditioner: it took 1 m 45 s and 5.0 GB of RAM. Parallel computation with 4 MPI processes took 2 m 46 s and used 5.6 GB of RAM. This is because the number of iterations required to achieve the same tolerance increases significantly.
>
> I have also tried the PCGAMG (agg) preconditioner with an ICC(1) sub-preconditioner. For a single MPI process, the calculation took 10 min and 3.4 GB of RAM. To improve the convergence rate, the near nullspace was attached using the MatNullSpaceCreateRigidBody and MatSetNearNullSpace subroutines. This reduced the calculation time to 3 m 58 s using 4.3 GB of RAM, but there is a peak memory usage of 14.1 GB, which appears just before the start of the iterations. Parallel computation with 4 MPI processes took 2 m 53 s using 8.4 GB of RAM; in that case the peak memory usage is about 22 GB.
>
I’m surprised that GAMG is converging so slowly. What do you mean by "ICC(1) sub-preconditioner"? Do you use that as a smoother or as a coarse level solver?
How many iterations are required to reach convergence?
Could you please run the solver with -ksp_view -log_view and send us the output?
Most of the default parameters of GAMG should be good enough for 3D elasticity, provided that your MatNullSpace is correct.
One parameter that may need some adjustments though is the aggregation threshold -pc_gamg_threshold (you could try values in the [0.01; 0.1] range, that’s what I always use for elasticity problems).
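For reference, attaching the rigid-body near nullspace usually boils down to something like the sketch below (assuming coords is a Vec holding the nodal coordinates with block size equal to the spatial dimension, and A is the assembled stiffness matrix; the function name and error handling are only illustrative):

#include <petscmat.h>

/* Sketch: attach rigid-body modes as the near nullspace so that GAMG can use them. */
static PetscErrorCode AttachRigidBodyModes(Mat A, Vec coords)
{
  MatNullSpace   nullsp;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = MatNullSpaceCreateRigidBody(coords, &nullsp);CHKERRQ(ierr); /* 6 rigid-body modes in 3D */
  ierr = MatSetNearNullSpace(A, nullsp);CHKERRQ(ierr);               /* picked up by GAMG when coarsening */
  ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The aggregation threshold can then be adjusted at runtime, e.g., -pc_type gamg -pc_gamg_threshold 0.01.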
Thanks,
Pierre
> Are there ways to avoid the degradation of the convergence rate with the bjacobi preconditioner in parallel mode? Does it make sense to use hierarchical or nested Krylov methods with a local GMRES solver (-sub_ksp_type gmres) and some sub-preconditioner (for example, -sub_pc_type bjacobi)?
>
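(As a rough illustration of the nested combination mentioned above, and not a recommendation: the outer Krylov method has to be flexible when an inner Krylov solver is used, because the preconditioner then changes from one iteration to the next, e.g.,

  -ksp_type fgmres -pc_type bjacobi -sub_ksp_type gmres -sub_ksp_max_it 10 -sub_pc_type icc -sub_pc_factor_levels 1

The inner solver, its iteration cap, and the fill level here are only placeholders.)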
>
> Is this peak memory usage expected for the GAMG preconditioner? Is there any way to reduce it?
>
>
> What advice would you give to improve the convergence rate with multiple MPI processes, while keeping memory consumption reasonable?
>
>
> Kind regards,
>
> Viktor Nazdrachev
>
> R&D senior researcher
>
> Geosteering Technologies LLC
>