As far as GAMG:

* Pierre is right: start with the defaults. AMG does take tuning, and 2D and 3D are very different, among other things. You can run with '-info :pc' (which is very noisy), grep for "GAMG", and send me the result (an options sketch is below). (Lawrence already recommended this; just send it.)
 - ICC is not a good choice because it has to scale the diagonal to avoid negative pivots (which can happen even for SPD matrices, at least ones that are not M-matrices). This is probably a problem.
 - As Lawrence indicated, jumps in coefficients can be hard for generic AMG.
 - And yes, -pc_gamg_threshold is an important parameter for homogeneous problems, and it can be additionally important for inhomogeneous problems to get the AMG method to "see" your jumps.

* The memory problems come from squaring the graph, among other things, which you usually need to do for elasticity unless, perhaps, you have high-order elements.

* You can try PCBDDC; domain decomposition methods are nice for elasticity.

* You can try hypre. It is a good solver, but 3D elasticity is not its strength.

* As for the poor scaling: you have large subdomains, and I assume the load balancing is decent and the network is not crazy, so this might be mostly setup cost. Run with -log_view and look at the KSPSolve and MatPtAP events.
 - The solver will call the setup (MatPtAP) if it has not been done yet, so that cost gets folded into the solve. You can call KSPSetUp() before KSPSolve() to get the timings separated (a sketch is below). If you are reusing the solver (e.g., not doing full Newton), the setup cost gets amortized.
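For concreteness, a rough sketch of an options file with the pieces above. The GAMG option names move around a bit between releases (in particular, -pc_gamg_square_graph for the graph squaring is from memory), and the values are only starting points, so check them against -help:

    -pc_type gamg -pc_gamg_type agg
    -pc_gamg_threshold 0.01
    -pc_gamg_square_graph 1
    -info :pc
    -log_view

And a minimal sketch of calling KSPSetUp() yourself so that -log_view reports the setup and the iterations separately (the function and stage names are made up; adapt it to however your code drives the solver):

    #include <petscksp.h>

    /* Time the AMG setup (MatPtAP, coarsening) separately from the Krylov iterations. */
    PetscErrorCode SolveWithSeparateSetup(KSP ksp, Vec b, Vec x)
    {
      PetscErrorCode ierr;
      PetscLogStage  setup, solve;

      ierr = PetscLogStageRegister("PC setup", &setup);CHKERRQ(ierr);
      ierr = PetscLogStageRegister("KSP solve", &solve);CHKERRQ(ierr);

      ierr = PetscLogStagePush(setup);CHKERRQ(ierr);
      ierr = KSPSetUp(ksp);CHKERRQ(ierr);        /* the GAMG hierarchy is built here */
      ierr = PetscLogStagePop();CHKERRQ(ierr);

      ierr = PetscLogStagePush(solve);CHKERRQ(ierr);
      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);  /* now only the iterations are timed here */
      ierr = PetscLogStagePop();CHKERRQ(ierr);
      return 0;
    }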
Mark

On Wed, Sep 1, 2021 at 5:02 AM Lawrence Mitchell <wence@gmx.li> wrote:

> On 1 Sep 2021, at 09:42, Наздрачёв Виктор <numbersixvs@gmail.com> wrote:
>
> I have a 3D elasticity problem with heterogeneous properties.

What does your coefficient variation look like? How large is the contrast?

> There is an unstructured grid with aspect ratios varying from 4 to 25. Zero Dirichlet BCs are imposed on the bottom face of the mesh. Also, Neumann (traction) BCs are imposed on the side faces. Gravity load is also accounted for. The grid I use consists of 500k cells (approximately 1.6M DOFs).
>
> The best performance and memory usage for a single MPI process were obtained with the HPDDM (BFBCG) solver and bjacobi + ICC(1) in the subdomains as the preconditioner; it took 1 m 45 s and 5.0 GB of RAM. Parallel computation with 4 MPI processes took 2 m 46 s and used 5.6 GB of RAM. This is because the number of iterations required to achieve the same tolerance increases significantly.

How many iterations do you have in serial (and then in parallel)?
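(If they are not already in your output, options along the lines of

    -ksp_converged_reason
    -ksp_monitor_true_residual

will print the iteration counts and the residual history for both runs.)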

> I've also tried the PCGAMG (agg) preconditioner with an ICC(1) sub-preconditioner. For a single MPI process, the calculation took 10 min and 3.4 GB of RAM. To improve the convergence rate, the near-nullspace was attached using the MatNullSpaceCreateRigidBody and MatSetNearNullSpace routines. This reduced the calculation time to 3 m 58 s, using 4.3 GB of RAM. However, there is a peak memory usage of 14.1 GB, which appears just before the start of the iterations. Parallel computation with 4 MPI processes took 2 m 53 s and used 8.4 GB of RAM; in that case the peak memory usage is about 22 GB.

Does the number of iterations increase in parallel? Again, how many iterations do you have?

> Are there ways to avoid the decrease in convergence rate for the bjacobi preconditioner in parallel? Does it make sense to use hierarchical or nested Krylov methods with a local GMRES solver (sub_ksp_type gmres) and some sub-preconditioner (for example, sub_pc_type bjacobi)?

bjacobi is only a one-level method, so you would not expect a process-independent convergence rate for this kind of problem. If the coefficient variation is not too extreme, then I would expect GAMG (or some other smoothed-aggregation package, perhaps -pc_type ml, for which you need --download-ml) to work well with some tuning.
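As a rough starting point (the names below follow current option spellings and the values are only illustrative; -ksp_type cg assumes the assembled system is SPD), something like

    -ksp_type cg
    -pc_type gamg -pc_gamg_type agg
    -pc_gamg_threshold 0.01
    -mg_levels_ksp_type chebyshev
    -mg_levels_pc_type jacobi

together with the rigid-body near-nullspace you already attach via MatSetNearNullSpace, would be a reasonable baseline to tune from.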

If you have extremely high-contrast coefficients, you might need something with stronger coarse grids. If you can assemble so-called Neumann matrices (https://petsc.org/release/docs/manualpages/Mat/MATIS.html#MATIS), then you could try the GenEO scheme offered by PCHPDDM.
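Assembling in MATIS format mostly means choosing the matrix type and providing a local-to-global map; a minimal sketch (the function name, l2g, and nlocal are placeholders for whatever your code already has):

    #include <petscmat.h>

    /* Create the operator as MATIS so the unassembled (Neumann) subdomain
       matrices are available to PCHPDDM (GenEO) or PCBDDC. */
    PetscErrorCode CreateOperatorIS(ISLocalToGlobalMapping l2g, PetscInt nlocal, Mat *A)
    {
      PetscErrorCode ierr;

      ierr = MatCreate(PETSC_COMM_WORLD, A);CHKERRQ(ierr);
      ierr = MatSetSizes(*A, nlocal, nlocal, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
      ierr = MatSetType(*A, MATIS);CHKERRQ(ierr);
      ierr = MatSetLocalToGlobalMapping(*A, l2g, l2g);CHKERRQ(ierr);
      ierr = MatSetUp(*A);CHKERRQ(ierr);
      /* then the usual element loop with MatSetValuesLocal(), followed by
         MatAssemblyBegin()/MatAssemblyEnd() */
      return 0;
    }

With the operator in that format, -pc_type hpddm (with a PETSc build that includes SLEPc for the GenEO eigenproblems) or -pc_type bddc can be selected at run time.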

> Is this peak memory usage expected for the GAMG preconditioner? Is there any way to reduce it?

I think that peak memory usage comes from building the coarse grids. Can you run with `-info` and grep for GAMG? This will provide some output that more expert GAMG users can interpret.
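Something along the lines of (with a placeholder executable name)

    mpiexec -n 4 ./your_app <your usual options> -pc_type gamg -info | grep GAMG

will capture just the GAMG-related lines.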

Lawrence