<div dir="ltr">Let me add that generic AMG is not great for systems like this (indefinite, asymmetric), so yes, check that your good cases are really good.<div><br></div><div>GAMG uses eigenvalue estimates, which are problematic for indefinite and asymmetric matrices. I don't know why this ever works well, but try '-pc_type hypre' (and configure with '--download-hypre'). Hypre handles asymmetric matrices better. Even if it does not solve your problem, this would provide useful information for diagnosing what is going on here.</div><div><br></div><div>Note that the algorithms and implementations of hypre and GAMG do not depend much on the domain decomposition, so it is surprising to see such huge differences as the number of processors changes.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Nov 30, 2019 at 1:49 AM Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
I would first run with -ksp_monitor_true_residual -ksp_converged_reason to make sure that those "very fast" cases are actually converging. In those runs, also use -ksp_view to see what the GAMG parameters are. Also use the -info option to have it print details on the solution process. <br>
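For reference, a minimal sketch of how these options could be combined on the command line (the executable name ./app and the process count are placeholders for your application):

```shell
# Diagnostic run: monitor the true residual norm, report why KSP stopped,
# dump the full solver/preconditioner configuration, and print -info details.
# './app' and '-n 2' are placeholders, not names from this thread.
mpiexec -n 2 ./app -ksp_type gmres -pc_type gamg \
    -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -info

# Variant suggested earlier in the thread: swap GAMG for hypre
# (requires PETSc configured with '--download-hypre').
mpiexec -n 2 ./app -ksp_type gmres -pc_type hypre \
    -ksp_monitor_true_residual -ksp_converged_reason -ksp_view
```

Comparing the -ksp_view output and the true-residual histories between the 1-, 2-, and 4-process runs should show whether the "fast" cases really converged or stopped for another reason.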
<br>
Barry<br>
<br>
<br>
<br>
> On Nov 29, 2019, at 4:14 PM, Felipe Giacomelli <<a href="mailto:fe.wallner@gmail.com" target="_blank">fe.wallner@gmail.com</a>> wrote:<br>
> <br>
> Hello,<br>
> <br>
> I'm trying to solve Biot's poroelasticity (Cryer's sphere problem) through a fully coupled scheme. Thus, the solution of a single linear system yields both displacement and pressure fields,<br>
> <br>
> | K    L     | | u |   | b_u |<br>
> | Q  (A + H) | | p | = | b_p |.<br>
> <br>
> The linear system is asymmetric, since the discrete equations were obtained through the Element-based Finite Volume Method (EbFVM). An unstructured tetrahedral grid is utilised; it has about 10000 nodal points (neither too coarse nor too refined). Therefore, GMRES and GAMG are employed to solve the system.<br>
> <br>
> Furthermore, the program was parallelised through a Domain Decomposition Method. Thus, each processor works only on its own subdomain.<br>
> <br>
> So far, so good. For a given set of poroelastic properties (which are constant throughout time and space), the speedup increases as more processors are utilised:<br>
> <br>
> coupling intensity: 7.51e-01<br>
> <br>
> proc solve time [s]<br>
> 1 314.23<br>
> 2 171.65<br>
> 3 143.21<br>
> 4 149.26 (> 143.21, but ok)<br>
> <br>
> However, after making the problem MORE coupled (different poroelastic properties), a strange behavior is observed:<br>
> <br>
> coupling intensity: 2.29e+01<br>
> <br>
> proc solve time [s]<br>
> 1 28909.35<br>
> 2 192.39<br>
> 3 181.29<br>
> 4 14463.63<br>
> <br>
> Recalling that GMRES and GAMG are used: KSP takes about 4300 iterations to converge when 1 processor is employed. On the other hand, for 2 processors, KSP takes around 30 iterations to reach convergence, which explains the difference between the solution times.<br>
> <br>
> Increasing the coupling even MORE, everything goes as expected:<br>
> <br>
> coupling intensity: 4.63e+01<br>
> <br>
> proc solve time [s]<br>
> 1 229.26<br>
> 2 146.04<br>
> 3 121.49<br>
> 4 107.80<br>
> <br>
> Because of this, I ask:<br>
> <br>
> * What may be the source of this behavior? Can it be predicted?<br>
> * How can I remedy this situation?<br>
> <br>
> At last, are there better solver-pc choices for coupled poroelasticity?<br>
> <br>
> Thank you,<br>
> Felipe<br>
<br>
</blockquote></div>