<div dir="ltr"><p class="MsoNormal" style="text-align:justify;margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span style="color:black;font-family:"Courier New";font-size:10pt">Hello, Pierre!</span><br></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">
Thank you for your response!</span></p>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">I attached log files (txt files with convergence
behavior and RAM usage log in separate txt files) and resulting table with
convergence investigation data(xls). Data for main non-regular grid with 500K cells
and heterogeneous properties are in 500K folder, whereas data for simple
uniform 125K cells grid with constant properties are in 125K folder. </span></p>
<p class="MsoNormal" style="text-align:justify;margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%;color:black"> </span></p>
<blockquote style="margin:0 0 0 40px;border:none;padding:0px"><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">>Dear Viktor,</span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">> </span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">>><i> On 1 Sep 2021, at 10:42 AM, </i></span><i><span style="font-size:10pt;font-family:"Courier New";color:black">Наздрачёв</span></i><i><span style="font-size:10pt;font-family:"Courier New";color:black"> </span></i><i><span style="font-size:10pt;font-family:"Courier New";color:black">Виктор</span></i><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> <</span></i><i><span style="font-size:10pt;font-family:"Courier New";color:black"><a href="https://lists.mcs.anl.gov/mailman/listinfo/petsc-users"><span lang="EN-US" style="color:black">numbersixvs at
gmail.com</span></a></span></i><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">> <a name="_Hlk81487196">></a>wrote:</span></i></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">><i> </i></span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">><i>
Dear all,</i></span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">><i> </i></span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">><i> I
have a 3D elasticity problem with heterogeneous properties. There is
unstructured grid with aspect ratio varied from 4 to 25. Zero Dirichlet
BCs are imposed on bottom face of mesh.
Also, Neumann (traction) BCs are imposed on side faces. Gravity load is also
accounted for. The grid I use consists of 500k cells (which is approximately
1.6M of DOFs).</i></span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">><i> </i></span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">><i>
The best performance and memory usage for single MPI process was obtained with
HPDDM(BFBCG) solver</i></span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">><i> </i></span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">Block
Krylov solvers are (most often) only useful if you have multiple right-hand
sides, e.g., in the context of elasticity, multiple loadings.</span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">Is that really the case? If not, you may as well stick
to “standard” CG instead of the breakdown-free block (BFB) variant.</span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">> </span></i></p></blockquote>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p>
<p class="MsoNormal" style="margin:0cm;text-align:justify;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">In that case only single right-hand side is utilized,
so I switched to “standard” cg solver (-ksp_hpddm_type cg), but I noticed the interesting
convergence behavior. For non-regular grid with 500K cells and heterogeneous
properties CG solver converged with 1
iteration (log_hpddm(cg)_gamg_nearnullspace_1_mpi.txt), but for more simple
uniform grid with 125K cells and homogeneous properties CG solves linear system
successfully(log_hpddm(cg)_gamg_nearnullspace_1_mpi.txt).</span></p>
<p class="MsoNormal" style="margin:0cm;text-align:justify;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">BFBCG solver works properly for both grids.</span></p>
<p class="MsoNormal" style="margin:0cm;text-align:justify;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p>
<blockquote style="margin:0 0 0 40px;border:none;padding:0px"><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">><i>
and bjacobian + ICC (1) in subdomains as preconditioner, it took 1 m 45 s and
RAM 5.0 GB. Parallel computation with 4 MPI processes took 2 m 46 s when using
5.6 GB of RAM. This because of number of iterations required to achieve the
same tolerance is significantly increased.</i></span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">><i> </i></span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">><i>
I`ve also tried PCGAMG (agg) preconditioner with IC</i></span><i><span style="font-size:10pt;font-family:"Courier New";color:black">С</span></i><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> (1) sub-precondtioner. For single MPI process, the
calculation took 10 min and 3.4 GB of RAM. To improve the convergence rate, the
nullspace was attached using MatNullSpaceCreateRigidBody and
MatSetNearNullSpace subroutines. This
has reduced calculation time to 3 m 58 s when using 4.3 GB of RAM. Also, there
is peak memory usage with 14.1 GB, which appears just before the start of the
iterations. Parallel computation with 4 MPI processes took 2 m 53 s when using
8.4 GB of RAM. In that case the peak memory usage is about 22 GB.</span></i></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">><i>> </i></span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">I’m
surprised that GAMG is converging so slowly. What do you mean by "ICC(1)
sub-preconditioner"? Do you use that as a smoother or as a coarse level
solver?</span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p></blockquote>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"><br></span></p>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">Sorry for misleading, ICC is used only for BJACOBI preconditioner,
no ICC for GAMG.</span></p>
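<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">In terms of options, the BJACOBI case corresponds to something along these lines (a sketch; the exact option list on my side may differ slightly):</span></p>
<pre style="font-size:10pt;color:black">-ksp_type hpddm -ksp_hpddm_type bfbcg
-pc_type bjacobi -sub_pc_type icc -sub_pc_factor_levels 1
</pre>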
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p>
<blockquote style="margin:0 0 0 40px;border:none;padding:0px"><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">How many
iterations are required to reach convergence?</span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">Could you
please maybe run the solver with -ksp_view -log_view and send us the output?</span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p></blockquote>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p>
<p class="MsoNormal" style="margin:0cm;text-align:justify;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">For case with 4 MPI processes and attached nullspace
it is required 177 iterations to reach convergence (you may see detailed log in
log_hpddm(bfbcg)_gamg_nearnullspace_4_mpi.txt and memory usage log in RAM_log_hpddm(bfbcg)_gamg_nearnullspace_4_mpi.txt).
For comparison, 90 iterations are required for sequential run(log_hpddm(bfbcg)_gamg_nearnullspace_1_mpi.txt).
</span></p>
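<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">For completeness, the 4-process log was produced with a run along these lines (the executable name below is just a placeholder for our in-house code), with the output redirected to log_hpddm(bfbcg)_gamg_nearnullspace_4_mpi.txt; -ksp_view and -log_view are included as you suggested:</span></p>
<pre style="font-size:10pt;color:black">mpiexec -n 4 ./elasticity_solver \
    -ksp_type hpddm -ksp_hpddm_type bfbcg -pc_type gamg \
    -ksp_view -log_view
</pre>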
<p class="MsoNormal" style="margin:0cm;text-align:justify;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"><br>
<br>
</span></p>
<blockquote style="margin:0 0 0 40px;border:none;padding:0px"><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">Most of
the default parameters of GAMG should be good enough for 3D elasticity,
provided that your MatNullSpace is correct.</span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p></blockquote>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">How can I be sure that nullspace is attached
correctly? Is there any way for self-checking (Well perhaps calculate some
parameters using matrix and solution vector)? </span></p>
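<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">For context, the relevant part of my setup looks roughly like the sketch below, together with a basic self-check I was considering (simplified; it assumes A is the assembled stiffness matrix with block size 3 and coords is a Vec with interleaved nodal coordinates x0,y0,z0,x1,...):</span></p>
<pre style="font-size:10pt;color:black">#include &lt;petscmat.h&gt;

/* Sketch: attach rigid-body modes as the near-nullspace and run a basic check. */
PetscErrorCode AttachAndCheckRigidBodyModes(Mat A, Vec coords)
{
  MatNullSpace   rbm;
  PetscBool      isNull;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* 6 rigid-body modes in 3D: 3 translations + 3 rotations */
  ierr = MatNullSpaceCreateRigidBody(coords, &amp;rbm);CHKERRQ(ierr);
  ierr = MatSetNearNullSpace(A, rbm);CHKERRQ(ierr);

  /* The rigid-body modes are an exact null space only of the unconstrained
     operator, so with Dirichlet BCs this is expected to report PETSC_FALSE;
     it should still catch wiring mistakes (wrong coordinate ordering or block size). */
  ierr = MatNullSpaceTest(rbm, A, &amp;isNull);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "RBMs form an exact null space of A: %s\n",
                     isNull ? "yes" : "no");CHKERRQ(ierr);

  ierr = MatNullSpaceDestroy(&amp;rbm);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
</pre>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">Would checking the -ksp_view output (the operator being reported with an attached near null space) be a reliable enough indicator?</span></p>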
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p>
<blockquote style="margin:0 0 0 40px;border:none;padding:0px"><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">></span></i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">One
parameter that may need some adjustments though is the aggregation threshold
-pc_gamg_threshold (you could try values in the [0.01; 0.1] range, that’s what
I always use for elasticity problems).</span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><i><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">> </span></i></p></blockquote>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p>
<p class="MsoNormal" style="margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;line-height:107%;font-family:"Courier New";color:black">Tried to
find optimal value of this option, set -pc_gamg_threshold 0.01 and -</span><span lang="EN-US" style="font-size:10.5pt;line-height:107%;font-family:"Courier New";color:black">pc_gamg_threshold_scale 2, but
I didn't notice any significant changes (Need more time for experiments ) </span></p><p class="MsoNormal" style="margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10.5pt;line-height:107%;font-family:"Courier New";color:black"><br></span></p><p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">Kind regards,</span></p>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">Viktor Nazdrachev</span></p>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black">R&D senior researcher</span></p>
<p class="MsoNormal" style="margin:0cm;line-height:normal;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10pt;font-family:"Courier New";color:black"> </span></p>
<p class="MsoNormal" style="margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:10.5pt;line-height:107%;font-family:"Courier New";color:black"><span style="font-size:10pt">Geosteering Technologies LLC</span> </span></p></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">ср, 1 сент. 2021 г. в 12:01, Pierre Jolivet <<a href="mailto:pierre@joliv.et">pierre@joliv.et</a>>:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div style="overflow-wrap: break-word;">Dear Viktor,<br><div><br><blockquote type="cite"><div>On 1 Sep 2021, at 10:42 AM, Наздрачёв Виктор <<a href="mailto:numbersixvs@gmail.com" target="_blank">numbersixvs@gmail.com</a>> wrote:</div><br><div><div dir="ltr"><p class="MsoNormal" style="margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%">Dear all,</span></p><p class="MsoNormal" style="text-align:justify;margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%">I have a 3D
elasticity problem with heterogeneous properties. There is unstructured grid
with aspect ratio varied from 4 to 25. Zero Dirichlet BCs are imposed on bottom face of mesh. Also,
Neumann (traction) BCs are imposed on side faces. Gravity load is also
accounted for. The grid I use consists of 500k cells (which is approximately
1.6M of DOFs).</span></p><p class="MsoNormal" style="text-align:justify;margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%">The best
performance and memory usage for single MPI process was obtained with
HPDDM(BFBCG) solver</span></p></div></div></blockquote><div>Block Krylov solvers are (most often) only useful if you have multiple right-hand sides, e.g., in the context of elasticity, multiple loadings.</div><div>Is that really the case? If not, you may as well stick to “standard” CG instead of the breakdown-free block (BFB) variant.</div><br><blockquote type="cite"><div><div dir="ltr"><p class="MsoNormal" style="text-align:justify;margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%"> and bjacobian + ICC (1) in subdomains as preconditioner, it
took 1 m 45 s and RAM 5.0 GB. Parallel computation with 4 MPI processes took 2
m 46 s when using 5.6 GB of RAM. This because of number of iterations required
to achieve the same tolerance is significantly increased.</span></p><p class="MsoNormal" style="text-align:justify;margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%">I`ve also
tried PCGAMG (agg) preconditioner with IC</span><span style="font-size:14pt;line-height:107%">С</span><span lang="EN-US" style="font-size:14pt;line-height:107%"> (1) sub-precondtioner. For single MPI process, the
calculation took 10 min and 3.4 GB of RAM. To improve the convergence rate, the
nullspace was attached using MatNullSpaceCreateRigidBody and MatSetNearNullSpace
subroutines. This has reduced
calculation time to 3 m 58 s when using 4.3 GB of RAM. Also, there is peak
memory usage with 14.1 GB, which appears just before the start of the
iterations. Parallel computation with 4 MPI processes took 2 m 53 s when using
8.4 GB of RAM. In that case the peak memory usage is about 22 GB.</span></p></div></div></blockquote><div>I’m surprised that GAMG is converging so slowly. What do you mean by "ICC(1) sub-preconditioner"? Do you use that as a smoother or as a coarse level solver?</div><div>How many iterations are required to reach convergence?</div><div>Could you please maybe run the solver with -ksp_view -log_view and send us the output?</div><div>Most of the default parameters of GAMG should be good enough for 3D elasticity, provided that your MatNullSpace is correct.</div><div>One parameter that may need some adjustments though is the aggregation threshold -pc_gamg_threshold (you could try values in the [0.01; 0.1] range, that’s what I always use for elasticity problems).</div><div><br></div><div>Thanks,</div><div>Pierre</div><div><br></div><blockquote type="cite"><div><div dir="ltr"><p class="MsoNormal" style="text-align:justify;margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%">Are there
ways to avoid decreasing of the convergence rate for bjacobi precondtioner in
parallel mode? Does it make sense to use hierarchical or nested krylov methods
with a local gmres solver (sub_pc_type gmres) and some sub-precondtioner (for
example, sub_pc_type bjacobi)?</span></p><div style="text-align:justify;margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%"> </span><br></div><p class="MsoNormal" style="text-align:justify;margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%">Is this peak
memory usage expected for gamg preconditioner? is there any way to reduce it?</span></p><div style="text-align:justify;margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%"> </span><br></div><p class="MsoNormal" style="text-align:justify;margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%">What advice
would you give to improve the convergence rate with multiple MPI processes, but
keep memory consumption reasonable?</span></p><div style="margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%"> </span><br></div><p class="MsoNormal" style="margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%">Kind regards,</span></p><p class="MsoNormal" style="margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%">Viktor Nazdrachev</span></p><p class="MsoNormal" style="margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%">R&D senior researcher</span></p><p class="MsoNormal" style="margin:0cm 0cm 8pt;line-height:107%;font-size:11pt;font-family:Calibri,sans-serif"><span lang="EN-US" style="font-size:14pt;line-height:107%">Geosteering Technologies LLC</span></p></div>
</div></blockquote></div><br></div></blockquote></div>