<div dir="ltr"><div dir="ltr">On Tue, Mar 5, 2019 at 11:53 AM Myriam Peyrounette <<a href="mailto:myriam.peyrounette@idris.fr">myriam.peyrounette@idris.fr</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<p>I used PCView to display the size of the linear system in each
level of the MG. You'll find the outputs attached to this mail
(zip file) for both the default threshold value and a value of
0.1, and for both 3.6 and 3.10 PETSc versions. <br>
</p>
<p>For convenience, I summarized the information in a graph, also
attached (png file).</p>

Great! Can you draw lines for the different runs you did? My
interpretation was that memory was increasing as you did larger runs, and
that you thought that was coming from GAMG. That means the curves should
be pushed up for larger runs. Do you see that?

  Thanks,

     Matt

> As you can see, there are slight differences between the two versions,
> but none is critical, in my opinion. Do you see anything suspicious in
> the outputs?
>
> Also, I can't find the default threshold value. Do you know where I can
> find it?
>
> Thanks for the follow-up,
>
> Myriam

> On 03/05/19 at 14:06, Matthew Knepley wrote:

>> On Tue, Mar 5, 2019 at 7:14 AM Myriam Peyrounette
>> <myriam.peyrounette@idris.fr> wrote:

>>> Hi Matt,
>>>
>>> I plotted the memory scalings using different threshold values. The
>>> two scalings are slightly shifted (by -22 to -88 MB), but this gain is
>>> negligible. The 3.6 scaling remains robust, while the 3.10 scaling
>>> deteriorates.
>>>
>>> Do you have any other suggestion?

>> Mark, what is the option she can give to output all the GAMG data?
>>
>> Also, run using -ksp_view. GAMG will report all the sizes of its grids,
>> so it should be easy to see if the coarse grid sizes are increasing,
>> and also what the effect of the threshold value is.
>>
>>   Thanks,
>>
>>      Matt
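
For concreteness, a hypothetical invocation along these lines would
exercise the options mentioned above (the executable name is a placeholder
for the modified ex42 binary; the process count is taken from the smallest
run in this thread; and if GAMG sits inside a PCMG hierarchy as the coarse
solver, the threshold option may carry a prefix such as -mg_coarse_):

    mpiexec -n 4 ./ex42_topbridge -ksp_view -pc_gamg_threshold 0.1

The -ksp_view output lists, for each MG/GAMG level, the size of the
operator on that level, so the hierarchies built with the default
threshold and with 0.1 can be compared directly.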
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div bgcolor="#FFFFFF">
<p>Thanks<br>
</p>
Myriam <br>
<br>
<div class="gmail-m_425689025704922590gmail-m_-3242500023102749998moz-cite-prefix">Le
03/02/19 à 02:27, Matthew Knepley a écrit :<br>
</div>
<blockquote type="cite">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">On Fri, Mar 1, 2019 at 10:53 AM
Myriam Peyrounette via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>>
wrote:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
>>>>>
>>>>> I used to run my code with PETSc 3.6. Since I upgraded the PETSc
>>>>> version to 3.10, this code has shown poor memory scaling.
>>>>>
>>>>> To report this issue, I took the PETSc example ex42.c and slightly
>>>>> modified it so that the KSP and PC configurations are the same as in
>>>>> my code. In particular, I use a "customized" multi-grid method. The
>>>>> modifications are indicated by the keyword "TopBridge" in the
>>>>> attached scripts.
>>>>>
>>>>> To plot the memory (weak) scaling, I ran four calculations for each
>>>>> script, with increasing problem sizes and numbers of compute cores:
>>>>>
>>>>> 1. 100,000 elts on 4 cores
>>>>> 2. 1 million elts on 40 cores
>>>>> 3. 10 million elts on 400 cores
>>>>> 4. 100 million elts on 4,000 cores
>>>>>
>>>>> The resulting graph is also attached. The scaling using PETSc 3.10
>>>>> clearly deteriorates for large cases, while the one using PETSc 3.6
>>>>> remains robust.
>>>>>
>>>>> After a few tests, I found that the scaling is mostly sensitive to
>>>>> the use of the AMG method for the coarse grid (line 1780 in
>>>>> main_ex42_petsc36.cc). In particular, the performance strongly
>>>>> deteriorates when lines 1777 to 1790 (in main_ex42_petsc36.cc) are
>>>>> commented out.
>>>>>
>>>>> Do you have any idea of what changed between version 3.6 and version
>>>>> 3.10 that might cause such a degradation?
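
As a point of reference, the kind of configuration described above, a
multigrid preconditioner whose coarsest level is solved with GAMG, could
be set up roughly as in the sketch below. This is only an assumed outline,
not the actual TopBridge modifications; it presumes an existing KSP named
"ksp" already attached to its operators (and DM, as in ex42) and a level
count "nlevels".

/* Sketch only: a PCMG hierarchy with GAMG as the coarse-level solver.
   This is not the TopBridge code; "ksp" and "nlevels" are assumed to
   exist, and the level interpolations/operators are assumed to come
   from the attached DM, as in ex42. */
PC             pc, coarsepc;
KSP            coarseksp;
PetscErrorCode ierr;

ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
ierr = PCSetType(pc, PCMG);CHKERRQ(ierr);
ierr = PCMGSetLevels(pc, nlevels, NULL);CHKERRQ(ierr);

/* Solve the coarsest level with algebraic multigrid (GAMG). */
ierr = PCMGGetCoarseSolve(pc, &coarseksp);CHKERRQ(ierr);
ierr = KSPSetType(coarseksp, KSPPREONLY);CHKERRQ(ierr);
ierr = KSPGetPC(coarseksp, &coarsepc);CHKERRQ(ierr);
ierr = PCSetType(coarsepc, PCGAMG);CHKERRQ(ierr);

With a setup of this shape, the coarse-level memory footprint is governed
by how aggressively GAMG coarsens, which is the knob discussed in the
reply that follows.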

>>>> I believe the default values for PCGAMG changed between versions. It
>>>> sounds like the coarsening rate is not great enough, so that these
>>>> grids are too large. This can be set using:
>>>>
>>>>   https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCGAMGSetThreshold.html
>>>>
>>>> There is some explanation of this effect on that page. Let us know if
>>>> setting this does not correct the situation.
>>>>
>>>>   Thanks,
>>>>
>>>>      Matt
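
For reference, a minimal sketch of setting the threshold from code,
assuming a PC named "pc" that has already been set to type PCGAMG; the
array-valued call matches the petsc-current (3.10) manual page linked
above, while 3.6 took a single PetscReal instead. The same value can also
be given at run time with the option -pc_gamg_threshold.

/* Sketch only: set GAMG's drop threshold, which controls the coarsening
   rate and hence the coarse-grid sizes (see the manual page linked
   above).  Assumes "pc" is a PC of type PCGAMG; 0.1 is simply the value
   tried elsewhere in this thread, not a general recommendation. */
PetscReal      th[1] = {0.1};
PetscErrorCode ierr;

ierr = PCGAMGSetThreshold(pc, th, 1);CHKERRQ(ierr);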
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"> Let me know
if you need further information.<br>
<br>
Best,<br>
<br>
Myriam Peyrounette<br>
<br>
<br>
-- <br>
Myriam Peyrounette<br>
CNRS/IDRIS - HLST<br>
--<br>
<br>
</blockquote>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>