<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 5, 2019 at 8:06 AM Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Tue, Mar 5, 2019 at 7:14 AM Myriam Peyrounette <<a href="mailto:myriam.peyrounette@idris.fr" target="_blank">myriam.peyrounette@idris.fr</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
>> Hi Matt,
>>
>> I plotted the memory scalings using different threshold values. The two
>> scalings are slightly shifted (by -22 to -88 MB), but this gain is
>> negligible. The 3.6 scaling remains robust, while the 3.10 scaling still
>> deteriorates.
>>
>> Do you have any other suggestion?
>
> Mark, what is the option she can give to output all the GAMG data?

I think we did this and it was fine, or am I getting threads mixed up?

Use -info and grep on GAMG. This will print out the average nnz/row on each
level, which is a way of seeing whether the coarse grids are getting out of
control. But coarse grids are smaller, so they should not be a big deal.

> Also, run using -ksp_view. GAMG will report all the sizes of its grids, so
> it should be easy to see if the coarse grid sizes are increasing, and also
> what the effect of the threshold value is.

The coarse grid "sizes" will go way down; that is what MG does unless
something is very wrong. The nnz/row will go up in many cases. If -ksp_view
prints out the nnz of the operator on each level, then you can compute the
average nnz/row yourself (-info just does that for you).

The only change I can think of in GAMG with respect to coarsening is the
treatment of the square-graph parameter. It used to be a bool (square the
first level or not); now it is an integer q (square the first q levels).

If you suspect GAMG, can you test with hypre?
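
For example, on one of the failing runs (the executable name, core count, and
"<usual options>" below are placeholders; if GAMG sits on the coarse level of
your own multigrid, the -pc_gamg_* and -pc_type options need that solver's
prefix):

    # average nnz/row per GAMG level, as printed by -info
    mpiexec -n 400 ./ex42 <usual options> -info 2>&1 | grep GAMG

    # per-level grid sizes and operator information
    mpiexec -n 400 ./ex42 <usual options> -ksp_view

    # square the graph on the first q levels (here q = 1)
    mpiexec -n 400 ./ex42 <usual options> -pc_gamg_square_graph 1

    # cross-check with BoomerAMG (needs a PETSc build configured with hypre)
    mpiexec -n 400 ./ex42 <usual options> -pc_type hypre -pc_hypre_type boomeramg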

>   Thanks,
>
>      Matt
>
>> Thanks,
>>
>> Myriam
>>
>> On 03/02/19 at 02:27, Matthew Knepley wrote:
>>> On Fri, Mar 1, 2019 at 10:53 AM Myriam Peyrounette via petsc-users
>>> <petsc-users@mcs.anl.gov> wrote:
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hi,<br>
<br>
I used to run my code with PETSc 3.6. Since I upgraded the
PETSc version<br>
to 3.10, this code has a bad memory scaling.<br>
<br>
To report this issue, I took the PETSc script ex42.c and
slightly<br>
modified it so that the KSP and PC configurations are the
same as in my<br>
code. In particular, I use a "personnalised" multi-grid
method. The<br>
modifications are indicated by the keyword "TopBridge" in
the attached<br>
scripts.<br>
>>>>
>>>> To plot the memory (weak) scaling, I ran four calculations for each
>>>> script, with increasing problem sizes and core counts (runs of the form
>>>> sketched after the list):
>>>>
>>>> 1. 100,000 elts on 4 cores
>>>> 2. 1 million elts on 40 cores
>>>> 3. 10 million elts on 400 cores
>>>> 4. 100 million elts on 4,000 cores
>>>>
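>>>> A sketch of one such run (the grid-size arguments are placeholders for
>>>> the actual ex42 options; -log_view and -memory_view are one way to
>>>> obtain the memory figures for the plot):
>>>>
>>>>     # smallest case: about 100,000 elements on 4 cores
>>>>     mpiexec -n 4 ./ex42 <grid options for 100,000 elts> -log_view -memory_view
>>>>
>>>>     # largest case: about 100 million elements on 4,000 cores
>>>>     mpiexec -n 4000 ./ex42 <grid options for 100 million elts> -log_view -memory_view
>>>>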
>>>> The resulting graph is also attached. The scaling using PETSc 3.10
>>>> clearly deteriorates for the large cases, while the one using PETSc 3.6
>>>> is robust.
>>>>
>>>> After a few tests, I found that the scaling is mostly sensitive to the
>>>> use of the AMG method for the coarse grid (line 1780 in
>>>> main_ex42_petsc36.cc). In particular, the performance deteriorates
>>>> strongly when lines 1777 to 1790 (in main_ex42_petsc36.cc) are
>>>> commented out.
>>>>
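>>>> With the standard PCMG options prefix, that coarse-level AMG setup
>>>> corresponds to run-time options along these lines (the prefix is an
>>>> assumption; it depends on how the TopBridge code names its solvers):
>>>>
>>>>     # use GAMG as the preconditioner on the coarse level of the outer multigrid
>>>>     -mg_coarse_ksp_type preonly -mg_coarse_pc_type gamg
>>>>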
>>>> Do you have any idea of what changed between version 3.6 and version
>>>> 3.10 that could cause such a degradation?

>>> I believe the default values for PCGAMG changed between versions. It
>>> sounds like the coarsening rate is not high enough, so these grids are
>>> too large. This can be set using:
>>>
>>>   https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCGAMGSetThreshold.html
>>>
>>> There is some explanation of this effect on that page. Let us know if
>>> setting this does not correct the situation.
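>>>
>>> For example (the value is only illustrative, and the option needs the
>>> prefix of whichever solver owns the GAMG preconditioner):
>>>
>>>     # command-line counterpart of PCGAMGSetThreshold(); try a few values
>>>     -pc_gamg_threshold 0.01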
>>>
>>>   Thanks,
>>>
>>>      Matt
>>>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Let me know if you need further information.<br>
<br>
Best,<br>
<br>
Myriam Peyrounette<br>
<br>
<br>
>>
>> --
>> Myriam Peyrounette
>> CNRS/IDRIS - HLST
>> --
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/