<div dir="ltr">This looks like it might be noisy data. I'd make sure you run each size on the same set of nodes and you might run each job twice (A,B,A,B) in a job script.</div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Apr 10, 2019 at 8:12 AM Myriam Peyrounette via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">

On Wed, Apr 10, 2019 at 8:12 AM Myriam Peyrounette via petsc-users <petsc-users@mcs.anl.gov> wrote:

Here is the time weak scaling from the same study. The 3.10.2 version seems to be much more stable with regard to the execution time, but it is not necessarily faster for "large scale" simulations (problem size = 1e8).

I didn't use -mat_freeintermediatedatastructures. I tested it this morning, and the solver diverges when that option is enabled (KSPConvergedReason -3).
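
(For reference, reason -3 corresponds to KSP_DIVERGED_ITS, i.e. the iteration limit was hit. A minimal sketch of a rerun that makes KSP report this explicitly, assuming the binary is the modified ex42, the rank count is illustrative, and the solver uses the default option prefix:)

    # Rerun with the option and have PETSc print why the solve stopped,
    # together with the true residual history.
    mpiexec -n 4 ./ex42 \
        -mat_freeintermediatedatastructures \
        -ksp_converged_reason \
        -ksp_monitor_true_residual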

Myriam

On 04/09/19 at 17:23, Zhang, Hong wrote:

Myriam,

Do you have an 'execution time scalability' plot? Did you use '-mat_freeintermediatedatastructures' for PETSc 3.10.2?

We made several computational optimizations to MatPtAP(), which might trade memory for speed. It would be helpful to see a complete comparison.

Hong

On Tue, Apr 9, 2019 at 7:43 AM Myriam Peyrounette via petsc-users <petsc-users@mcs.anl.gov> wrote:

Hi,

In my first mail, I provided a memory scaling for PETSc example #42. You'll find attached the main files used (one for PETSc 3.6.4, one for PETSc 3.10.2) and the corresponding memory scaling.

In the main files, I modified the solver/preconditioner so that it corresponds to my problem. You'll find the modifications by searching for the keyword "TopBridge". In particular, I use GAMG.

Note that the example solves the Stokes equations, so using GAMG may not be well suited. However, the memory gap appears anyway, and that is the point; it does not matter whether the results are correct.
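
(As a side note, GAMG can also be selected at run time rather than by editing the source; a minimal sketch follows, with the caveat that the option names assume the default solver prefix, which the modified ex42 may not use:)

    # Select FGMRES with smoothed-aggregation GAMG from the command line and
    # collect performance/memory logs for the version comparison.
    mpiexec -n 4 ./ex42 \
        -ksp_type fgmres \
        -pc_type gamg \
        -pc_gamg_agg_nsmooths 1 \
        -ksp_monitor \
        -log_view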

Are these scripts useful for you? Let me know.

Thanks,

Myriam

On 04/04/19 at 00:09, Jed Brown wrote:
> Myriam Peyrounette via petsc-users <petsc-users@mcs.anl.gov> writes:
>
>> Hi all,
>>
>> For your information, you'll find attached the comparison of the weak
>> memory scalings when using:
>>
>> - PETSc 3.6.4 (reference)
>> - PETSc 3.10.4 without specific options
>> - PETSc 3.10.4 with the three scalability options you mentioned
>>
>> Using the scalability options does improve the memory scaling. However,
>> the 3.6 version still has a better one...
>
> Yes, this still looks significant. Is this an effect we can still
> reproduce with a PETSc example and/or using a memory profiler (such as
> massif or gperftools)? I think it's important for us to narrow down
> what causes this difference (looks like almost 2x on your 1e8 problem
> size) so we can fix it.
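
(A minimal sketch of the kind of heap profiling Jed suggests, using valgrind's massif; the binary name, rank count, and options are placeholders, and PETSc's -memory_view summary, where available, is a cheaper first check:)

    # Run the ranks under valgrind's massif heap profiler, then inspect the
    # per-rank heap profiles (one massif.out.<pid> file per rank).
    mpiexec -n 4 valgrind --tool=massif ./ex42 -pc_type gamg -memory_view
    ms_print massif.out.12345        # pick one of the per-rank output files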

--
Myriam Peyrounette
CNRS/IDRIS - HLST
--