<div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 19, 2024 at 5:06 PM Ashish Patel <<a href="mailto:ashish.patel@ansys.com" target="_blank">ashish.patel@ansys.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>
<div dir="ltr">
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Hi Matt,</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;color:rgb(0,0,0)">
<span style="font-size:12pt">That seems to be a PCSetup specific outcome. If I use option2 where "-pc_gamg_aggressive_square_graph false" then
</span><span style="font-size:16px;background-color:rgb(255,255,255)">PetscMatStashSpaceGet</span><span style="font-size:12pt"> for rank0 falls down to 40 MB, with hypre (option4) I don't get that function at all. Have attached logs for both.</span></div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br></div></div></div></blockquote><div><br></div><div>The "square" graph uses Mat-Mat-Mult on the graph of the matrix, and hypre does not.</div><div><div>The coarse-grid-space "smoother" (SA) also uses a Mat-Mat-Mult, and hypre's AMG algorithm does not.</div></div><div>I suspect hypre has a custom coarsener that coarsens "aggressively" and is memory-efficient.</div><div>Mat-Mat-Mult is brute force and slow on CPU, but not bad on GPUs as I recall, which is why I put in the new aggressive coarsener last year that uses a different algorithm.<br></div><div><br></div><div>And with the square graph:</div><div><br></div><div><font face="monospace">[0] 2 7 694 605 360 MatStashSortCompress_Private()<br></font></div><div><font face="monospace">[1] 2 100 354 160 MatStashSortCompress_Private()<br></font></div><div>The rank-0 number is enormous. I imagine MatStashSortCompress_Private could use some love.</div><div><br></div><div>The "GAMG" metadata looks fine; the grid complexities are a bit high, but that is a detail.</div><div>These look like linear tetrahedral elements, which coarsen a bit slowly even with aggressive coarsening, but it's not that bad.</div><div>Linear tets are pretty bad for elasticity, and P2 would coarsen faster naturally.</div><div><br></div><div>There is some load imbalance in the matrix memory:</div><div><font face="monospace">[0] 40 1 218 367 920 MatSeqAIJSetPreallocation_SeqAIJ()</font></div><div><font face="monospace">[1] 40 677 649 104 MatSeqAIJSetPreallocation_SeqAIJ()</font></div><div>The coarse grid is on process 0, but it is not that large, maybe 5 MB. 
</div><div>Not sure what that is about.</div><div><br></div><div>It is not clear to me why the Mat-Mat in the square graph is such a problem while the Mat-Mat-Mat in the RAP and the Mat-Mat in SA are not catastrophic, but maybe I am not interpreting this correctly.</div><div>Regardless, it looks like the Mat-Mat methods could use some attention.</div><div><br></div><div>Thanks,</div><div>Mark</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div dir="ltr"><div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Thanks,</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Ashish</div>
<div id="m_4645911151328523281m_2097656773122249580appendonsend"></div>
<hr style="display:inline-block;width:98%">
<div id="m_4645911151328523281m_2097656773122249580divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>><br>
<b>Sent:</b> Friday, April 19, 2024 1:04 PM<br>
<b>To:</b> Ashish Patel <<a href="mailto:ashish.patel@ansys.com" target="_blank">ashish.patel@ansys.com</a>><br>
<b>Cc:</b> Jed Brown <<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>>; Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>>; PETSc users list <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>>; Scott McClennan <<a href="mailto:scott.mcclennan@ansys.com" target="_blank">scott.mcclennan@ansys.com</a>><br>
<b>Subject:</b> Re: [petsc-users] About recent changes in GAMG</font>
<div> </div>
</div>
<div>
<div style="border:1pt solid rgb(156,101,0);padding:2pt">
<p style="line-height:12pt;background:rgb(255,235,156)"><b><span lang="EN-US" style="font-size:10pt;color:red">[External Sender]</span></b></p>
</div>
<div>
<div dir="ltr">
<div dir="ltr">On Fri, Apr 19, 2024 at 3:52 PM Ashish Patel <<a href="mailto:ashish.patel@ansys.com" target="_blank">ashish.patel@ansys.com</a>> wrote:<br>
</div>
<div>
<blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div style="font-size:1px;color:rgb(255,255,255);line-height:1px;height:0px;max-height:0px;opacity:0;overflow:hidden;display:none">
Hi Jed, VmRss is on a higher side and seems to match what PetscMallocGetMaximumUsage is reporting. HugetlbPages was 0 for me. Mark, running without the near nullspace also gives similar results. I have attached the malloc_view and gamg info
</div>
<div style="font-size:1px;color:rgb(255,255,255);line-height:1px;height:0px;max-height:0px;opacity:0;overflow:hidden;display:none">
ZjQcmQRYFpfptBannerStart</div>
<u></u>
<div dir="ltr" id="m_4645911151328523281m_2097656773122249580x_m_-407535567221798721pfptBannerza3uj56" style="display:block;text-align:left;margin:16px 0px;padding:8px 16px;border-radius:4px;min-width:200px;background-color:rgb(208,216,220);border-top:4px solid rgb(144,164,174)">
<div id="m_4645911151328523281m_2097656773122249580x_m_-407535567221798721pfptBannerza3uj56" style="float:left;display:block;margin:0px 0px 1px;max-width:600px">
<div id="m_4645911151328523281m_2097656773122249580x_m_-407535567221798721pfptBannerza3uj56" style="display:block;background-color:rgb(208,216,220);color:rgb(0,0,0);font-family:Arial,sans-serif;font-weight:bold;font-size:14px;line-height:18px">
This Message Is From an External Sender </div>
<div id="m_4645911151328523281m_2097656773122249580x_m_-407535567221798721pfptBannerza3uj56" style="font-weight:normal;display:block;background-color:rgb(208,216,220);color:rgb(0,0,0);font-family:Arial,sans-serif;font-size:12px;line-height:18px;margin-top:2px">
This message came from outside your organization. </div>
</div>
<div style="height:0px;clear:both;display:block;line-height:0;font-size:0.01px">
</div>
</div>
<u></u>
<div style="font-size:1px;color:rgb(255,255,255);line-height:1px;height:0px;max-height:0px;opacity:0;overflow:hidden;display:none">
ZjQcmQRYFpfptBannerEnd</div>
<div dir="ltr">
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Hi Jed,</div>
<div><span style="font-size:14.6667px;color:rgb(36,36,36);background-color:rgb(255,255,255)">VmRss</span><span style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> is on a higher
side and seems to match what </span><span style="font-size:14.6667px;color:rgb(36,36,36);background-color:rgb(255,255,255)">PetscMallocGetMaximumUsage</span><span style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> is
reporting. </span><span style="font-size:14.6667px;color:rgb(36,36,36);background-color:rgb(255,255,255)">HugetlbPages</span><span style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> was
0 for me.</span></div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Mark, running without the near nullspace also gives similar results. I have attached the malloc_view and gamg info for the serial and 2-core runs. Some of the standout functions on rank 0 for the parallel run seem to be</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
5.3 GB MatSeqAIJSetPreallocation_SeqAIJ<br>
7.7 GB MatStashSortCompress_Private</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
10.1 GB PetscMatStashSpaceGet</div>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>This is strange. We would expect the MatStash to be much smaller than the allocation, but it is larger.</div>
<div>That suggests that you are sending a large number of off-process values. Is this by design?</div>
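The stash behavior Matt is asking about can be illustrated with a minimal sketch. This is my own example, not code from this thread, and it assumes a recent PETSc build with MPI: any MatSetValues entry whose row lies outside the calling rank's ownership range is buffered in the MatStash and only communicated during MatAssemblyBegin/End, so a multi-GB stash implies a very large volume of off-process insertions.

```c
/* Sketch: where MatStash memory comes from.  Entries for rows this rank
 * does not own are parked in the stash until assembly ships them. */
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       rstart, rend;
  const PetscInt N = 100;
  PetscMPIInt    rank;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

  /* Parallel AIJ matrix with modest preallocation. */
  PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, N, N,
                         5, NULL, 5, NULL, &A));
  PetscCall(MatGetOwnershipRange(A, &rstart, &rend));

  /* Locally owned rows: these go straight into preallocated storage. */
  for (PetscInt i = rstart; i < rend; i++) PetscCall(MatSetValue(A, i, i, 2.0, ADD_VALUES));

  /* Off-process row: on more than one rank, row N-1 belongs to the last
   * rank, so rank 0's contribution sits in the MatStash until assembly. */
  if (rank == 0) PetscCall(MatSetValue(A, N - 1, N - 1, 1.0, ADD_VALUES));

  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY)); /* stash traffic happens here */
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}
```

Run on two or more ranks with -malloc_view (as in this thread), the stash allocations should show up under the PetscMatStashSpace routines; assembling only locally owned rows keeps that memory near zero.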
<div><br>
</div>
<div> Thanks,</div>
<div><br>
</div>
<div> Matt</div>
<div> </div>
<blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div dir="ltr">
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
7.7 GB PetscSegBufferAlloc_Private</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
malloc_view also says the following<br>
[0] Maximum memory PetscMalloc()ed 32387548912 maximum size of entire process 8270635008</div>
<div><span style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">which fits the
</span><span style="font-size:14.6667px;color:rgb(36,36,36);background-color:rgb(255,255,255)">PetscMallocGetMaximumUsage</span><span style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> >
</span><span style="font-size:14.6667px;color:rgb(36,36,36);background-color:rgb(255,255,255)">PetscMemoryGetMaximumUsage</span><span style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)"> output.</span></div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Let me know if you need some other info.</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Thanks,</div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
Ashish<span style="display:inline-block"><span></span></span></div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<span style="display:inline-block"><span></span></span><span style="display:inline-block"><span></span></span></div>
<div style="font-family:Aptos,Aptos_EmbeddedFont,Aptos_MSFontService,Calibri,Helvetica,sans-serif;font-size:12pt;color:rgb(0,0,0)">
<br>
</div>
<div id="m_4645911151328523281m_2097656773122249580x_m_-407535567221798721appendonsend"></div>
<hr style="display:inline-block;width:98%">
<div id="m_4645911151328523281m_2097656773122249580x_m_-407535567221798721divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" color="#000000" style="font-size:11pt"><b>From:</b> Jed Brown <<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>><br>
<b>Sent:</b> Thursday, April 18, 2024 2:16 PM<br>
<b>To:</b> Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>>; Ashish Patel <<a href="mailto:ashish.patel@ansys.com" target="_blank">ashish.patel@ansys.com</a>>; PETSc users list <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>
<b>Cc:</b> Scott McClennan <<a href="mailto:scott.mcclennan@ansys.com" target="_blank">scott.mcclennan@ansys.com</a>><br>
<b>Subject:</b> Re: [petsc-users] About recent changes in GAMG</font>
<div> </div>
</div>
<div><font size="2"><span style="font-size:11pt">
<div>
<br>
Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> writes:<br>
<br>
>>> Yea, my interpretation of these methods is also that<br>
>>> "PetscMemoryGetMaximumUsage" should be >= "PetscMallocGetMaximumUsage".<br>
>>> But you are seeing the opposite.<br>
><br>
><br>
> We are using PETSc main and have found a case where memory consumption<br>
> explodes in parallel.<br>
> Also, we see a non-negligible difference between PetscMemoryGetMaximumUsage()<br>
> and PetscMallocGetMaximumUsage().<br>
> Running in serial through /usr/bin/time, the max. resident set size matches<br>
> the PetscMallocGetMaximumUsage() result.<br>
> I would have expected it to match PetscMemoryGetMaximumUsage() instead.<br>
<br>
PetscMemoryGetMaximumUsage uses procfs (if PETSC_USE_PROCFS_FOR_SIZE, which should be typical on Linux anyway) in PetscHeaderDestroy to update a static variable. If you haven't destroyed an object yet, its value will be nonsense.<br>
<br>
If your program is using huge pages, it might also be inaccurate (I don't know). You can look at /proc/<pid>/statm to see what PETSc is reading (second field, which is number of pages in RSS). You can also look at the VmRss field in /proc/<pid>/status, which
reads in kB. See also the HugetlbPages field in /proc/<pid>/status.<br>
<br>
<a href="https://urldefense.us/v3/__https://www.kernel.org/doc/Documentation/filesystems/proc.txt__;!!G_uCfscf7eWS!cjjzIqrR0_JSzQFZMrxX9GzpJEPHSN5oVeNexSd2AKNVhVFmsrJy-sKYRd0VFTzEk1LB727T1dFbrhKJ208CIzji9_k$" target="_blank">https://www.kernel.org/doc/Documentation/filesystems/proc.txt</a><br>
<br>
If your app is swapping, these will be inaccurate because swapped memory is not resident. We don't use the first field (VmSize) because there are reasons why programs sometimes map much more memory than they'll actually use, making such numbers irrelevant for
most purposes.<br>
<br>
><br>
><br>
> (columns: PetscMemoryGetMaximumUsage | PetscMallocGetMaximumUsage | Time)<br>
> Serial + Option 1: 4.8 GB | 7.4 GB | 112 sec<br>
> 2 core + Option 1: 15.2 GB | 45.5 GB | 150 sec<br>
> Serial + Option 2: 3.1 GB | 3.8 GB | 167 sec<br>
> 2 core + Option 2: 13.1 GB | 17.4 GB | 89 sec<br>
> Serial + Option 3: 4.7 GB | 5.2 GB | 693 sec<br>
> 2 core + Option 3: 23.2 GB | 26.4 GB | 411 sec<br>
><br>
><br>
> On Thu, Apr 18, 2024 at 4:13 PM Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> wrote:<br>
><br>
>> The next thing you might try is not using the null space argument.<br>
>> Hypre does not use it, but GAMG does.<br>
>> You could also run with -malloc_view to see some info on mallocs. It is<br>
>> probably in the Mat objects.<br>
>> You can also run with "-info" and grep on GAMG in the output and send that.<br>
>><br>
>> Mark<br>
>><br>
>> On Thu, Apr 18, 2024 at 12:03 PM Ashish Patel <<a href="mailto:ashish.patel@ansys.com" target="_blank">ashish.patel@ansys.com</a>><br>
>> wrote:<br>
>><br>
>>> Hi Mark,<br>
>>><br>
>>> Thanks for your response and suggestion. With hypre both memory and time<br>
>>> looks good, here is the data for that<br>
>>><br>
>>> (columns: PetscMemoryGetMaximumUsage | PetscMallocGetMaximumUsage | Time)<br>
>>> Serial + Option 4: 5.55 GB | 5.17 GB | 15.7 sec<br>
>>> 2 core + Option 4: 5.85 GB | 4.69 GB | 21.9 sec<br>
>>><br>
>>> Option 4<br>
>>> mpirun -n _ ./ex1 -A_name matrix.dat -b_name vector.dat -n_name<br>
>>> _null_space.dat -num_near_nullspace 6 -ksp_type cg -pc_type hypre<br>
>>> -pc_hypre_boomeramg_strong_threshold 0.9 -ksp_view -log_view<br>
>>> -log_view_memory -info :pc<br>
>>><br>
>>> I am also attaching a standalone program to reproduce these options and<br>
>>> the link to matrix, rhs and near null spaces (serial.tar 2.xz<br>
>>> <<a></a><a href="https://urldefense.us/v3/__https://ansys-my.sharepoint.com/:u:/p/ashish_patel/EbUM5Ahp-epNi4xDxR9mnN0B1dceuVzGhVXQQYJzI5Py2g__;!!G_uCfscf7eWS!ar7t_MsQ-W6SXcDyEWpSDZP_YngFSqVsz2D-8chGJHSz7IZzkLBvN4UpJ1GXrRBGyhEHqmDUQGBfqTKf5x_BPXo$" target="_blank">https://urldefense.us/v3/__https://ansys-my.sharepoint.com/:u:/p/ashish_patel/EbUM5Ahp-epNi4xDxR9mnN0B1dceuVzGhVXQQYJzI5Py2g__;!!G_uCfscf7eWS!ar7t_MsQ-W6SXcDyEWpSDZP_YngFSqVsz2D-8chGJHSz7IZzkLBvN4UpJ1GXrRBGyhEHqmDUQGBfqTKf5x_BPXo$</a>
><br>
>>> ) if you would like to try as well. Please let me know if you have<br>
>>> trouble accessing the link.<br>
>>><br>
>>> Ashish<br>
>>> ------------------------------<br>
>>> *From:* Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>
>>> *Sent:* Wednesday, April 17, 2024 7:52 PM<br>
>>> *To:* Jeremy Theler (External) <<a href="mailto:jeremy.theler-ext@ansys.com" target="_blank">jeremy.theler-ext@ansys.com</a>><br>
>>> *Cc:* Ashish Patel <<a href="mailto:ashish.patel@ansys.com" target="_blank">ashish.patel@ansys.com</a>>; Scott McClennan <<br>
>>> <a href="mailto:scott.mcclennan@ansys.com" target="_blank">scott.mcclennan@ansys.com</a>><br>
>>> *Subject:* Re: About recent changes in GAMG<br>
>>><br>
>>><br>
>>><br>
>>><br>
>>> On Wed, Apr 17, 2024 at 7:20 AM Jeremy Theler (External) <<br>
>>> <a href="mailto:jeremy.theler-ext@ansys.com" target="_blank">jeremy.theler-ext@ansys.com</a>> wrote:<br>
>>><br>
>>> Hey Mark. Long time no see! How are thing going over there?<br>
>>><br>
>>> We are using PETSc main and have found a case where memory consumption<br>
>>> explodes in parallel.<br>
>>> Also, we see a non-negligible difference between<br>
>>> PetscMemoryGetMaximumUsage() and PetscMallocGetMaximumUsage().<br>
>>> Running in serial through /usr/bin/time, the max. resident set size<br>
>>> matches the PetscMallocGetMaximumUsage() result.<br>
>>> I would have expected it to match PetscMemoryGetMaximumUsage() instead.<br>
>>><br>
>>><br>
>>> Yea, my interpretation of these methods is also that "Memory" should be<br>
>>> >= "Malloc". But you are seeing the opposite.<br>
>>><br>
>>> I don't have any idea what is going on with your big memory penalty going<br>
>>> from 1 to 2 cores on this test, but the first thing to do is try other<br>
>>> solvers and see how that behaves. Hypre in particular would be a good thing<br>
>>> to try because it is a similar algorithm.<br>
>>><br>
>>> Mark<br>
>>><br>
>>><br>
>>><br>
>>> The matrix size is around 1 million. We can share it with you if you<br>
>>> want, along with the RHS and the 6 near nullspace vectors and a modified<br>
>>> ex1.c which will read these files and show the following behavior.<br>
>>><br>
>>> Observations using latest main for elastic matrix with a block size of 3<br>
>>> (after removing bonded glue-like DOFs with direct elimination) and near<br>
>>> null space provided<br>
>>><br>
>>> - Big memory penalty going from serial to parallel (2 core)<br>
>>> - Big difference between PetscMemoryGetMaximumUsage and<br>
>>> PetscMallocGetMaximumUsage, why?<br>
>>> - The memory penalty decreases with -pc_gamg_aggressive_square_graph false<br>
>>> (option 2)<br>
>>> - The difference between PetscMemoryGetMaximumUsage and<br>
>>> PetscMallocGetMaximumUsage reduces when -pc_gamg_threshold is<br>
>>> increased from 0 to 0.01 (option 3), though the solve time increases a lot.<br>
>>><br>
>>><br>
>>><br>
>>><br>
>>><br>
>>> (columns: PetscMemoryGetMaximumUsage | PetscMallocGetMaximumUsage | Time)<br>
>>> Serial + Option 1: 4.8 GB | 7.4 GB | 112 sec<br>
>>> 2 core + Option 1: 15.2 GB | 45.5 GB | 150 sec<br>
>>> Serial + Option 2: 3.1 GB | 3.8 GB | 167 sec<br>
>>> 2 core + Option 2: 13.1 GB | 17.4 GB | 89 sec<br>
>>> Serial + Option 3: 4.7 GB | 5.2 GB | 693 sec<br>
>>> 2 core + Option 3: 23.2 GB | 26.4 GB | 411 sec<br>
>>><br>
>>> Option 1<br>
>>> mpirun -n _ ./ex1 -A_name matrix.dat -b_name vector.dat -n_name<br>
>>> _null_space.dat -num_near_nullspace 6 -ksp_type cg -pc_type gamg<br>
>>> -pc_gamg_coarse_eq_limit 1000 -ksp_view -log_view -log_view_memory<br>
>>> -pc_gamg_aggressive_square_graph true -pc_gamg_threshold 0.0 -info :pc<br>
>>><br>
>>> Option 2<br>
>>> mpirun -n _ ./ex1 -A_name matrix.dat -b_name vector.dat -n_name<br>
>>> _null_space.dat -num_near_nullspace 6 -ksp_type cg -pc_type gamg<br>
>>> -pc_gamg_coarse_eq_limit 1000 -ksp_view -log_view -log_view_memory<br>
>>> -pc_gamg_aggressive_square_graph *false* -pc_gamg_threshold 0.0 -info :pc<br>
>>><br>
>>> Option 3<br>
>>> mpirun -n _ ./ex1 -A_name matrix.dat -b_name vector.dat -n_name<br>
>>> _null_space.dat -num_near_nullspace 6 -ksp_type cg -pc_type gamg<br>
>>> -pc_gamg_coarse_eq_limit 1000 -ksp_view -log_view -log_view_memory<br>
>>> -pc_gamg_aggressive_square_graph true -pc_gamg_threshold *0.01* -info :pc<br>
>>> ------------------------------<br>
>>> *From:* Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>
>>> *Sent:* Tuesday, November 14, 2023 11:28 AM<br>
>>> *To:* Jeremy Theler (External) <<a href="mailto:jeremy.theler-ext@ansys.com" target="_blank">jeremy.theler-ext@ansys.com</a>><br>
>>> *Cc:* Ashish Patel <<a href="mailto:ashish.patel@ansys.com" target="_blank">ashish.patel@ansys.com</a>><br>
>>> *Subject:* Re: About recent changes in GAMG<br>
>>><br>
>>><br>
>>> Sounds good,<br>
>>><br>
>>> I think the not-square-graph "aggressive" coarsening is the only issue that I<br>
>>> see and you can fix this by using:<br>
>>><br>
>>> -mat_coarsen_type mis<br>
>>><br>
>>> Aside, '-pc_gamg_aggressive_square_graph' should do it also, and you can<br>
>>> use both and they will be ignored in earlier versions.<br>
>>><br>
>>> If you see a difference then the first thing to do is run with '-info<br>
>>> :pc' and send that to me (you can grep on 'GAMG' and send that if you like<br>
>>> to reduce the data).<br>
>>><br>
>>> Thanks,<br>
>>> Mark<br>
>>><br>
>>><br>
>>> On Tue, Nov 14, 2023 at 8:49 AM Jeremy Theler (External) <<br>
>>> <a href="mailto:jeremy.theler-ext@ansys.com" target="_blank">jeremy.theler-ext@ansys.com</a>> wrote:<br>
>>><br>
>>> Hi Mark.<br>
>>> Thanks for reaching out. For now, we are going to stick to 3.19 for our<br>
>>> production code because the changes in 3.20 impact in our tests in<br>
>>> different ways (some of them perform better, some perform worse).<br>
>>> I now switched to another task about investigating structural elements in<br>
>>> DMplex.<br>
>>> I'll go back to analyzing the new changes in GAMG in a couple of weeks so<br>
>>> we can then see if we upgrade to 3.20 or we wait until 3.21.<br>
>>><br>
>>> Thanks for your work and your kindness.<br>
>>> --<br>
>>> jeremy<br>
>>> ------------------------------<br>
>>> *From:* Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>
>>> *Sent:* Tuesday, November 14, 2023 9:35 AM<br>
>>> *To:* Jeremy Theler (External) <<a href="mailto:jeremy.theler-ext@ansys.com" target="_blank">jeremy.theler-ext@ansys.com</a>><br>
>>> *Cc:* Ashish Patel <<a href="mailto:ashish.patel@ansys.com" target="_blank">ashish.patel@ansys.com</a>><br>
>>> *Subject:* Re: About recent changes in GAMG<br>
>>><br>
>>><br>
>>> Hi Jeremy,<br>
>>><br>
>>> Just following up.<br>
>>> I appreciate your digging into performance regressions in GAMG.<br>
>>> AMG is really a pain sometimes and we want GAMG to be solid, at least for<br>
>>> mainstream options, and your efforts are appreciated.<br>
>>> So feel free to start this discussion up.<br>
>>><br>
>>> Thanks,<br>
>>> Mark<br>
>>><br>
>>> On Wed, Oct 25, 2023 at 9:52 PM Jeremy Theler (External) <<br>
>>> <a href="mailto:jeremy.theler-ext@ansys.com" target="_blank">jeremy.theler-ext@ansys.com</a>> wrote:<br>
>>><br>
>>> Dear Mark<br>
>>><br>
>>> Thanks for the follow up and sorry for the delay.<br>
>>> I'm taking some days off. I'll be back to full throttle next week so can<br>
>>> continue the discussion about these changes in GAMG.<br>
>>><br>
>>> Regards,<br>
>>> Jeremy<br>
>>><br>
>>> ------------------------------<br>
>>> *From:* Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>><br>
>>> *Sent:* Wednesday, October 18, 2023 9:15 AM<br>
>>> *To:* Jeremy Theler (External) <<a href="mailto:jeremy.theler-ext@ansys.com" target="_blank">jeremy.theler-ext@ansys.com</a>>; PETSc<br>
>>> users list <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>><br>
>>> *Cc:* Ashish Patel <<a href="mailto:ashish.patel@ansys.com" target="_blank">ashish.patel@ansys.com</a>><br>
>>> *Subject:* Re: About recent changes in GAMG<br>
>>><br>
>>><br>
>>> Hi Jeremy,<br>
>>><br>
>>> I hope you don't mind putting this on the list (w/o data), but this is<br>
>>> documentation and you are the second user that found regressions.<br>
>>> Sorry for the churn.<br>
>>><br>
>>> There is a lot here so we can iterate, but here is a pass at your<br>
>>> questions.<br>
>>><br>
>>> *** Using MIS-2 instead of square graph was motivated by setup<br>
>>> cost/performance but on GPUs with some recent fixes in Kokkos (in a branch)<br>
>>> square graph seems OK.<br>
>>> My experience was that square graph is better in terms of quality and we<br>
>>> have a power user, like you all, that found this also.<br>
>>> So I switched the default back to square graph.<br>
>>><br>
>>> Interesting that you found that MIS-2 (new method) could be faster, but<br>
>>> it might be because the two methods coarsen at different rates and that can<br>
>>> make a big difference.<br>
>>> (the way to test would be to adjust parameters to get similar coarsen<br>
>>> rates, but I digress)<br>
>>> It's hard to understand the differences between these two methods in<br>
>>> terms of aggregate quality so we need to just experiment and have options.<br>
>>><br>
>>> *** As far as your thermal problem. There was a complaint that the eigen<br>
>>> estimates for chebyshev smoother were not recomputed for nonlinear problems<br>
>>> and I added an option to do that and turned it on by default:<br>
>>> Use '-pc_gamg_recompute_esteig false' to get back to the original.<br>
>>> (I should have turned it off by default)<br>
>>><br>
>>> Now, if your problem is symmetric and you use CG to compute the eigen<br>
>>> estimates there should be no difference.<br>
>>> If you use CG to compute the eigen estimates in GAMG (and have GAMG give<br>
>>> them to cheby, the default), note that when you recompute the eigen estimates the<br>
>>> cheby eigen estimator is used and that will use gmres by default unless you<br>
>>> set the SPD property in your matrix.<br>
>>> So if you set '-pc_gamg_esteig_ksp_type cg' you want to also set<br>
>>> '-mg_levels_esteig_ksp_type cg' (verify with -ksp_view and -options_left)<br>
>>> CG is a much better estimator for SPD.<br>
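>>> Putting those together, a hedged example option set for an SPD problem<br>
>>> (verify the result with -ksp_view and -options_left) would be:<br>
>>><br>

```shell
# Use CG for the eigen estimate both in GAMG and in the cheby re-estimate:
-pc_gamg_esteig_ksp_type cg
-mg_levels_esteig_ksp_type cg
# Or skip the (new) recomputation entirely and keep the old behavior:
-pc_gamg_recompute_esteig false
```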
>>><br>
>>> And I found that the cheby eigen estimator uses a LAPACK *eigen* method<br>
>>> to compute the eigen bounds, while GAMG uses a *singular value* method.<br>
>>> The two give very different results on the lid driven cavity test (ex19).<br>
>>> eigen is lower, which is safer but not optimal if it is too low.<br>
>>> I have a branch to have cheby use the singular value method, but I don't<br>
>>> plan on merging it (enough churn and I don't understand these differences).<br>
>>><br>
>>> *** '-pc_gamg_low_memory_threshold_filter false' recovers the old<br>
>>> filtering method.<br>
>>> This is the default now because there is a bug in the (new) low memory<br>
>>> filter.<br>
>>> This bug is very rare and catastrophic.<br>
>>> We are working on it and will turn it on by default when it's fixed.<br>
>>> This does not affect the semantics of the solver, just work and memory<br>
>>> complexity.<br>
>>><br>
>>> *** As for tet4 vs tet10, I would guess that tet4 wants more<br>
>>> aggressive coarsening.<br>
>>> The default is to do aggressive on one (1) level.<br>
>>> You might want more levels for tet4.<br>
>>> And the new MIS-k coarsening can use any k (default is 2) with<br>
>>> '-mat_coarsen_misk_distance k' (e.g., k=3).<br>
>>> I have not added hooks to have a more complex schedule to specify the<br>
>>> method on each level.<br>
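>>> So for tet4 a hedged starting point might look like the following<br>
>>> (-pc_gamg_aggressive_coarsening is my assumption for the option that sets<br>
>>> the number of aggressive levels, so confirm it with -help):<br>
>>><br>

```shell
# Do aggressive coarsening on 2 levels instead of the default 1
# (assumed option name; check -help / -options_left):
-pc_gamg_aggressive_coarsening 2
# Use MIS-3 instead of the default MIS-2:
-mat_coarsen_misk_distance 3
```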
>>><br>
>>> Thanks,<br>
>>> Mark<br>
>>><br>
>>> On Tue, Oct 17, 2023 at 9:33 PM Jeremy Theler (External) <<br>
>>> <a href="mailto:jeremy.theler-ext@ansys.com" target="_blank">jeremy.theler-ext@ansys.com</a>> wrote:<br>
>>><br>
>>> Hey Mark<br>
>>><br>
>>> Regarding the changes in the coarsening algorithm in 3.20 with respect to<br>
>>> 3.19: in general we see that for some problems the MIS strategy gives<br>
>>> overall performance that is slightly better, and for others slightly worse,<br>
>>> than the "baseline" from 3.19.<br>
>>> We also saw that current main has switched back to the old square<br>
>>> coarsening algorithm by default, which again, in some cases is better and<br>
>>> in others is worse than 3.19 without any extra command-line option.<br>
>>><br>
>>> Now what seems weird to us is that we have a test case, a heat<br>
>>> conduction problem with radiation boundary conditions (so it is nonlinear)<br>
>>> using tet10, and we see<br>
>>><br>
>>> 1. that in parallel v3.20 is way worse than v3.19, although the<br>
>>> memory usage is similar<br>
>>> 2. that PETSc main (with no extra flags, just the defaults) recovers<br>
>>> the 3.19 performance, but memory usage is significantly larger<br>
>>><br>
>>><br>
>>> I tried using the -pc_gamg_low_memory_threshold_filter flag and the<br>
>>> results were the same.<br>
>>><br>
>>> Find attached the log and snes views of 3.19, 3.20 and main using 4 MPI<br>
>>> ranks.<br>
>>> Is there any explanation about these two points we are seeing?<br>
>>> Another weird finding is that if we use tet4 instead of tet10, v3.20 is<br>
>>> only 10% slower than the other two, and main does not need more memory<br>
>>> than they do.<br>
>>><br>
>>> BTW, I have dozens of other log view outputs comparing 3.19, 3.20 and<br>
>>> main, should you be interested.<br>
>>><br>
>>> Let me know if it is better to move this discussion into the PETSc<br>
>>> mailing list.<br>
>>><br>
>>> Regards,<br>
>>> jeremy theler<br>
>>><br>
>>><br>
>>><br>
</div>
</span></font></div>
</div>
</div>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
<span>-- </span><br>
<div dir="ltr">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener</div>
<div><br>
</div>
<div><a href="https://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div></blockquote></div></div>