<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body>
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">Matt:<br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div dir="ltr">
<div>Does anyone know how to profile memory usage?</div>
<div></div>
</div>
</div>
</blockquote>
<div><br>
</div>
<div>The best serial way is to use Massif, which is part of valgrind. I think it might work in parallel if you</div>
<div>only look at one process at a time.</div>
</div>
</div>
</blockquote>
<div> </div>
<div>Can you give an example of using Massif?</div>
<div>For example, how to use it on petsc/src/ksp/ksp/examples/tutorials/ex56.c with np=8?</div>
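<div><br>
</div>
<div>For reference, my best guess would be something like the following (untested on my side, and the Massif options are only a guess):</div>
<div><br>
</div>
<div>mpiexec -n 8 valgrind --tool=massif --massif-out-file=massif.out.%p ./ex56 [the usual ex56 options]</div>
<div>ms_print massif.out.&lt;pid&gt;   (one output file per process; look at one process at a time)</div>
<div><br>
</div>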
<div>Hong</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div class="gmail_quote">
<div> </div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div dir="ltr">
<div>Hong</div>
<div><br>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">Thanks, Hong,
<div><br>
</div>
<div>I just briefly went through the code. I was wondering whether it is possible to destroy "c-&gt;ptap" (which caches a lot of intermediate data) to release the memory after the coarse matrix is assembled. I understand you may still want to reuse these data structures by default, but for my simulation the preconditioner is fixed and there is no reason to keep "c-&gt;ptap". </div>
</div>
</div>
</div>
</blockquote>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div dir="ltr">
<div dir="ltr">
<div dir="ltr">
<div><br>
</div>
<div>It would be great if we could have this as optional functionality.</div>
<div><br>
</div>
<div>Fande Kong,</div>
</div>
</div>
</div>
<br>
<div class="gmail_quote">
<div dir="ltr">On Thu, Dec 20, 2018 at 9:45 PM Zhang, Hong <<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<div>
<div dir="ltr">
<div dir="ltr">
<div>We use the nonscalable implementation by default and switch to the scalable one for matrices over finer grids. You may use the option '-matptap_via scalable' to force the scalable PtAP implementation for all PtAP calls. Let me know if it works. </div>
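<div><br>
</div>
<div>For example (illustrative command line; keep whatever solver options you already use for ex56):</div>
<div><br>
</div>
<div>mpiexec -n 8 ./ex56 [your usual options] -matptap_via scalable</div>
<div><br>
</div>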
<div>Hong</div>
<br>
<div class="gmail_quote">
<div dir="ltr">On Thu, Dec 20, 2018 at 8:16 PM Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> wrote:<br>
</div>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
See MatPtAP_MPIAIJ_MPIAIJ(). It switches to the scalable implementation automatically for "large" problems, where "large" is determined by a heuristic.<br>
<br>
Barry<br>
<br>
<br>
> On Dec 20, 2018, at 6:46 PM, Fande Kong via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>
> <br>
> <br>
> <br>
> On Thu, Dec 20, 2018 at 4:43 PM Zhang, Hong <<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>> wrote:<br>
> Fande:<br>
> Hong,<br>
> Thanks for your improvements to PtAP, which is critical for MG-type algorithms. <br>
> <br>
> On Wed, May 3, 2017 at 10:17 AM Hong <<a href="mailto:hzhang@mcs.anl.gov" target="_blank">hzhang@mcs.anl.gov</a>> wrote:<br>
> Mark,<br>
> Below is the copy of my email sent to you on Feb 27:<br>
> <br>
> I implemented scalable MatPtAP and compared three implementations using ex56.c on the ALCF Cetus machine (this machine has little memory, 1 GB/core):<br>
> - nonscalable PtAP: use an array of length PN to do dense axpy<br>
> - scalable PtAP: do sparse axpy without use of PN array<br>
> <br>
> What does PN mean here?<br>
> Global number of columns of P. <br>
> <br>
> - hypre PtAP.<br>
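> <br>
> To illustrate the difference between the nonscalable and scalable variants (a much simplified sketch, not the actual PETSc source; the names are made up):<br>
> <br>
>   /* nonscalable: dense accumulator of length PN (global number of columns of P) */<br>
>   PetscScalar *dense;                                   /* allocated with PN entries */<br>
>   for (k = 0; k &lt; nz; k++) dense[col[k]] += alpha*val[k];          /* dense axpy */<br>
> <br>
>   /* scalable: sparse axpy, accumulating only into the row's nonzero set */<br>
>   for (k = 0; k &lt; nz; k++) SparseAccumulate(acc, col[k], alpha*val[k]); /* e.g. a sorted list or hash */<br>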
> <br>
> The results are attached. Summary:<br>
> - nonscalable PtAP is 2x faster than scalable, 8x faster than hypre PtAP<br>
> - scalable PtAP is 4x faster than hypre PtAP<br>
> - hypre uses less memory (see job.ne399.n63.np1000.sh)<br>
> <br>
> I was wondering how much more memory PETSc PtAP uses than hypre? I am implementing an AMG algorithm based on PETSc right now, and it is working well. But we have found a bottleneck with PtAP. For the same P and A, PETSc PtAP fails to generate the coarse matrix because it runs out of memory, while hypre can still generate it.<br>
> <br>
> I do not want to just use the HYPRE one because we would have to duplicate matrices if we used HYPRE PtAP.<br>
> <br>
> It would be nice if you guys have already done some comparisons of the memory usage of these implementations.<br>
> Do you encounter memory issue with scalable PtAP?<br>
> <br>
> Do we use the scalable PtAP by default? Do we have to specify some options to use the scalable version of PtAP? If so, it would be nice to make the scalable version the default. I must be missing something here.
<br>
> <br>
> Thanks,<br>
> <br>
> Fande<br>
> <br>
> <br>
> Karl had a student over the summer who improved MatPtAP(). Do you use the latest version of PETSc?<br>
> HYPRE may use less memory than PETSc because it does not save and reuse the matrices.<br>
> <br>
> I do not understand why generating the coarse matrix fails due to running out of memory. Do you use a direct solver on the coarse grid?<br>
> Hong<br>
> <br>
> Based on the above observations, I set the default PtAP algorithm to 'nonscalable'. <br>
> When PN > the local estimated number of nonzeros of C=PtAP, the default switches to 'scalable'.<br>
> The user can override the default.<br>
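> <br>
> In code form, the default selection is roughly (illustrative only, not the exact source):<br>
>   alg = (PN &gt; local_nnz_estimate_of_C) ? "scalable" : "nonscalable";   /* -matptap_via overrides this */<br>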
> <br>
> For the case of np=8000, ne=599 (see job.ne599.n500.np8000.sh), I get<br>
> MatPtAP 3.6224e+01 (nonscalable for small mats, scalable for larger ones)<br>
> scalable MatPtAP 4.6129e+01<br>
> hypre 1.9389e+02 <br>
> <br>
> This work is in petsc-master. Give it a try. If you encounter any problems, let me know.<br>
> <br>
> Hong<br>
> <br>
> On Wed, May 3, 2017 at 10:01 AM, Mark Adams <<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>> wrote:<br>
> (Hong), what is the current state of optimizing RAP for scaling?<br>
> <br>
> Nate is driving 3D elasticity problems at scale with GAMG, and we are working out performance problems. They are hitting problems at ~1.5B dof on a basic Cray (an XC30, I think).<br>
> <br>
> Thanks,<br>
> Mark<br>
> <br>
<br>
</blockquote>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</blockquote>
</div>
</div>
</div>
</blockquote>
</div>
<br clear="all">
<div><br>
</div>
-- <br>
<div dir="ltr" class="gmail-m_-112187119135301740gmail_signature">
<div dir="ltr">
<div>
<div dir="ltr">
<div>
<div dir="ltr">
<div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener</div>
<div><br>
</div>
<div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</blockquote>
</div>
</div>
</div>
</body>
</html>