<div dir="ltr"><div dir="ltr">On Thu, Sep 5, 2024 at 1:40 PM Corbijn van Willenswaard, Lars (UT) via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Dear PETSc,<br>
<br>
For the last few months I've struggled with a solver I wrote for a FEM eigenvalue problem that keeps running out of memory. I've traced the problem to KSPSolve + MUMPS, but I'm getting stuck on digging deeper.<br>
<br>
The reason I suspect KSPSolve/MUMPS is that when I comment out the KSPSolve, the memory stays constant while the rest of the algorithm runs. Of course, the algorithm then converges to a different result. When I change the KSP statement to<br>
for(int i = 0; i < 100000000; i++) KSPSolve(A_, vec1_, vec2_);<br>
the memory grows faster than when running the full algorithm. Logging shows that the program never reaches the terminating i=100M. Measuring the memory growth with ps (I started debugging before I knew of PETSc's own facilities), I see the RSS on a single compute node grow by up to 300 MB/min for this artificial case. Real cases grow more like 60 MB/min per node, which gets the job killed due to memory exhaustion after about 2-3 days.<br>
<br>
Locally (on a Mac) I've been able to reproduce this both with 6 MPI processes and with a single one. Instrumenting the code to print changes in PetscMemoryGetCurrentUsage (full code below) shows that the memory increases at every step at the start, but also keeps increasing at later iterations (small excerpt from the output):<br>
rank step memory (increase since prev step)<br>
0 6544 current 39469056( 8192)<br>
0 7086 current 39477248( 8192)<br>
0 7735 current 39497728( 20480)<br>
0 9029 current 39501824( 4096)<br>
Similar output is visible in a run with 6 ranks, where there does not seem to be a pattern as to which rank increases at which step. (Note that I've also checked PetscMallocGetCurrentUsage, but that value stays constant.)<br>
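<br>
For reference, the same per-step check could also log both counters side by side, roughly like this (a sketch reusing the rank and i variables from the instrumentation loop below; rss and mallocUsage are just illustrative names). As far as I understand, PetscMemoryGetCurrentUsage reports process-level usage (roughly the resident set size), while PetscMallocGetCurrentUsage only counts memory obtained through PetscMalloc:<br>
PetscLogDouble rss, mallocUsage;<br>
PetscMemoryGetCurrentUsage(&rss);           // process-level usage (roughly the RSS)<br>
PetscMallocGetCurrentUsage(&mallocUsage);   // only memory allocated through PetscMalloc<br>
std::cout << rank << " " << i << " rss " << (long) rss<br>
          << " petsc-malloc " << (long) mallocUsage << std::endl;<br>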
<br>
Switching to PETSc's own solver on a single rank does not show a memory increase after the first solve. Changing the solve to overwrite the input vector results in a few increases after the first solve, but these do not seem to repeat; that is, changes like VecCopy(vec2_, vec1_); KSPSolve(A_, vec1_, vec1_); (written out below).<br>
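<br>
Written out, that in-place variant looks roughly like this (a sketch, reusing the vector names from above):<br>
for(int i = 0; i < 100000000; ++i) {<br>
    VecCopy(vec2_, vec1_);       // refresh vec1_ from vec2_ before each solve<br>
    KSPSolve(A_, vec1_, vec1_);  // solve with the same vector as right-hand side and solution<br>
}<br>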
<br>
Does anyone have an idea on how to dig further into this problem?<br></blockquote><div><br></div><div>I think the best way is to construct the simplest code that reproduces your problem. For example, we could save your matrix to a binary file with</div><div><br></div><div>  -ksp_view_mat binary:mat.bin</div><div><br></div><div>and then use a very simple code:</div><div><br></div><div>#include <petsc.h><br><br>int main(int argc, char **argv)<br>{<br>  KSP ksp;<br>  PetscViewer viewer;<br>  Mat A;<br>  Vec b, x;<br><br>  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));<br>  /* the Mat must be created (and its type set) before MatLoad() fills it */<br>  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));<br>  PetscCall(MatSetFromOptions(A));<br>  PetscCall(PetscViewerBinaryOpen(PETSC_COMM_WORLD, "mat.bin", FILE_MODE_READ, &viewer));<br>  PetscCall(MatLoad(A, viewer));<br>  PetscCall(PetscViewerDestroy(&viewer));<br>  PetscCall(MatCreateVecs(A, &x, &b));<br>  PetscCall(VecSet(b, 1.));<br><br>  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));<br>  PetscCall(KSPSetOperators(ksp, A, A));<br>  PetscCall(KSPSetFromOptions(ksp));<br>  for (PetscInt i = 0; i < 100000; ++i) PetscCall(KSPSolve(ksp, b, x));<br>  PetscCall(KSPDestroy(&ksp));<br><br>  PetscCall(MatDestroy(&A));<br>  PetscCall(VecDestroy(&b));<br>  PetscCall(VecDestroy(&x));<br>  PetscCall(PetscFinalize());<br>  return 0;<br>}<br></div><div><br></div><div>and see if you get a memory increase.</div><div><br></div><div>  Thanks,</div><div><br></div><div>     Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Kind regards,<br>
Lars Corbijn<br>
<br>
<br>
Instrumentation:<br>
<br>
PetscLogDouble lastCurrent, current;<br>
int rank;<br>
MPI_Comm_rank(PETSC_COMM_WORLD, &rank);<br>
for(int i = 0; i < 100000000; ++i) {<br>
    PetscMemoryGetCurrentUsage(&lastCurrent);<br>
    KSPSolve(A_, vec1_, vec2_);<br>
    PetscMemoryGetCurrentUsage(&amp;current);<br>
    if(current != lastCurrent) {<br>
        std::cout << std::setw(2) << rank << " " << std::setw(6) << i<br>
                  << " current " << std::setw(8) << (int) current << std::right<br>
                  << "(" << std::setw(6) << (int)(current - lastCurrent) << ")"<br>
                  << std::endl;<br>
    }<br>
    lastCurrent = current;<br>
}<br>
<br>
<br>
Matrix details<br>
The matrix A_ in question is created from a complex-valued matrix C_ (type MATAIJ) using the following code (modulo renames). Theoretically it should be a Laplacian with phase-shift periodic boundary conditions.<br>
MatHermitianTranspose(C_, MAT_INITIAL_MATRIX, &Y_);<br>
MatProductCreate(C_, Y_, NULL, &A_);<br>
MatProductSetType(A_, MATPRODUCT_AB);<br>
MatProductSetFromOptions(A_);<br>
MatProductSymbolic(A_);<br>
MatProductNumeric(A_);<br>
<br>
PETSc arguments: -log_view_memory -log_view :petsc.log -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps -bv_matmult vecs -memory_view<br>
<br>
</blockquote></div><br clear="all"><div><br></div><span class="gmail_signature_prefix">-- </span><br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="https://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>