[petsc-users] Memory usage in SLEPc eigensolver

Barry Smith bsmith at petsc.dev
Tue Jun 11 08:43:38 CDT 2024


   You can run with -log_view -log_view_memory and it will display rich information about which event the memory is allocated in and how much.
There are several columns of information, and the notes displayed explain how to interpret each column. Feel free to post the output and ask questions about
the information displayed, since it is a bit confusing.
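
   As a rough illustration (the launcher, process count, and executable name below are placeholders, not
taken from the original report), the two options are simply appended to whatever run line is already used:

       mpiexec -n 4 ./my_eps_solver <existing solver options> -log_view -log_view_memory

   The memory information then appears as extra columns in the per-event -log_view table printed at the end of the run.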

  Barry


> On Jun 11, 2024, at 9:24 AM, Miroslav Kuchta <miroslav.kuchta at gmail.com> wrote:
> 
> Dear mailing list, 
> 
> I have a question regarding memory usage in SLEPc. Specifically, I am running out of memory when solving a generalized 
> eigenvalue problem Ax = alpha Mx. Here M is singular, so we set the problem type to GNHEP and solve with the Krylov-Schur 
> method and a shift-and-invert spectral transform. The matrix A comes from a Stokes-like problem, so the transform is set to use 
> a block-diagonal preconditioner B where each of the blocks (through fieldsplit) uses hypre. The solver works nicely on smaller 
> problems in 3d (with about 100K dofs). However, upon further refinement the system size grows to millions of dofs and we run 
> out of memory (>150GB). I find this surprising because KSP(A, B) on the same machine works without issues. When running 
> with "-log_trace -info" I see that the memory requests before the job is killed come from the preconditioner setup:
> 
> [0] PCSetUp(): Setting up PC for first time
> [0] MatConvert(): Check superclass seqhypre mpiaij -> 0
> [0] MatConvert(): Check superclass mpihypre mpiaij -> 0
> [0] MatConvert(): Check specialized (1) MatConvert_mpiaij_seqhypre_C (mpiaij) -> 0
> [0] MatConvert(): Check specialized (1) MatConvert_mpiaij_mpihypre_C (mpiaij) -> 0
> [0] MatConvert(): Check specialized (1) MatConvert_mpiaij_hypre_C (mpiaij) -> 1
> 
> Interestingly, when solving just the problem Ax = b with B as the preconditioner, I don't see any calls like the above. We can get access 
> to a larger machine, but I am curious whether our setup/solution strategy can be improved/optimized. Do you have any advice on how to
> reduce the memory footprint? 
> 
> Thanks and best regards, Miro
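
For reference, the configuration described above corresponds roughly to a runtime option set like the one below.
This is only a sketch: the executable name, process count, target value, inner KSP type, and the fieldsplit names
(0/1) are assumptions, since they depend on how the splits and the preconditioning matrix B are defined in the code.

    mpiexec -n 4 ./stokes_eps \
        -eps_type krylovschur -eps_gen_non_hermitian \
        -st_type sinvert -eps_target 0.0 \
        -st_ksp_type gmres \
        -st_pc_type fieldsplit -st_pc_fieldsplit_type additive \
        -st_fieldsplit_0_pc_type hypre \
        -st_fieldsplit_1_pc_type hypre \
        -log_view -log_view_memory

The shift-and-invert transform's inner solver takes the st_ prefix, so the fieldsplit/hypre options attach to it;
the block-diagonal structure of B corresponds to -st_pc_fieldsplit_type additive.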


