[petsc-users] Memory usage in SLEPc eigensolver
Miroslav Kuchta
miroslav.kuchta at gmail.com
Tue Jun 11 08:24:34 CDT 2024
Dear mailing list,

I have a question regarding memory usage in SLEPc. Specifically, I am
running out of memory when solving a generalized eigenvalue problem
Ax = alpha Mx. Here M is singular, so we set the problem type to GNHEP
and solve with the Krylov-Schur method and a shift-and-invert spectral
transform. The matrix A comes from a Stokes-like problem, so the
transform is set to use a block-diagonal preconditioner B in which each
of the blocks (through fieldsplit) uses hypre. The solver works nicely
on smaller problems in 3d (with about 100K dofs). However, upon further
refinement the system size grows to millions of dofs and we run out of
memory (>150 GB). I find this surprising because KSP(A, B) on the same
machine works without issues. When running with "-log_trace -info" I
see that the memory requests before the job is killed come from the
preconditioner setup:
    [0] PCSetUp(): Setting up PC for first time
    [0] MatConvert(): Check superclass seqhypre mpiaij -> 0
    [0] MatConvert(): Check superclass mpihypre mpiaij -> 0
    [0] MatConvert(): Check specialized (1) MatConvert_mpiaij_seqhypre_C (mpiaij) -> 0
    [0] MatConvert(): Check specialized (1) MatConvert_mpiaij_mpihypre_C (mpiaij) -> 0
    [0] MatConvert(): Check specialized (1) MatConvert_mpiaij_hypre_C (mpiaij) -> 1
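For reference, our configuration corresponds roughly to options like the
following (written from memory as a sketch; the field indices 0/1 and the
additive fieldsplit type are assumptions matching the block-diagonal B
described above, not copied from our run script):

    -eps_gen_non_hermitian            # problem type GNHEP
    -eps_type krylovschur
    -st_type sinvert                  # shift-and-invert transform
    -st_pc_type fieldsplit
    -st_pc_fieldsplit_type additive   # block-diagonal preconditioner B
    -st_fieldsplit_0_pc_type hypre
    -st_fieldsplit_1_pc_type hypre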
Interestingly, when solving just the problem Ax = b with B as the
preconditioner, I don't see any calls like the above. We can get access
to a larger machine, but I am curious whether our setup/solution
strategy can be improved or optimized. Do you have any advice on how to
reduce the memory footprint?

Thanks and best regards, Miro