[petsc-users] mumps running out of memory, depending on an overall numerical factor?

David Liu daveliu at mit.edu
Sat Feb 1 13:16:42 CST 2014


I see. Does that s^2 memory scaling mean that sparse direct solvers are not
meant to be used beyond a certain problem size? That is, if the supercomputer
I'm using doesn't have enough memory per core to store even a single row of
the factored matrix, am I out of luck?


On Fri, Jan 31, 2014 at 9:51 PM, Jed Brown <jed at jedbrown.org> wrote:

> David Liu <daveliu at mit.edu> writes:
>
> > Hi, I'm solving a 3d problem with mumps. When I increased the grid size
> > to 70x60x20 with 6 unknowns per point, I started noticing that the
> > program was crashing at runtime at the factoring stage, with the mumps
> > error code:
> >
> > -17 The internal send buffer that was allocated dynamically by MUMPS on
> > the processor is too small.
> > The user should increase the value of ICNTL(14) before calling MUMPS
> > again.
> >
> > However, when I increase the grid spacing in the z direction by about
> > 50%, this crash does not happen.
> >
> > Why would the amount of memory an LU factorization uses depend on an
> > overall numerical factor (in part of the matrix, at least) like this?
>
> I'm not sure exactly what you're asking, but the complexity of direct
> solves depends on the minimal vertex separators in the sparse
> matrix/graph.  Yours will be s=60*20*6 (more if your stencil needs
> second neighbors).  The memory usage scales with s^2 and the
> factorization time scales with s^3.
>
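
For reference, the ICNTL(14) setting mentioned in the error message is MUMPS's
workspace-relaxation percentage, and it can be raised at runtime through PETSc's
options database rather than by editing code. A minimal sketch, assuming a PETSc
build with MUMPS, the option spelling used in PETSc releases of that era, and an
illustrative executable name and process count:

    # raise MUMPS's estimated-workspace relaxation (ICNTL(14)) from its
    # default of roughly 20% to 50%; executable name and -n are placeholders
    mpiexec -n 64 ./my_app -pc_type lu \
        -pc_factor_mat_solver_package mumps \
        -mat_mumps_icntl_14 50

This only pads MUMPS's own workspace estimate; it does not change the underlying
s^2 growth of the factor itself.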
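
To put rough numbers on the s^2 / s^3 scaling above, here is a small Python
sketch. It is only a back-of-the-envelope estimate: it assumes the dominant cost
is a single dense s-by-s block for the top-level separator with real
double-precision entries, and it ignores the rest of the factor and all constant
factors.

    # rough estimate of the dominant dense-separator cost in a sparse LU
    s = 60 * 20 * 6        # smallest grid cross-section times unknowns per point
    bytes_per_entry = 8    # real double precision; use 16 for complex
    mem_gb = s**2 * bytes_per_entry / 1e9
    flops = s**3
    print(f"s = {s}: ~{mem_gb:.2f} GB for the top separator block, "
          f"~{flops:.1e} flops to factor it")

With s = 7200 this gives on the order of 0.4 GB for that one dense block alone
(double that for complex arithmetic) and a few times 10^11 flops, which is why
the factorization can exhaust per-core memory long before the unfactored sparse
matrix itself becomes a problem.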