[petsc-users] mumps running out of memory, depending on an overall numerical factor?

Barry Smith bsmith at mcs.anl.gov
Wed Feb 5 16:56:20 CST 2014

   In conclusion:    We’d like to have a good PETSc interface to a reliable out-of-core sparse solver, but it is not a priority (i.e., we have no money) for the core PETSc developers to implement such a beast. If an out-of-core PETSc user :-) would like to contribute one, we can answer questions and provide guidance (basically, look at how the superlu, superlu_dist, and mumps interfaces work and imitate them).
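[For context, MUMPS itself already has an out-of-core mode that is reachable through the existing PETSc interface via the options database. A hedged sketch of how one might turn it on — the program name and scratch path are hypothetical, and the solver-selection option was later renamed `-pc_factor_mat_solver_type` in newer PETSc releases:

```shell
# Sketch: enable MUMPS's built-in out-of-core factorization from PETSc.
# Assumes PETSc was configured --with-mumps and "./app" is a hypothetical
# application that solves a linear system with KSP.

# Directory for MUMPS's out-of-core files (per the MUMPS user's guide),
# e.g. a fast SSD scratch area (path is made up):
export MUMPS_OOC_TMPDIR=/scratch/ssd

mpiexec -n 4 ./app \
    -ksp_type preonly -pc_type lu \
    -pc_factor_mat_solver_package mumps \
    -mat_mumps_icntl_22 1
# ICNTL(22)=1 asks MUMPS to perform the factorization out-of-core.
```

This is an options/configuration fragment, not a runnable program on its own; consult the MUMPS user's guide for ICNTL(22) details and the PETSc MATSOLVERMUMPS documentation for the current option names.]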



On Feb 5, 2014, at 4:15 PM, Dominic Meiser <dmeiser at txcorp.com> wrote:

> On Wed 05 Feb 2014 02:58:29 PM MST, Jed Brown wrote:
>> Dominic Meiser <dmeiser at txcorp.com> writes:
>>> This approach has worked fairly well for me. I have a workstation with
>>> 32GB of memory and 500GB on two SSDs in a RAID 0 configuration. The
>>> out-of-core files for the matrix I was trying to factor are about 300GB,
>>> and the numerical factorization takes approximately 4 hours. No idea how
>>> this compares to the performance one would get on a workstation that
>>> can fit the factors in RAM. Perhaps not too big of a difference during
>>> the factorization, but a faster solve?
>> Run it on 5 nodes of Edison and see.  I bet efficiency is pretty similar
>> during both factorization and solve.  If your matrix is big enough to
>> fill memory, all the dense operations should scale well.  (Dense LA
>> parallelism is hard for small problem sizes.)
> I don't doubt that. In fact I did run this same problem on clusters as well. It's just that some users (or customers) don't have access to clusters or don't want to deal with them. Sometimes this is for non-technical reasons. In such cases it's nice to have the option of doing out-of-core.
> --
> Dominic Meiser
> Tech-X Corporation
> 5621 Arapahoe Avenue
> Boulder, CO 80303
> Telephone: 303-996-2036
> Fax: 303-448-7756
> www.txcorp.com