[petsc-users] mumps running out of memory, depending on an overall numerical factor?

Dominic Meiser dmeiser at txcorp.com
Wed Feb 5 16:15:57 CST 2014


On Wed 05 Feb 2014 02:58:29 PM MST, Jed Brown wrote:
> Dominic Meiser <dmeiser at txcorp.com> writes:
>> This approach has worked fairly well for me. I have a workstation with
>> 32 GB of memory and 500 GB on two SSDs in a RAID 0 configuration. The
>> out-of-core files for the matrix I was trying to factor are about 300 GB,
>> and the numerical factorization takes approximately 4 hours. I have no
>> idea how this compares to the performance one would get on a workstation
>> that can fit the factors in RAM. Perhaps not too big a difference during
>> the factorization, but a faster solve?
>
> Run it on 5 nodes of Edison and see.  I bet efficiency is pretty similar
> during both factorization and solve.  If your matrix is big enough to
> fill memory, all the dense operations should scale well.  (Dense LA
> parallelism is hard for small problem sizes.)

I don't doubt that. In fact, I did run this same problem on clusters as
well. It's just that some users (or customers) don't have access to
clusters or don't want to deal with them, sometimes for non-technical
reasons. In such cases it's nice to have the option of doing the
factorization out of core.
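
For anyone who wants to try this, here is a minimal sketch of how the
out-of-core factorization can be requested through PETSc's MUMPS
interface. It assumes a reasonably recent PETSc built with MUMPS (older
releases spell PCFactorSetMatSolverType as PCFactorSetMatSolverPackage),
and the 1D Laplacian is only a stand-in for a real operator. The same
thing can be done purely at run time with
-pc_factor_mat_solver_type mumps -mat_mumps_icntl_22 1, and MUMPS reads
the OOC_TMPDIR environment variable to decide where the out-of-core
files go.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A, F;
  Vec            b, x;
  KSP            ksp;
  PC             pc;
  PetscInt       i, n = 100, Istart, Iend;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  /* Assemble a 1D Laplacian as a stand-in for the real operator */
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = MatSetUp(A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &Istart, &Iend);CHKERRQ(ierr);
  for (i = Istart; i < Iend; i++) {
    if (i > 0)     {ierr = MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);CHKERRQ(ierr);}
    if (i < n - 1) {ierr = MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);CHKERRQ(ierr);}
    ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  /* Direct solve through KSP: LU with MUMPS as the factorization package */
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverType(pc, MATSOLVERMUMPS);CHKERRQ(ierr);
  ierr = PCFactorSetUpMatSolverType(pc);CHKERRQ(ierr);   /* create the MUMPS factor matrix */
  ierr = PCFactorGetMatrix(pc, &F);CHKERRQ(ierr);
  ierr = MatMumpsSetIcntl(F, 22, 1);CHKERRQ(ierr);        /* ICNTL(22)=1: out-of-core factorization */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Running with -ksp_view should show the MUMPS run parameters, including
the value of ICNTL(22), so you can confirm out-of-core is actually on.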

--
Dominic Meiser
Tech-X Corporation
5621 Arapahoe Avenue
Boulder, CO 80303
USA
Telephone: 303-996-2036
Fax: 303-448-7756
www.txcorp.com
