[petsc-users] mumps running out of memory, depending on an overall numerical factor?

Jed Brown jed at jedbrown.org
Wed Feb 5 15:58:29 CST 2014


Dominic Meiser <dmeiser at txcorp.com> writes:
> This approach has worked fairly well for me. I have a workstation with
> 32 GB of memory and 500 GB on two SSDs in a RAID 0 configuration. The
> out-of-core files for the matrix I was trying to factor are about 300 GB,
> and the numerical factorization takes approximately 4 hours. I have no
> idea how this compares to the performance one would get on a workstation
> that can fit the factors in RAM. Perhaps not too big a difference during
> the factorization, but a faster solve?

Run it on 5 nodes of Edison and see (at 64 GB per node, 5 nodes can hold
the ~300 GB of factors in memory).  I bet efficiency is pretty similar
during both factorization and solve.  If your matrix is big enough to
fill memory, all the dense operations should scale well.  (Dense LA
parallelism is hard for small problem sizes.)
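For reference, the out-of-core mode described above is driven by MUMPS's
ICNTL settings, which PETSc exposes as runtime options. A minimal sketch,
assuming the option names from around the time of this thread (the solver
selection option was later renamed to -pc_factor_mat_solver_type); the
executable name, process count, scratch path, and memory cap below are
placeholders, not values from this thread:

  # point MUMPS's out-of-core files at the fast RAID 0 array
  # (directory taken from the MUMPS_OOC_TMPDIR environment variable,
  # per the MUMPS users' guide)
  export MUMPS_OOC_TMPDIR=/scratch/raid0
  # direct solve with an LU factorization done by MUMPS
  mpiexec -n 4 ./ex2 \
    -ksp_type preonly -pc_type lu \
    -pc_factor_mat_solver_package mumps \
    -mat_mumps_icntl_22 1 \
    -mat_mumps_icntl_23 28000
  # ICNTL(22)=1 enables out-of-core factorization; ICNTL(23) caps the
  # per-process working memory in MB (here ~28 GB of a 32 GB machine,
  # leaving headroom for the OS).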


More information about the petsc-users mailing list