[petsc-users] mumps running out of memory, depending on an overall numerical factor?

Dominic Meiser dmeiser at txcorp.com
Wed Feb 5 15:09:21 CST 2014


On Wed 05 Feb 2014 01:36:29 PM MST, Barry Smith wrote:
>
> On Feb 5, 2014, at 2:05 PM, Jed Brown <jed at jedbrown.org> wrote:
>
>> Klaus Zimmermann <klaus.zimmermann at physik.uni-freiburg.de> writes:
>>> Isn’t that a bit pessimistic? After all, there is the out-of-core
>>> facility with MUMPS.
>>
>> I'll just note that out-of-core as an algorithmic device is dead on most
>> HPC machines.
>
>     But what about a non-HPC machine? Not everyone has huge machines, but how about a well-endowed, server-quality workstation with the best disks available? Put in as much physical memory as possible and then use the disks for out-of-core.
>

This approach has worked fairly well for me. I have a workstation with
32GB of memory and 500GB on two SSDs in a RAID 0 configuration. The
out-of-core files for the matrix I was trying to factor are about 300GB,
and the numerical factorization takes approximately 4 hours. I have no
idea how this compares to the performance one would get on a workstation
that can fit the factors in RAM; perhaps not too big a difference during
the factorization, but a faster solve?
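
For reference, this is roughly the set of run-time options I use to turn
on the out-of-core facility (a sketch from memory, so the exact option
names may differ between PETSc versions; the meaning of ICNTL(22) is
documented in the MUMPS manual):

  -pc_type lu                          (or cholesky for an SPD matrix)
  -pc_factor_mat_solver_package mumps  (select MUMPS as the direct solver)
  -mat_mumps_icntl_22 1                (ICNTL(22)=1 turns on out-of-core
                                        factorization)

If I remember correctly, MUMPS places its out-of-core files in the
directory given by the MUMPS_OOC_TMPDIR environment variable, so point
that at the fast SSDs. To see how much of the run is spent in the
factorization versus the solves, -log_summary gives a breakdown.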

Cheers,
Dominic

>     Barry
>
>
>
>> There are a few machines with fast local SSD, but the
>> majority of HPC machines need about an hour to write the contents of
>> memory to disk.  You can get more memory by running on more cores up to
>> the entire machine.  If that's not enough, current computing awards
>> (e.g., INCITE) are not large enough to store to disk at full-machine
>> scale more than a few times per year.

--
Dominic Meiser
Tech-X Corporation
5621 Arapahoe Avenue
Boulder, CO 80303
USA
Telephone: 303-996-2036
Fax: 303-448-7756
www.txcorp.com

