<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">Garth,<br>
<br>
At the time I tested PaStiX, it failed for my problem:<br>
<a class="moz-txt-link-freetext" href="https://lists.mcs.anl.gov/mailman/htdig/petsc-dev/2011-December/006887.html">https://lists.mcs.anl.gov/mailman/htdig/petsc-dev/2011-December/006887.html</a><br>
<br>
Since then PaStiX has received several critical bug
fixes, so I should consider testing the new version. <br>
<br>
The memory scalability of MUMPS is not great, that is true. <br>
Running MUMPS with default parameters on a large number of cores is
often far from optimal. I don't know how much time you spent tweaking its parameters.
<br>
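<br>
In case it helps, a sketch of the knobs I mean (option names as in
PETSc 3.3; the value 50 is only an illustrative guess, not a tuned
recommendation):<br>
<pre>
# Select MUMPS as the LU backend via PETSc's runtime options and relax
# its workspace estimate (ICNTL(14)), whose default of 20% extra is
# often too tight for 3D problems.
mpiexec -n 64 ./app -ksp_type preonly -pc_type lu \
    -pc_factor_mat_solver_package mumps \
    -mat_mumps_icntl_14 50
</pre>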
MUMPS is among the most robust distributed solvers available today; it
is still under active development and will hopefully improve. <br>
<br>
<i><b>To petsc developers:</b> </i>are there plans to update
the PaStiX version supplied with PETSc? The current upstream release is 5.2 from
2012-06-08, while PETSc-3.3-p3 ships 5.1.8 from 2011-02-23.<br>
<br>
Here is the changelog:<br>
<a class="moz-txt-link-freetext" href="https://gforge.inria.fr/frs/shownotes.php?group_id=186&release_id=7096">https://gforge.inria.fr/frs/shownotes.php?group_id=186&release_id=7096</a><br>
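For the record, I use the copy that PETSc's configure downloads and
builds itself; the flags below are from memory and may differ between
PETSc versions:<br>
<pre>
# Let PETSc download and build its bundled PaStiX, plus the Scotch
# ordering library it depends on (flag names as in PETSc 3.3).
./configure --download-pastix --download-ptscotch \
    --with-mpi-dir=$MPI_DIR
</pre>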
<br>
On 09.11.2012 19:40, Garth N. Wells wrote:<br>
</div>
<blockquote
cite="mid:CAA4C66PGWXNHKNrL_NrvbqrXBXCnGTgt1pjH5iR+VsOcFvxY2A@mail.gmail.com"
type="cite">
<pre wrap="">I've only just joined the petsc-dev list, but I'm hoping with this
subject line my email will join the right thread . . . . (related to
MUMPS).

I've been experimenting over the past year with MUMPS and PaStiX for
parallel LU, and found MUMPS pretty much useless because it uses so
much memory. PaStiX was vastly superior performance-wise and it
supports hybrid threads-MPI, which I think is essential for parallel
LU solvers to make good use of typical multi-socket multi-core compute
nodes. The interface, build and documentation are a bit clunky (I put
the last point down to developer language issues), but the performance
is good and the developers are responsive. I benchmarked PaStiX for P1
and P2 3D linear elastic finite element problems against a leading
commercial offering, and PaStiX was marginally faster for P1 and
marginally slower for P2 (PaStiX performance does depend heavily on
BLAS). I couldn't even compute the test problems with MUMPS because it
would blow out the memory. For reference, I tested systems up to 27M
dofs with PaStiX.

Based on my experience and tests, I'd be happy to see PETSc drop MUMPS
and focus/enhance/fix support for PaStiX.
Garth
</pre>
</blockquote>
<br>
<br>
<pre class="moz-signature" cols="72">--
Regards,
Alexander</pre>
</body>
</html>