<html>
<head>
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">So, I have tested both PaStiX and MUMPS
solvers. Tests were run on 4 inifinibanded nodes, each equipped
with two 12 core AMD Opteron and 64 GB RAM. Intel Compiler 11.1 +
MKL + OpenMPI was the tool-chain. <br>
<br>
The problem is a 3D Helmholtz equation with 1.4 million unknowns. The
      matrix is symmetric, so I used an LDL^T factorization for both solvers. <br>
      First of all, both PaStiX and MUMPS gave correct solutions with a
      relative residual &lt; 1e-12, although the test case was not
      numerically difficult. <br>
<br>
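Roughly, options along these lines select the LDL^T factorization
      through PETSc (just a sketch, assuming a KSP-based driver and the
      PETSc 3.3-era -pc_factor_mat_solver_package option; not necessarily my
      exact command line): <br>
<pre>
# LDL^T / Cholesky factorization through MUMPS
-ksp_type preonly -pc_type cholesky -pc_factor_mat_solver_package mumps

# the same factorization through PaStiX
-ksp_type preonly -pc_type cholesky -pc_factor_mat_solver_package pastix
</pre>
      <br>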
Below are tables showing the time for analysis + factorization
      (seconds) and the overall memory usage (megabytes). <br>
<br>
PASTIX: <br>
<pre>
N_cpus    T_fac (s)    memory (MB)
     1         9270          27900
     4         5280          33200
    16         1440          77700
    32          755         131377
    64          471         225399
</pre>
      <br>
      MUMPS:<br>
<pre>
N_cpus    T_fac (s)    memory (MB)
     1         8009          49689
     4         2821          63501
    16         1375          84115
    32         1081          86583
    64          733          98235
</pre>
<br>
According to this test, PaStiX is somewhat faster when run on more
      cores, but it also consumes much more memory. This is the opposite of
      what Garth reported. Either I did something wrong or our matrices are
      very different. <br>
<br>
PS Can anyone explain why direct solvers require more memory when
run in parallel?<br>
<br>
On 10.11.2012 14:14, Alexander Grayver wrote:<br>
</div>
<blockquote cite="mid:509E5344.2000307@gfz-potsdam.de" type="cite">
<meta content="text/html; charset=UTF-8" http-equiv="Content-Type">
<div class="moz-cite-prefix">Garth,<br>
<br>
At the time I tested PaStiX, it failed for my problem:<br>
<a moz-do-not-send="true" class="moz-txt-link-freetext"
href="https://lists.mcs.anl.gov/mailman/htdig/petsc-dev/2011-December/006887.html">https://lists.mcs.anl.gov/mailman/htdig/petsc-dev/2011-December/006887.html</a><br>
<br>
Since then PaStiX has been updated with several critical bug
      fixes, so I should consider testing the new version. <br>
<br>
The memory scalability of MUMPS is not great, that is true. <br>
      Running MUMPS with default parameters on a large number of cores
      is often not optimal. I don't know how much time you spent tweaking
      the parameters (a few examples are sketched below). <br>
      Still, MUMPS is among the most robust distributed solvers available,
      it is still being actively developed, and hopefully it will improve. <br>
<br>
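For example, a few MUMPS controls that are often worth tuning, written
      here as PETSc runtime options (the -mat_mumps_icntl_* names follow
      PETSc's MUMPS interface; the values are only illustrative, not
      recommendations): <br>
<pre>
# ICNTL(14): percentage increase of the estimated working space
-mat_mumps_icntl_14 30

# ICNTL(23): maximum working memory per process in MB (0 = decided automatically)
-mat_mumps_icntl_23 0

# ICNTL(7): choice of sequential ordering (e.g. 5 = METIS)
-mat_mumps_icntl_7 5
</pre>
      <br>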
<i><b>To petsc developers:</b> </i>are there plans to update the
      PaStiX version supplied with PETSc? The current PaStiX release is 5.2
      from 2012-06-08, while PETSc-3.3-p3 uses 5.1.8 from 2011-02-23.<br>
      <br>
      Here is the changelog:<br>
<a moz-do-not-send="true" class="moz-txt-link-freetext"
href="https://gforge.inria.fr/frs/shownotes.php?group_id=186&release_id=7096">https://gforge.inria.fr/frs/shownotes.php?group_id=186&release_id=7096</a><br>
<br>
</div>
</blockquote>
<pre class="moz-signature" cols="72">--
Regards,
Alexander</pre>
</body>
</html>