[petsc-dev] Fwd: [mumps-dev] support for distributed right-hand vectors?

Hong Zhang hzhang at mcs.anl.gov
Mon Nov 12 14:19:33 CST 2012


Alexander:

Interesting results!
Do you use the same matrix ordering?
The ordering might affect memory and execution time.
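For example, with the MUMPS interface in PETSc the sequential ordering can be chosen at run time through ICNTL(7); the value meanings below are from the MUMPS user guide, and the executable name is of course only illustrative:

```shell
# ICNTL(7) selects the sequential ordering in MUMPS:
# 0 = AMD, 2 = AMF, 3 = SCOTCH, 4 = PORD, 5 = METIS, 6 = QAMD, 7 = automatic.
mpiexec -n 16 ./my_solver -pc_factor_mat_solver_package mumps -mat_mumps_icntl_7 5
```

Comparing runs that differ only in this option would show how much of the memory difference is due to the ordering.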

The algorithms used by direct solvers are in general non-scalable, both in terms of
flops and memory. However, I'm surprised by the rate of memory growth
for both solvers. I would suggest sending your report to the PaStiX
and MUMPS developers, who can give better explanations as well as
provide improved implementations.

Hong

> So, I have tested both the PaStiX and MUMPS solvers. Tests were run on 4
> InfiniBand-connected nodes, each equipped with two 12-core AMD Opterons and
> 64 GB of RAM. Intel Compiler 11.1 + MKL + OpenMPI was the toolchain.
>
> The problem is a 3D Helmholtz equation with 1.4 million unknowns. The matrix is
> symmetric, so I used LDL^T factorization with both solvers.
> First of all, both PaStiX and MUMPS gave the correct solution with a relative
> residual < 1e-12, although the test case was not numerically difficult.
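For reference, the symmetric LDL^T/Cholesky path with an external package is selected in PETSc (3.3-era option names) with runtime options like these; the executable name is illustrative:

```shell
# Symmetric system: Cholesky/LDL^T factorization through an external direct solver.
mpiexec -n 16 ./my_solver -ksp_type preonly -pc_type cholesky \
    -pc_factor_mat_solver_package mumps   # or: pastix
```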
>
> Below are tables showing the time for analysis + factorization (in seconds)
> and the overall memory usage (in megabytes).
>
> PaStiX:
> N_cpus    T_fac (s)    Memory (MB)
>      1     9.27e+03          27900
>      4     5.28e+03          33200
>     16     1.44e+03          77700
>     32          755         131377
>     64          471         225399
>
> MUMPS:
> N_cpus    T_fac (s)    Memory (MB)
>      1         8009          49689
>      4         2821          63501
>     16         1375          84115
>     32         1081          86583
>     64          733          98235
>
> According to this test, PaStiX is somewhat faster when run on more cores,
> but it also consumes much more memory, which is the opposite of what Garth
> reported. Either I did something wrong or our matrices are very different.
>
> PS Can anyone explain why direct solvers require more memory when run in
> parallel?
>
>
> On 10.11.2012 14:14, Alexander Grayver wrote:
>
> Garth,
>
> When I tested PaStiX at the time, it failed for my problem:
> https://lists.mcs.anl.gov/mailman/htdig/petsc-dev/2011-December/006887.html
>
> Since then PaStiX has been updated with several critical bug fixes, so I
> should consider testing the new version.
>
> The memory scalability of MUMPS is not great, that is true.
> Running MUMPS with default parameters on a large number of cores is often
> suboptimal; I don't know how much time you spent tweaking the parameters.
> MUMPS is among the most robust distributed solvers available, it is still
> being actively developed, and it will hopefully improve.
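A few MUMPS control parameters that are commonly tuned from the PETSc command line are sketched below; the ICNTL meanings are from the MUMPS user guide, option names follow PETSc's -mat_mumps_icntl_<k> convention, and the executable name and values are only examples:

```shell
# Illustrative run line; the specific values are starting points, not recommendations.
mpiexec -n 32 ./my_solver \
    -pc_type cholesky -pc_factor_mat_solver_package mumps \
    -mat_mumps_icntl_14 30 \
    -mat_mumps_icntl_23 4000 \
    -mat_mumps_icntl_28 2 \
    -mat_mumps_icntl_29 2
# ICNTL(14): percentage increase of the estimated working space (memory relaxation).
# ICNTL(23): maximum working memory per MPI process, in MB (0 = no explicit limit).
# ICNTL(28): 2 = perform the analysis (ordering) phase in parallel.
# ICNTL(29): ordering tool for parallel analysis (1 = PT-SCOTCH, 2 = ParMETIS).
```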
>
> To the PETSc developers: are there plans to update the PaStiX version supplied with PETSc?
> The current version is 5.2 from 2012-06-08 and PETSc-3.3-p3 uses 5.1.8 from
> 2011-02-23.
>
> Here is changelog:
> https://gforge.inria.fr/frs/shownotes.php?group_id=186&release_id=7096
>
> --
> Regards,
> Alexander

