[petsc-dev] [mumps-dev] support for distributed right-hand vectors?
Barry Smith
bsmith at mcs.anl.gov
Fri Nov 9 16:20:41 CST 2012
Garth,
Thanks for the info. It is unlikely we will drop MUMPS, but we should probably make our interface to PaStiX more robust. In particular, the hybrid PaStiX fits naturally with the hybrid threads-MPI PETSc work Shri and Jed are doing. So any patches you wish to submit for the PETSc-PaStiX interface, or ideas on how to improve/extend it, are appreciated. The honest truth is that currently it is pretty much an unattended baby and a little love would help it.
Barry
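A minimal sketch of the interface being discussed, assuming petsc-3.3-era names (PCFactorSetMatSolverPackage and the four-argument KSPSetOperators were later renamed/changed) and a PETSc build configured with --download-pastix; the 1x1 system is only a placeholder so the example is self-contained:

    /* Sketch: selecting PaStiX as the parallel LU backend through
     * PETSc's generic factorization interface (petsc-3.3-era calls). */
    #include <petscksp.h>

    int main(int argc, char **argv)
    {
      Mat A;
      Vec b, x;
      KSP ksp;
      PC  pc;

      PetscInitialize(&argc, &argv, NULL, NULL);

      /* Placeholder 1x1 system; a real application assembles A and b here. */
      MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, 1, 1,
                   1, NULL, 0, NULL, &A);
      MatSetValue(A, 0, 0, 2.0, INSERT_VALUES);
      MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);
      MatGetVecs(A, &x, &b);
      VecSet(b, 1.0);

      KSPCreate(PETSC_COMM_WORLD, &ksp);
      KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);
      KSPSetType(ksp, KSPPREONLY);             /* pure direct solve, no Krylov iterations */
      KSPGetPC(ksp, &pc);
      PCSetType(pc, PCLU);
      PCFactorSetMatSolverPackage(pc, MATSOLVERPASTIX);  /* MATSOLVERMUMPS to compare */
      KSPSetFromOptions(ksp);                  /* allow runtime overrides of the above */
      KSPSolve(ksp, b, x);

      KSPDestroy(&ksp); MatDestroy(&A); VecDestroy(&x); VecDestroy(&b);
      PetscFinalize();
      return 0;
    }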
On Nov 9, 2012, at 12:40 PM, Garth N. Wells <gnw20 at cam.ac.uk> wrote:
> I've only just joined the petsc-dev list, but I'm hoping with this
> subject line my email will join the right thread . . . . (related to
> MUMPS).
>
> I've been experimenting over the past year with MUMPS and PaStiX for
> parallel LU, and found MUMPS pretty much useless because it uses so
> much memory. PaStiX was vastly superior performance-wise and it
> supports hybrid threads-MPI, which I think is essential for parallel
> LU solvers to make good use of typical multi-socket multi-core compute
> nodes. The interface, build and documentation are a bit clunky (I put
> the last point down to developer language issues), but the performance
> is good and the developers are responsive. I benchmarked PaStiX for P1
> and P2 3D linear elastic finite element problems against a leading
> commercial offering, and PaStiX was marginally faster for P1 and
> marginally slower for P2 (PaStiX performance does depend heavily on
> BLAS). I couldn't even compute the test problems with MUMPS because it
> would blow out the memory. For reference, I tested systems up to 27M
> dofs with PaStiX.
>
> Based on my experience and tests, I'd be happy to see PETSc drop MUMPS
> and focus/enhance/fix support for PaStiX.
>
> Garth
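For the kind of head-to-head comparison described above, the same PETSc executable can usually be pointed at either package at runtime; a sketch for a hypothetical ./app binary, assuming the petsc-3.3-era option names, both packages enabled at configure time, and -mat_mumps_icntl_14 (MUMPS's percentage increase of its estimated working space) as one common knob to try when MUMPS runs out of memory:

    mpiexec -n 8 ./app -ksp_type preonly -pc_type lu \
        -pc_factor_mat_solver_package pastix

    mpiexec -n 8 ./app -ksp_type preonly -pc_type lu \
        -pc_factor_mat_solver_package mumps -mat_mumps_icntl_14 50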