[petsc-users] Using direct solvers in parallel

Dave May dave.mayhem23 at gmail.com
Tue May 15 02:36:18 CDT 2012


I have seen similar behaviour comparing umfpack and superlu_dist,
however the difference wasn't enormous; umfpack was perhaps a factor
of 1.2-1.4 times faster on 1-4 cores.
What sort of time differences are you observing? Can you post the
numbers somewhere?
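If it helps, PETSc's built-in logging gives a breakdown that is easy to
post. A minimal sketch of such a run, assuming a hypothetical executable
./my_app and a 2012-era PETSc (where the logging option is still called
-log_summary; newer releases rename it to -log_view):

  # hypothetical application binary; solver package chosen at runtime
  mpiexec -n 4 ./my_app -ksp_type preonly -pc_type lu \
      -pc_factor_mat_solver_package superlu_dist -log_summary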

Note, however, that umfpack will not work on a distributed-memory machine.
My personal preference is to use superlu_dist in parallel. In my
experience using it as a coarse grid solver for multigrid, I find it
much more reliable than mumps. However, when mumps works, it is
typically slightly faster than superlu_dist. Again, not by a large
amount - never more than a factor of 2 faster.
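For completeness, the factorization package is usually selected through
runtime options, roughly like this (a sketch only; these names match
2012-era PETSc, where the option is -pc_factor_mat_solver_package, while
newer releases call it -pc_factor_mat_solver_type):

  # parallel direct solve with superlu_dist (or mumps; umfpack is sequential only)
  -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist

  # superlu_dist as the coarse grid solver inside multigrid (PCMG)
  -pc_type mg -mg_coarse_ksp_type preonly -mg_coarse_pc_type lu \
      -mg_coarse_pc_factor_mat_solver_package superlu_dist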

The failure rate using mumps is definitely higher (in my experience)
when running on large numbers of cores compared to superlu_dist. I've
never gotten to the bottom of why it fails.

Cheers,
  Dave


On 15 May 2012 09:25, Thomas Witkowski <thomas.witkowski at tu-dresden.de> wrote:
> I made some comparisons of using umfpack, superlu, superlu_dist and mumps to
> solve systems with sparse matrices arising from the finite element method. The
> size of the matrices ranges from around 50000 to more than 3 million
> unknowns. I used 1, 2, 4, 8 and 16 nodes for the benchmark. Now I am
> surprised that in all cases the sequential umfpack was the fastest one; even
> with 16 cores, superlu_dist and mumps are slower. Can any of you confirm
> this observation? Are there any other parallel direct solvers around that
> are more efficient?
>
> Thomas

