[petsc-dev] PETSc LU vs SuperLU

Hong Zhang hzhang at mcs.anl.gov
Tue Dec 20 09:47:38 CST 2011


Dave:
I have observed the same. PETSc's LU uses a simple algorithm and
implementation, and our recent releases pay particular attention to
efficient data access in the solve phase, so it sometimes outperforms
other packages.
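
For reference, you can try all of these packages from the command line
without changing your code. A sketch using the runtime options database
(option names follow the current 3.x releases; later releases spell the
last option -pc_factor_mat_solver_type, so check -help for your version):

  -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package petsc
  -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu
  -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps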

SuperLU has built-in schemes, such as row/column permutation and
equilibration, which might compromise performance. I only occasionally
see SuperLU/SuperLU_DIST perform better, on very large and specialized
applications.
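
If you want to experiment with switching those schemes off, the PETSc
interface to SuperLU exposes them as runtime options. A sketch, assuming
the option names from our SuperLU interface (run with -help to see the
exact list and defaults in your build):

  -mat_superlu_equil <true|false>   # row/column equilibration
  -mat_superlu_rowperm NOROWPERM    # row permutation (NOROWPERM or LargeDiag)
  -mat_superlu_colperm NATURAL      # or MMD_ATA, MMD_AT_PLUS_A, COLAMD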

For sequential LU, MUMPS usually gives better performance, but it
likely consumes more memory than the others. Did you encounter a memory
problem with MUMPS? Pay attention to the MUMPS error message when it
segfaults.
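
For example, a common MUMPS failure is INFOG(1) = -9, meaning its
estimated working space was too small. You can enlarge it and turn up
the diagnostics through the options database; a sketch (ICNTL meanings
are from the MUMPS manual):

  -mat_mumps_icntl_4 2    # MUMPS print level: errors, warnings, statistics
  -mat_mumps_icntl_14 50  # add 50% to the estimated workspace (default is 20)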

Hong

> I have been comparing sequential SuperLU on one of my linear solves versus
> PETSc LU.  I am finding SuperLU to be a little over 2x slower than PETSc LU.
> I was wondering if this is due to SuperLU not being tuned to my problem or if
> the PETSc LU algorithm performance is expected to be superior to that of
> SuperLU in general.  I did play around with the reordering options for
> SuperLU but did not find anything superior to the defaults.  I was also
> wondering if building PETSc and its external packages with another compiler
> such as PGI or Intel might result in higher performance in this regard.  Or
> whether using a vendor BLAS like MKL would speed up SuperLU.  Or perhaps the
> interface of SuperLU to PETSc results in some extra data copying that is the
> difference.
>
> Does anyone have any idea why SuperLU might be that much slower than PETSc
> LU?
>
> I also tried SPOOLES, and that was just a little slower than PETSc LU.  And I
> tried MUMPS, and that segfaulted after my problem had been running for over an
> hour.  This particular problem was running for less than 3 minutes with PETSc
> LU.
>
> I would be interested in any suggestions of things to try to speed up my LU
> solve with either PETSc or any of the external packages.  Right now, I'm just
> doing serial, single node calculations.
>
> Thanks,
>
> Dave
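
On the vendor-BLAS question in the quoted message: SuperLU's numeric
factorization spends much of its time in BLAS kernels, so linking an
optimized BLAS such as MKL can help. A minimal configure sketch (the
exact option spelling varies across PETSc versions, and $MKLROOT is
assumed to be set by MKL's environment scripts):

  ./configure --with-blas-lapack-dir=$MKLROOT --download-superlu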


