[petsc-users] LU Performance

Smith, Barry F. bsmith at mcs.anl.gov
Fri Jul 5 03:02:23 CDT 2019


   When you use Umfpack standalone, do you use OpenMP threads? Do you use a thread-enabled BLAS/LAPACK, perhaps OpenBLAS or MKL?
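
   (One quick check: pin both the standalone and PETSc runs to a single thread, for example with OMP_NUM_THREADS=1, plus OPENBLAS_NUM_THREADS=1 or MKL_NUM_THREADS=1 depending on the BLAS, and see whether the 5.5 vs 140 second gap persists.)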

   You can run both cases with -ksp_view and it will print more details indicating the solver used.
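
   For example, something like

       ./test -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type umfpack -ksp_view

   (the executable name here is just a placeholder for however you run the attached example) will report which factorization package was actually used.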

    Do you use the same compiler and the same options when compiling PETSc and Umfpack standalone? Is the standalone Umfpack time in the numerical factorization much smaller? Perhaps standalone Umfpack is using a much better ordering than when used with PETSc (perhaps the default orderings are different).
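
   For example, assuming the standard factorization ordering option applies to the Umfpack interface, you could compare the default against

       -pc_factor_mat_ordering_type external

   which tells PETSc to defer to the package's own fill-reducing ordering instead of computing one itself.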

   Does Umfpack have a routine that triggers output of the parameters, etc., that it is using? If you can trigger it, you might see differences between the standalone and PETSc-driven runs.
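
   For instance, the standalone code could print them with something along these lines (a sketch, assuming the double/int "di" interface of Umfpack):

       #include <umfpack.h>

       double Control[UMFPACK_CONTROL], Info[UMFPACK_INFO];
       umfpack_di_defaults(Control);
       Control[UMFPACK_PRL] = 4;              /* raise the print level */
       umfpack_di_report_control(Control);    /* print all control parameters */
       /* ... symbolic/numeric factorization and solve, passing Control and Info ... */
       umfpack_di_report_info(Control, Info); /* print ordering used, flops, timings */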

   Barry


> On Jul 4, 2019, at 4:05 PM, Jared Crean via petsc-users <petsc-users at mcs.anl.gov> wrote:
> 
> Hello,
> 
>     I am getting very bad performance from the Umfpack LU solver when I use it via PETSc compared to calling Umfpack directly. It takes about 5.5 seconds to factor and solve the matrix with Umfpack, but 140 seconds when I use PETSc with -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type umfpack.
> 
>     I have attached a minimal example (test.c) that reads a matrix from a file, solves with Umfpack, and then solves with PETSc.  The matrix data files are not included because they are about 250 megabytes.  I also attached the output of the program with -log_view for -pc_factor_mat_solver_type umfpack (fout_umfpacklu) and -pc_factor_mat_solver_type petsc (fout_petsclu).  Both results show nearly all of the time is spent in MatLUFactorNum.  The times are very similar, so I am wondering if PETSc is really calling Umfpack or if the PETSc LU solver is getting called in both cases.
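> 
>     (For reference, the direct Umfpack solve is presumably the standard call sequence; a minimal sketch, assuming the double/int "di" interface with the matrix in compressed sparse column arrays Ap/Ai/Ax:
> 
>         #include <umfpack.h>
> 
>         void *Symbolic, *Numeric;
>         double Control[UMFPACK_CONTROL], Info[UMFPACK_INFO];
>         umfpack_di_defaults(Control);
>         umfpack_di_symbolic(n, n, Ap, Ai, Ax, &Symbolic, Control, Info);       /* analysis + ordering */
>         umfpack_di_numeric(Ap, Ai, Ax, Symbolic, &Numeric, Control, Info);     /* LU factorization */
>         umfpack_di_solve(UMFPACK_A, Ap, Ai, Ax, x, b, Numeric, Control, Info); /* solve Ax = b */
>         umfpack_di_free_symbolic(&Symbolic);
>         umfpack_di_free_numeric(&Numeric);
> 
>     Here n, Ap, Ai, Ax, x, and b are assumed to be set up from the matrix file.)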
> 
> 
>     Jared Crean
> 
> <test_files.tar.gz>


