[petsc-users] Rock & Hard Place with SuperLU

Hong Zhang hzhang at mcs.anl.gov
Mon Jan 28 23:09:30 CST 2013


Brian:
Add the option '-mat_superlu_dist_equil false'. Do you still get the same behavior?

The matrix 'rows=6957, cols=6957' is very small. Run it sequentially
using superlu with the option '-mat_superlu_conditionnumber' and let us know
the estimated 'Recip. condition number'.
It seems your matrix is very ill-conditioned.
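
For reference, here is a minimal sketch of selecting sequential SuperLU in code
(assuming the petsc-3.3 era calling sequences, e.g. the four-argument
KSPSetOperators() and PCFactorSetMatSolverPackage(); A, b, x stand for an already
assembled SeqAIJ matrix and matching vectors). Run the program on one process
with -mat_superlu_conditionnumber on the command line:

  #include <petscksp.h>

  /* Solve A x = b with a sequential SuperLU LU factorization; the
     -mat_superlu_conditionnumber option is read when the factorization
     is built and prints the estimated 'Recip. condition number'. */
  PetscErrorCode solve_with_superlu(Mat A, Vec b, Vec x)
  {
    KSP            ksp;
    PC             pc;
    PetscErrorCode ierr;

    ierr = KSPCreate(PETSC_COMM_SELF,&ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPSetType(ksp,KSPPREONLY);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCLU);CHKERRQ(ierr);
    ierr = PCFactorSetMatSolverPackage(pc,MATSOLVERSUPERLU);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* respect command-line options */
    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    return 0;
  }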

Hong

> Hi Again,
>
> Thanks for everybody's prompt help.  I got petsc3.2-p7 running again, and that
> works, so at least I am able to get results again.
>
> I ran my test case with petsc3.2-p7 and petsc3.3-p5 and turned on -ksp_view
> output.  The file "output" is identical except for one line:
>
> tolerance for zero pivot 2.22045e-14 for petsc3.3-p5 and
> tolerance for zero pivot 1e-12 for petsc3.2-p7
>
> This is the file "output" from ksp_view
>
> KSP Object: 2 MPI processes
>   type: preonly
>   maximum iterations=10000, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>   left preconditioning
>   using NONE norm type for convergence test
> PC Object: 2 MPI processes
>   type: lu
>     LU: out-of-place factorization
>     tolerance for zero pivot 2.22045e-14
>     matrix ordering: natural
>     factor fill ratio given 0, needed 0
>       Factored matrix follows:
>         Matrix Object:         2 MPI processes
>           type: mpiaij
>           rows=6957, cols=6957
>           package used to perform factorization: superlu_dist
>           total: nonzeros=0, allocated nonzeros=0
>           total number of mallocs used during MatSetValues calls =0
>             SuperLU_DIST run parameters:
>               Process grid nprow 2 x npcol 1
>               Equilibrate matrix TRUE
>               Matrix input mode 1
>               Replace tiny pivots TRUE
>               Use iterative refinement FALSE
>               Processors in row 2 col partition 1
>               Row permutation LargeDiag
>               Column permutation METIS_AT_PLUS_A
>               Parallel symbolic factorization FALSE
>               Repeated factorization SamePattern_SameRowPerm
>   linear system matrix = precond matrix:
>   Matrix Object:   2 MPI processes
>     type: mpiaij
>     rows=6957, cols=6957
>     total: nonzeros=611043, allocated nonzeros=0
>     total number of mallocs used during MatSetValues calls =0
>       using I-node (on process 0) routines: found 1407 nodes, limit used is 5
>
> The residual vector going into the KSP solve is the same, but the solve gives
> a different answer:
>
> petsc3.3-p5:
>
> jacobian made 4.628e-01 seconds
> matrix inverted 9.669e-01 seconds
> # iterations 1 residual0 1.824e-05 du 4.290e-05 solve time: 1.821e-02 seconds
>
> petsc3.2-p7:
>
> jacobian made 4.279e-01 seconds
> matrix inverted 6.854e-01 seconds
> # iterations 1 residual0 1.824e-05 du 1.885e-05 solve time: 1.284e-02 seconds
>
> Where the output is calculated as:
>
> PetscLogDouble time1, time2;
> PetscInt       its;
> PetscErrorCode err;
>
> /* 2-norm of the right-hand side going into the solve */
> double resmax;
> VecNorm(petsc_f, NORM_2, &resmax);
>
> PetscGetTime(&time1);
> err = KSPSolve(ksp, petsc_f, petsc_du);
> CHKERRABORT(MPI_COMM_WORLD, err);
>
> /* 2-norm of the computed update du */
> double resmax2;
> VecNorm(petsc_du, NORM_2, &resmax2);
>
> KSPGetIterationNumber(ksp, &its);
> PetscGetTime(&time2);
> *gbl->log << "# iterations " << its << " residual0 " << resmax << " du "
>           << resmax2 << " solve time: " << time2 - time1 << " seconds" << endl;
>
>
> I can output the Jacobian and make sure that it is the same, but I'm guessing
> that it is.  Any other suggestions of things to try?  I'd be surprised if the
> zero-pivot tolerance had much to do with it, although I could be wrong; I've
> arranged the matrices to try to make them diagonally dominant going in.  Does
> anyone know how to change that setting off the top of their head (i.e., without
> me reading the manual)?
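>
> For reference, this is roughly how I would dump the Jacobian from each build
> for an offline comparison (a sketch only; the matrix handle petsc_J and the
> file name are placeholders, not the actual names in my code):
>
> PetscViewer viewer;
> /* write the assembled Jacobian to a binary file so the petsc3.2-p7 and
>    petsc3.3-p5 runs can be compared offline */
> PetscViewerBinaryOpen(PETSC_COMM_WORLD, "jacobian.bin", FILE_MODE_WRITE, &viewer);
> MatView(petsc_J, viewer);
> PetscViewerDestroy(&viewer);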
>
> Thanks again,
>
> Brian

