[petsc-users] Krylov Method Takes Too Long to Solve

Barry Smith bsmith at mcs.anl.gov
Fri Apr 22 16:25:51 CDT 2016


   How large are the problems you really want to solve? This is a pretty small problem for iterative methods; a direct solver may well be faster.
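
   For reference, the direct solve can be selected entirely from the command line with -ksp_type preonly -pc_type lu; in code it looks roughly like the sketch below (assuming a KSP object ksp that already has the matrix attached, vectors b and x, and the usual ierr error-code variable):

     PC pc;
     ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);  /* no Krylov iterations, just apply the preconditioner */
     ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
     ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);          /* sparse direct LU factorization */
     ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);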

  ILU/ICC methods are not particularly good for solid mechanics and are probably terrible for mixed formulations (if I understand what you mean by mixed formulation).

   Barry

> On Apr 22, 2016, at 4:09 PM, Jie Cheng <chengj5 at rpi.edu> wrote:
> 
> Hi 
> 
> I'm implementing a finite element method for nonlinear solid mechanics. The main part of my code that involves PETSc is the following: in each step, the tangent stiffness matrix A is formed and the linear system is solved for the increment of the nodal degrees of freedom (a standard Newton iteration). The problem is that when I use Krylov methods to solve the linear system, the KSPSolve call takes too long, although only 2 or 3 iterations are needed.
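> 
> (For context, the per-step linear solve is essentially the sketch below; the assembly of A and of the right-hand side b is omitted, and the variable names are just for illustration.)
> 
>   KSP ksp;
>   PetscErrorCode ierr;
>   /* A is the assembled tangent stiffness matrix, b the residual,
>      dx the unknown increment of the nodal degrees of freedom */
>   ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
>   ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);  /* same matrix as operator and preconditioner */
>   ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);      /* honors -ksp_type, -pc_type, -ksp_view, ... */
>   ierr = KSPSolve(ksp, b, dx);CHKERRQ(ierr);
>   ierr = KSPDestroy(&ksp);CHKERRQ(ierr);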
> 
> The finite element formulation is the displacement/pressure mixed formulation, which I believe is symmetric and positive-definite. However, if I pick the conjugate gradient method with ICC preconditioning, PETSc gives me a -8 converged reason (KSP_DIVERGED_INDEFINITE_PC), which indicates an indefinite preconditioner. After some trial and error, the only pair that works is GMRES plus PCKSP. But as I said, the KSPSolve call takes too much time.
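> 
> (The -8 code was read back roughly as in the sketch below; ksp, b and dx are as above, and the names are only for illustration.)
> 
>   PC pc;
>   KSPConvergedReason reason;
>   ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);   /* conjugate gradient */
>   ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
>   ierr = PCSetType(pc, PCICC);CHKERRQ(ierr);     /* incomplete Cholesky */
>   ierr = KSPSolve(ksp, b, dx);CHKERRQ(ierr);
>   ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
>   /* reason == KSP_DIVERGED_INDEFINITE_PC (-8): the ICC preconditioner was detected to be indefinite */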
> 
> A typical problem I'm solving has 16906 rows and 16906 columns. The output of -ksp_view is as follows:
> 
> KSP Object: 1 MPI processes
>   type: gmres
>     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>     GMRES: happy breakdown tolerance 1e-30
>   maximum iterations=10000, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object: 1 MPI processes
>   type: ksp
>   KSP and PC on KSP preconditioner follow
>   ---------------------------------
>     KSP Object:    (ksp_)     1 MPI processes
>       type: gmres
>         GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>         GMRES: happy breakdown tolerance 1e-30
>       maximum iterations=10000, initial guess is zero
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
>       left preconditioning
>       using PRECONDITIONED norm type for convergence test
>     PC Object:    (ksp_)     1 MPI processes
>       type: ilu
>         ILU: out-of-place factorization
>         0 levels of fill
>         tolerance for zero pivot 2.22045e-14
>         matrix ordering: natural
>         factor fill ratio given 1, needed 1
>           Factored matrix follows:
>             Mat Object:             1 MPI processes
>               type: seqaij
>               rows=16906, cols=16906
>               package used to perform factorization: petsc
>               total: nonzeros=988540, allocated nonzeros=988540
>               total number of mallocs used during MatSetValues calls =0
>                 using I-node routines: found 7582 nodes, limit used is 5
>       linear system matrix = precond matrix:
>       Mat Object:       1 MPI processes
>         type: seqaij
>         rows=16906, cols=16906
>         total: nonzeros=988540, allocated nonzeros=988540
>         total number of mallocs used during MatSetValues calls =0
>           using I-node routines: found 7582 nodes, limit used is 5
>   ---------------------------------
>   linear system matrix = precond matrix:
>   Mat Object:   1 MPI processes
>     type: seqaij
>     rows=16906, cols=16906
>     total: nonzeros=988540, allocated nonzeros=988540
>     total number of mallocs used during MatSetValues calls =0
>       using I-node routines: found 7582 nodes, limit used is 5
> 
> Could anyone give me any suggestions, please?
> 
> Thanks
> Jie Cheng


