[petsc-users] subprocess (block) tolerance.

Jed Brown jed at jedbrown.org
Thu Oct 4 17:47:19 CDT 2018


The subdomain KSP (flow_sub_) has type "preonly" so it always does
exactly one iteration.  If you were to use an iterative subdomain solver
(e.g., -flow_sub_ksp_type gmres) then those tolerances would be used.
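
As a sketch (the option names follow from the flow_sub_ prefix shown in the
-ksp_view output quoted below; the values are placeholders, not
recommendations), switching the blocks to an iterative solver and setting
their tolerances could look like this in a PETSc options file (or, without
the comment lines, on the command line):

  # use GMRES instead of preonly on each ASM subdomain block
  -flow_sub_ksp_type gmres
  # these subdomain tolerances are only consulted once the block KSP iterates
  -flow_sub_ksp_rtol 1.0e-5
  -flow_sub_ksp_max_it 50
  # keep ILU(0) as the subdomain preconditioner (the current setting)
  -flow_sub_pc_type ilu

With the preonly block solver shown below, the relative=1e-05 and
absolute=1e-50 values are just the unused KSP defaults.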

HeeHo Park <heeho.park at gmail.com> writes:

> Hi, I'm running PFLOTRAN, and it sets up flow_ and flow_sub_ solvers. I was
> wondering what the red underlined values (marked with asterisks below) mean
> (are they per-block tolerances?), how to change them, and whether changing
> them would affect convergence. The blue bold values are the ones I changed
> from the default linear-solver settings.
>
> FLOW Linear Solver
>                        solver: bcgs
>                preconditioner: asm
>                          *atol: 1.000000E-10*
>                          *rtol: 1.000000E-10*
>                          dtol: 1.000000E+04
>             maximum iteration: 10000
> KSP Object: (flow_) 8 MPI processes
>   type: bcgs
>   maximum iterations=10000, initial guess is zero
>   tolerances: *relative=1e-10, absolute=1e-10*, divergence=10000.
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object: (flow_) 8 MPI processes
>   type: asm
>     Additive Schwarz: total subdomain blocks = 8, amount of overlap = 1
>     Additive Schwarz: restriction/interpolation type - RESTRICT
>     [0] number of local blocks = 1
>     [1] number of local blocks = 1
>     [2] number of local blocks = 1
>     [3] number of local blocks = 1
>     [4] number of local blocks = 1
>     [5] number of local blocks = 1
>     [6] number of local blocks = 1
>     [7] number of local blocks = 1
>     Local solve info for each block is in the following KSP and PC objects:
>     - - - - - - - - - - - - - - - - - -
>     [0] local block number 0, size = 1389
>     KSP Object: (flow_sub_) 1 MPI processes
>       type: preonly
>       maximum iterations=10000, initial guess is zero
>       tolerances: *relative=1e-05, absolute=1e-50*, divergence=10000.
>       left preconditioning
>       using DEFAULT norm type for convergence test
>     PC Object: (flow_sub_) 1 MPI processes
>       type: ilu
>       PC has not been set up so information may be incomplete
>         out-of-place factorization
>         0 levels of fill
>         tolerance for zero pivot 2.22045e-14
>         using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
>         matrix ordering: natural
>       linear system matrix = precond matrix:
>       Mat Object: (flow_) 1 MPI processes
>         type: seqbaij
>         rows=1389, cols=1389, bs=3
>         total: nonzeros=20025, allocated nonzeros=20025
>         total number of mallocs used during MatSetValues calls =0
>             block size is 3
>     - - - - - - - - - - - - - - - - - -
>
> -- 
> HeeHo Daniel Park
