[petsc-users] Issue using multi-grid as a pre-conditioner with KSP for a Poisson problem

Matthew Knepley knepley at gmail.com
Fri Jun 23 05:36:45 CDT 2017


On Fri, Jun 23, 2017 at 12:54 AM, Jason Lefley <jason.lefley at aclectic.com>
wrote:

>
> On Jun 22, 2017, at 5:35 PM, Matthew Knepley <knepley at gmail.com> wrote:
>
> On Thu, Jun 22, 2017 at 3:20 PM, Jason Lefley <jason.lefley at aclectic.com>
> wrote:
>
>> Thanks for the prompt replies. I ran with gamg and the results look more
>> promising. I tried the suggested -mg_* options and did not see improvement.
>> The -ksp_view and -ksp_monitor_true_residual output from those tests and
>> the solver_test source (modified ex34.c) follow:
>>
>
> Okay, the second step is to replicate the GAMG smoother in the GMG setup, which
> will have a smaller and scalable setup time. Either the smoother could be weak,
> or the restriction could be bad.
>
>
> I inspected the ksp_view output from the run with -pc_type gamg and ran
> the program again with -pc_type mg and the pre-conditioner options from the
> gamg run:
>
> $ mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128
> -ksp_view -ksp_monitor_true_residual -ksp_type cg -pc_type mg -pc_mg_levels
> 5 -mg_coarse_ksp_type preonly -mg_coarse_pc_type bjacobi
> -mg_coarse_sub_ksp_type preonly -mg_coarse_sub_pc_type lu
> -mg_coarse_sub_pc_factor_shift_type INBLOCKS -mg_levels_ksp_type
> chebyshev -mg_levels_pc_type sor -mg_levels_esteig_ksp_type gmres
> right hand side 2 norm: 512.
> right hand side infinity norm: 0.999097
> building operator with Dirichlet boundary conditions, global grid size:
> 128 x 128 x 128
> building operator with Dirichlet boundary conditions, global grid size: 16
> x 16 x 16
> building operator with Dirichlet boundary conditions, global grid size: 32
> x 32 x 32
> building operator with Dirichlet boundary conditions, global grid size: 64
> x 64 x 64
> building operator with Dirichlet boundary conditions, global grid size: 8
> x 8 x 8
>   0 KSP preconditioned resid norm 9.806726045668e+02 true resid norm
> 5.120000000000e+02 ||r(i)||/||b|| 1.000000000000e+00
>   1 KSP preconditioned resid norm 3.621361277232e+02 true resid norm
> 1.429430352211e+03 ||r(i)||/||b|| 2.791856156662e+00
>   2 KSP preconditioned resid norm 2.362961522860e+01 true resid norm
> 1.549620746006e+03 ||r(i)||/||b|| 3.026603019544e+00
>   3 KSP preconditioned resid norm 7.695073339717e+01 true resid norm
> 1.542148820317e+03 ||r(i)||/||b|| 3.012009414681e+00
>   4 KSP preconditioned resid norm 3.765270470793e+01 true resid norm
> 1.536405551882e+03 ||r(i)||/||b|| 3.000792093520e+00
>   5 KSP preconditioned resid norm 6.761970780882e+01 true resid norm
> 1.502842623846e+03 ||r(i)||/||b|| 2.935239499698e+00
>   6 KSP preconditioned resid norm 5.995995646652e+01 true resid norm
> 1.447456652501e+03 ||r(i)||/||b|| 2.827063774415e+00
>   7 KSP preconditioned resid norm 4.388139142285e+01 true resid norm
> 1.413766393419e+03 ||r(i)||/||b|| 2.761262487146e+00
>   8 KSP preconditioned resid norm 2.295909410512e+01 true resid norm
> 1.371727148377e+03 ||r(i)||/||b|| 2.679154586673e+00
>   9 KSP preconditioned resid norm 1.961908891359e+01 true resid norm
> 1.339113282715e+03 ||r(i)||/||b|| 2.615455630302e+00
>  10 KSP preconditioned resid norm 6.893687291220e+01 true resid norm
> 1.229592829746e+03 ||r(i)||/||b|| 2.401548495598e+00
>  11 KSP preconditioned resid norm 3.833567365382e+01 true resid norm
> 1.118085982483e+03 ||r(i)||/||b|| 2.183761684536e+00
>  12 KSP preconditioned resid norm 1.939604089596e+01 true resid norm
> 9.852672187664e+02 ||r(i)||/||b|| 1.924350036653e+00
>  13 KSP preconditioned resid norm 2.252075208204e+01 true resid norm
> 8.159187018709e+02 ||r(i)||/||b|| 1.593591214592e+00
>  14 KSP preconditioned resid norm 2.642782719810e+01 true resid norm
> 7.253214735753e+02 ||r(i)||/||b|| 1.416643503077e+00
>  15 KSP preconditioned resid norm 2.548817259250e+01 true resid norm
> 6.070018478722e+02 ||r(i)||/||b|| 1.185550484125e+00
>  16 KSP preconditioned resid norm 5.281972692525e+01 true resid norm
> 4.815894400238e+02 ||r(i)||/||b|| 9.406043750466e-01
>  17 KSP preconditioned resid norm 2.402884696592e+01 true resid norm
> 4.144462871860e+02 ||r(i)||/||b|| 8.094654046603e-01
>  18 KSP preconditioned resid norm 1.043080941574e+01 true resid norm
> 3.729148183697e+02 ||r(i)||/||b|| 7.283492546283e-01
>  19 KSP preconditioned resid norm 1.490375076082e+01 true resid norm
> 3.122057027160e+02 ||r(i)||/||b|| 6.097767631172e-01
>  20 KSP preconditioned resid norm 3.249426166084e+00 true resid norm
> 2.704136970440e+02 ||r(i)||/||b|| 5.281517520390e-01
>  21 KSP preconditioned resid norm 4.898441103047e+00 true resid norm
> 2.346045017813e+02 ||r(i)||/||b|| 4.582119175416e-01
>  22 KSP preconditioned resid norm 6.674657659594e+00 true resid norm
> 1.870390126135e+02 ||r(i)||/||b|| 3.653105715107e-01
>  23 KSP preconditioned resid norm 5.475921158065e+00 true resid norm
> 1.732176093821e+02 ||r(i)||/||b|| 3.383156433245e-01
>  24 KSP preconditioned resid norm 2.776421930727e+00 true resid norm
> 1.562809743536e+02 ||r(i)||/||b|| 3.052362780343e-01
>  25 KSP preconditioned resid norm 3.424602247354e+00 true resid norm
> 1.375628929963e+02 ||r(i)||/||b|| 2.686775253835e-01
>  26 KSP preconditioned resid norm 2.212037280808e+00 true resid norm
> 1.221828497054e+02 ||r(i)||/||b|| 2.386383783309e-01
>  27 KSP preconditioned resid norm 1.365474968893e+00 true resid norm
> 1.082476112493e+02 ||r(i)||/||b|| 2.114211157213e-01
>  28 KSP preconditioned resid norm 2.638907538318e+00 true resid norm
> 8.864935716757e+01 ||r(i)||/||b|| 1.731432757179e-01
>  29 KSP preconditioned resid norm 1.719908158919e+00 true resid norm
> 7.632670876324e+01 ||r(i)||/||b|| 1.490756030532e-01
>  30 KSP preconditioned resid norm 7.985033219249e-01 true resid norm
> 6.949169231958e+01 ||r(i)||/||b|| 1.357259615617e-01
>  31 KSP preconditioned resid norm 3.811670663811e+00 true resid norm
> 6.151000812796e+01 ||r(i)||/||b|| 1.201367346249e-01
>  32 KSP preconditioned resid norm 7.888148376757e+00 true resid norm
> 5.694823999920e+01 ||r(i)||/||b|| 1.112270312484e-01
>  33 KSP preconditioned resid norm 7.545633821809e-01 true resid norm
> 4.589854278402e+01 ||r(i)||/||b|| 8.964559137503e-02
>  34 KSP preconditioned resid norm 2.271801800991e+00 true resid norm
> 3.728668301821e+01 ||r(i)||/||b|| 7.282555276994e-02
>  35 KSP preconditioned resid norm 3.961087334680e+00 true resid norm
> 3.169140910721e+01 ||r(i)||/||b|| 6.189728341253e-02
>  36 KSP preconditioned resid norm 9.139405064634e-01 true resid norm
> 2.825299509385e+01 ||r(i)||/||b|| 5.518163104268e-02
>  37 KSP preconditioned resid norm 3.403605053170e-01 true resid norm
> 2.102215336663e+01 ||r(i)||/||b|| 4.105889329421e-02
>  38 KSP preconditioned resid norm 4.614799224677e-01 true resid norm
> 1.651863757642e+01 ||r(i)||/||b|| 3.226296401644e-02
>  39 KSP preconditioned resid norm 1.996074237552e+00 true resid norm
> 1.439868559977e+01 ||r(i)||/||b|| 2.812243281205e-02
>  40 KSP preconditioned resid norm 1.106018322401e+00 true resid norm
> 1.313250681787e+01 ||r(i)||/||b|| 2.564942737865e-02
>  41 KSP preconditioned resid norm 2.639402464711e-01 true resid norm
> 1.164910167179e+01 ||r(i)||/||b|| 2.275215170271e-02
>  42 KSP preconditioned resid norm 1.749941228669e-01 true resid norm
> 1.053438524789e+01 ||r(i)||/||b|| 2.057497118729e-02
>  43 KSP preconditioned resid norm 6.464433193720e-01 true resid norm
> 9.105614545741e+00 ||r(i)||/||b|| 1.778440340965e-02
>  44 KSP preconditioned resid norm 5.990029838187e-01 true resid norm
> 8.803151647663e+00 ||r(i)||/||b|| 1.719365556184e-02
>  45 KSP preconditioned resid norm 1.871777684116e-01 true resid norm
> 8.140591972598e+00 ||r(i)||/||b|| 1.589959369648e-02
>  46 KSP preconditioned resid norm 4.316459571157e-01 true resid norm
> 7.640223567698e+00 ||r(i)||/||b|| 1.492231165566e-02
>  47 KSP preconditioned resid norm 9.563142801536e-02 true resid norm
> 7.094762567710e+00 ||r(i)||/||b|| 1.385695814006e-02
>  48 KSP preconditioned resid norm 2.380088757747e-01 true resid norm
> 6.064559746487e+00 ||r(i)||/||b|| 1.184484325486e-02
>  49 KSP preconditioned resid norm 2.230779501200e-01 true resid norm
> 4.923827478633e+00 ||r(i)||/||b|| 9.616850544205e-03
>  50 KSP preconditioned resid norm 2.905071000609e-01 true resid norm
> 4.426620956264e+00 ||r(i)||/||b|| 8.645744055203e-03
>  51 KSP preconditioned resid norm 3.430194707482e-02 true resid norm
> 3.873957688918e+00 ||r(i)||/||b|| 7.566323611167e-03
>  52 KSP preconditioned resid norm 4.329652082337e-02 true resid norm
> 3.430571122778e+00 ||r(i)||/||b|| 6.700334224177e-03
>  53 KSP preconditioned resid norm 1.610976212900e-01 true resid norm
> 3.052757228648e+00 ||r(i)||/||b|| 5.962416462203e-03
>  54 KSP preconditioned resid norm 6.113252183681e-02 true resid norm
> 2.876793151138e+00 ||r(i)||/||b|| 5.618736623317e-03
>  55 KSP preconditioned resid norm 2.731463237482e-02 true resid norm
> 2.441017091077e+00 ||r(i)||/||b|| 4.767611506010e-03
>  56 KSP preconditioned resid norm 5.193746161496e-02 true resid norm
> 2.114917193241e+00 ||r(i)||/||b|| 4.130697643049e-03
>  57 KSP preconditioned resid norm 2.959513516137e-01 true resid norm
> 1.903828747377e+00 ||r(i)||/||b|| 3.718415522220e-03
>  58 KSP preconditioned resid norm 8.093802579621e-02 true resid norm
> 1.759070727559e+00 ||r(i)||/||b|| 3.435685014763e-03
>  59 KSP preconditioned resid norm 3.558590388480e-02 true resid norm
> 1.356337866126e+00 ||r(i)||/||b|| 2.649097394777e-03
>  60 KSP preconditioned resid norm 6.506508837044e-02 true resid norm
> 1.214979249890e+00 ||r(i)||/||b|| 2.373006347441e-03
>  61 KSP preconditioned resid norm 3.120758675191e-02 true resid norm
> 9.993321163196e-01 ||r(i)||/||b|| 1.951820539687e-03
>  62 KSP preconditioned resid norm 1.034431089486e-01 true resid norm
> 9.193137244810e-01 ||r(i)||/||b|| 1.795534618127e-03
>  63 KSP preconditioned resid norm 2.763120051285e-02 true resid norm
> 8.479698661132e-01 ||r(i)||/||b|| 1.656191144752e-03
>  64 KSP preconditioned resid norm 1.937546528918e-02 true resid norm
> 7.431839535619e-01 ||r(i)||/||b|| 1.451531159301e-03
>  65 KSP preconditioned resid norm 2.133391792161e-02 true resid norm
> 7.089428437765e-01 ||r(i)||/||b|| 1.384653991751e-03
>  66 KSP preconditioned resid norm 8.676771000819e-03 true resid norm
> 6.511166875850e-01 ||r(i)||/||b|| 1.271712280439e-03
> KSP Object: 16 MPI processes
>   type: cg
>   maximum iterations=10000, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object: 16 MPI processes
>   type: mg
>     MG: type is MULTIPLICATIVE, levels=5 cycles=v
>       Cycles per PCApply=1
>       Not using Galerkin computed coarse grid matrices
>   Coarse grid solver -- level -------------------------------
>     KSP Object:    (mg_coarse_)     16 MPI processes
>       type: preonly
>       maximum iterations=10000, initial guess is zero
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using NONE norm type for convergence test
>     PC Object:    (mg_coarse_)     16 MPI processes
>       type: bjacobi
>         block Jacobi: number of blocks = 16
>         Local solve is same for all blocks, in the following KSP and PC
> objects:
>       KSP Object:      (mg_coarse_sub_)       1 MPI processes
>         type: preonly
>         maximum iterations=10000, initial guess is zero
>         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>         left preconditioning
>         using NONE norm type for convergence test
>       PC Object:      (mg_coarse_sub_)       1 MPI processes
>         type: lu
>           LU: out-of-place factorization
>           tolerance for zero pivot 2.22045e-14
>           using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
>           matrix ordering: nd
>           factor fill ratio given 5., needed 2.3125
>             Factored matrix follows:
>               Mat Object:               1 MPI processes
>                 type: seqaij
>                 rows=32, cols=32
>                 package used to perform factorization: petsc
>                 total: nonzeros=370, allocated nonzeros=370
>                 total number of mallocs used during MatSetValues calls =0
>                   not using I-node routines
>         linear system matrix = precond matrix:
>         Mat Object:         1 MPI processes
>           type: seqaij
>           rows=32, cols=32
>           total: nonzeros=160, allocated nonzeros=160
>           total number of mallocs used during MatSetValues calls =0
>             not using I-node routines
>       linear system matrix = precond matrix:
>       Mat Object:       16 MPI processes
>         type: mpiaij
>         rows=512, cols=512
>         total: nonzeros=3200, allocated nonzeros=3200
>         total number of mallocs used during MatSetValues calls =0
>   Down solver (pre-smoother) on level 1 -------------------------------
>     KSP Object:    (mg_levels_1_)     16 MPI processes
>       type: chebyshev
>         Chebyshev: eigenvalue estimates:  min = 0.153005, max = 1.68306
>         Chebyshev: eigenvalues estimated using gmres with translations
>  [0. 0.1; 0. 1.1]
>         KSP Object:        (mg_levels_1_esteig_)         16 MPI processes
>           type: gmres
>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>             GMRES: happy breakdown tolerance 1e-30
>           maximum iterations=10, initial guess is zero
>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>           left preconditioning
>           using PRECONDITIONED norm type for convergence test
>       maximum iterations=2
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using nonzero initial guess
>       using NONE norm type for convergence test
>     PC Object:    (mg_levels_1_)     16 MPI processes
>       type: sor
>         SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1.
>       linear system matrix = precond matrix:
>       Mat Object:       16 MPI processes
>         type: mpiaij
>         rows=4096, cols=4096
>         total: nonzeros=27136, allocated nonzeros=27136
>         total number of mallocs used during MatSetValues calls =0
>   Up solver (post-smoother) same as down solver (pre-smoother)
>   Down solver (pre-smoother) on level 2 -------------------------------
>     KSP Object:    (mg_levels_2_)     16 MPI processes
>       type: chebyshev
>         Chebyshev: eigenvalue estimates:  min = 0.152793, max = 1.68072
>         Chebyshev: eigenvalues estimated using gmres with translations
>  [0. 0.1; 0. 1.1]
>         KSP Object:        (mg_levels_2_esteig_)         16 MPI processes
>           type: gmres
>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>             GMRES: happy breakdown tolerance 1e-30
>           maximum iterations=10, initial guess is zero
>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>           left preconditioning
>           using PRECONDITIONED norm type for convergence test
>       maximum iterations=2
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using nonzero initial guess
>       using NONE norm type for convergence test
>     PC Object:    (mg_levels_2_)     16 MPI processes
>       type: sor
>         SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1.
>       linear system matrix = precond matrix:
>       Mat Object:       16 MPI processes
>         type: mpiaij
>         rows=32768, cols=32768
>         total: nonzeros=223232, allocated nonzeros=223232
>         total number of mallocs used during MatSetValues calls =0
>   Up solver (post-smoother) same as down solver (pre-smoother)
>   Down solver (pre-smoother) on level 3 -------------------------------
>     KSP Object:    (mg_levels_3_)     16 MPI processes
>       type: chebyshev
>         Chebyshev: eigenvalue estimates:  min = 0.144705, max = 1.59176
>         Chebyshev: eigenvalues estimated using gmres with translations
>  [0. 0.1; 0. 1.1]
>         KSP Object:        (mg_levels_3_esteig_)         16 MPI processes
>           type: gmres
>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>             GMRES: happy breakdown tolerance 1e-30
>           maximum iterations=10, initial guess is zero
>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>           left preconditioning
>           using PRECONDITIONED norm type for convergence test
>       maximum iterations=2
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using nonzero initial guess
>       using NONE norm type for convergence test
>     PC Object:    (mg_levels_3_)     16 MPI processes
>       type: sor
>         SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1.
>       linear system matrix = precond matrix:
>       Mat Object:       16 MPI processes
>         type: mpiaij
>         rows=262144, cols=262144
>         total: nonzeros=1810432, allocated nonzeros=1810432
>         total number of mallocs used during MatSetValues calls =0
>   Up solver (post-smoother) same as down solver (pre-smoother)
>   Down solver (pre-smoother) on level 4 -------------------------------
>     KSP Object:    (mg_levels_4_)     16 MPI processes
>       type: chebyshev
>         Chebyshev: eigenvalue estimates:  min = 0.140039, max = 1.54043
>         Chebyshev: eigenvalues estimated using gmres with translations
>  [0. 0.1; 0. 1.1]
>         KSP Object:        (mg_levels_4_esteig_)         16 MPI processes
>           type: gmres
>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>             GMRES: happy breakdown tolerance 1e-30
>           maximum iterations=10, initial guess is zero
>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>           left preconditioning
>           using PRECONDITIONED norm type for convergence test
>       maximum iterations=2
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using nonzero initial guess
>       using NONE norm type for convergence test
>     PC Object:    (mg_levels_4_)     16 MPI processes
>       type: sor
>         SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1.
>       linear system matrix = precond matrix:
>       Mat Object:       16 MPI processes
>         type: mpiaij
>         rows=2097152, cols=2097152
>         total: nonzeros=14581760, allocated nonzeros=14581760
>         total number of mallocs used during MatSetValues calls =0
>   Up solver (post-smoother) same as down solver (pre-smoother)
>   linear system matrix = precond matrix:
>   Mat Object:   16 MPI processes
>     type: mpiaij
>     rows=2097152, cols=2097152
>     total: nonzeros=14581760, allocated nonzeros=14581760
>     total number of mallocs used during MatSetValues calls =0
> Residual 2 norm 0.651117
> Residual infinity norm 0.00799571
>
>
> I did a diff of the ksp_view output from the above run against the output
> from the run with -pc_type gamg, and the only differences are the needed
> factor fill ratio (gamg: 1, mg: 2.3125), the sizes and non-zero counts of
> the matrices used on the multi-grid levels, the Chebyshev eigenvalue
> estimates, and the usage of I-node routines (gamg: using I-node routines:
> found 3 nodes, limit used is 5; mg: not using I-node routines).
>
> Adding -pc_mg_galerkin results in some improvement, but convergence is still
> not as good as with gamg:
>
> $ mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128
> -ksp_view -ksp_monitor_true_residual -ksp_type cg -pc_type mg -pc_mg_levels
> 5 -mg_coarse_ksp_type preonly -mg_coarse_pc_type bjacobi
> -mg_coarse_sub_ksp_type preonly -mg_coarse_sub_pc_type lu
> -mg_coarse_sub_pc_factor_shift_type INBLOCKS -mg_levels_ksp_type
> chebyshev -mg_levels_pc_type sor -mg_levels_esteig_ksp_type gmres
> -pc_mg_galerkin
> right hand side 2 norm: 512.
> right hand side infinity norm: 0.999097
> building operator with Dirichlet boundary conditions, global grid size:
> 128 x 128 x 128
>   0 KSP preconditioned resid norm 1.073621701581e+00 true resid norm
> 5.120000000000e+02 ||r(i)||/||b|| 1.000000000000e+00
>   1 KSP preconditioned resid norm 2.316341151889e-01 true resid norm
> 1.169096072553e+03 ||r(i)||/||b|| 2.283390766706e+00
>   2 KSP preconditioned resid norm 1.054910990128e-01 true resid norm
> 4.444993518786e+02 ||r(i)||/||b|| 8.681627966378e-01
>   3 KSP preconditioned resid norm 3.671488511570e-02 true resid norm
> 1.169431518627e+02 ||r(i)||/||b|| 2.284045934818e-01
>   4 KSP preconditioned resid norm 1.055769111265e-02 true resid norm
> 3.161333456265e+01 ||r(i)||/||b|| 6.174479406767e-02
>   5 KSP preconditioned resid norm 2.557907008002e-03 true resid norm
> 9.319742572653e+00 ||r(i)||/||b|| 1.820262221221e-02
>   6 KSP preconditioned resid norm 5.039866236685e-04 true resid norm
> 2.418858575838e+00 ||r(i)||/||b|| 4.724333155934e-03
>   7 KSP preconditioned resid norm 1.132965683654e-04 true resid norm
> 4.979511177091e-01 ||r(i)||/||b|| 9.725607767757e-04
>   8 KSP preconditioned resid norm 5.458028025084e-05 true resid norm
> 1.150321233127e-01 ||r(i)||/||b|| 2.246721158452e-04
>   9 KSP preconditioned resid norm 3.742558792121e-05 true resid norm
> 8.485603638598e-02 ||r(i)||/||b|| 1.657344460664e-04
>  10 KSP preconditioned resid norm 1.121838737544e-05 true resid norm
> 4.699890661073e-02 ||r(i)||/||b|| 9.179473947407e-05
>  11 KSP preconditioned resid norm 4.452473763175e-06 true resid norm
> 1.071140093264e-02 ||r(i)||/||b|| 2.092070494657e-05
> KSP Object: 16 MPI processes
>   type: cg
>   maximum iterations=10000, initial guess is zero
>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object: 16 MPI processes
>   type: mg
>     MG: type is MULTIPLICATIVE, levels=5 cycles=v
>       Cycles per PCApply=1
>       Using Galerkin computed coarse grid matrices
>   Coarse grid solver -- level -------------------------------
>     KSP Object:    (mg_coarse_)     16 MPI processes
>       type: preonly
>       maximum iterations=10000, initial guess is zero
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using NONE norm type for convergence test
>     PC Object:    (mg_coarse_)     16 MPI processes
>       type: bjacobi
>         block Jacobi: number of blocks = 16
>         Local solve is same for all blocks, in the following KSP and PC
> objects:
>       KSP Object:      (mg_coarse_sub_)       1 MPI processes
>         type: preonly
>         maximum iterations=10000, initial guess is zero
>         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>         left preconditioning
>         using NONE norm type for convergence test
>       PC Object:      (mg_coarse_sub_)       1 MPI processes
>         type: lu
>           LU: out-of-place factorization
>           tolerance for zero pivot 2.22045e-14
>           using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
>           matrix ordering: nd
>           factor fill ratio given 5., needed 2.3125
>             Factored matrix follows:
>               Mat Object:               1 MPI processes
>                 type: seqaij
>                 rows=32, cols=32
>                 package used to perform factorization: petsc
>                 total: nonzeros=370, allocated nonzeros=370
>                 total number of mallocs used during MatSetValues calls =0
>                   not using I-node routines
>         linear system matrix = precond matrix:
>         Mat Object:         1 MPI processes
>           type: seqaij
>           rows=32, cols=32
>           total: nonzeros=160, allocated nonzeros=160
>           total number of mallocs used during MatSetValues calls =0
>             not using I-node routines
>       linear system matrix = precond matrix:
>       Mat Object:       16 MPI processes
>         type: mpiaij
>         rows=512, cols=512
>         total: nonzeros=3200, allocated nonzeros=3200
>         total number of mallocs used during MatSetValues calls =0
>           not using I-node (on process 0) routines
>   Down solver (pre-smoother) on level 1 -------------------------------
>     KSP Object:    (mg_levels_1_)     16 MPI processes
>       type: chebyshev
>         Chebyshev: eigenvalue estimates:  min = 0.153005, max = 1.68306
>         Chebyshev: eigenvalues estimated using gmres with translations
>  [0. 0.1; 0. 1.1]
>         KSP Object:        (mg_levels_1_esteig_)         16 MPI processes
>           type: gmres
>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>             GMRES: happy breakdown tolerance 1e-30
>           maximum iterations=10, initial guess is zero
>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>           left preconditioning
>           using PRECONDITIONED norm type for convergence test
>       maximum iterations=2
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using nonzero initial guess
>       using NONE norm type for convergence test
>     PC Object:    (mg_levels_1_)     16 MPI processes
>       type: sor
>         SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1.
>       linear system matrix = precond matrix:
>       Mat Object:       16 MPI processes
>         type: mpiaij
>         rows=4096, cols=4096
>         total: nonzeros=27136, allocated nonzeros=27136
>         total number of mallocs used during MatSetValues calls =0
>           not using I-node (on process 0) routines
>   Up solver (post-smoother) same as down solver (pre-smoother)
>   Down solver (pre-smoother) on level 2 -------------------------------
>     KSP Object:    (mg_levels_2_)     16 MPI processes
>       type: chebyshev
>         Chebyshev: eigenvalue estimates:  min = 0.152793, max = 1.68072
>         Chebyshev: eigenvalues estimated using gmres with translations
>  [0. 0.1; 0. 1.1]
>         KSP Object:        (mg_levels_2_esteig_)         16 MPI processes
>           type: gmres
>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>             GMRES: happy breakdown tolerance 1e-30
>           maximum iterations=10, initial guess is zero
>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>           left preconditioning
>           using PRECONDITIONED norm type for convergence test
>       maximum iterations=2
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using nonzero initial guess
>       using NONE norm type for convergence test
>     PC Object:    (mg_levels_2_)     16 MPI processes
>       type: sor
>         SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1.
>       linear system matrix = precond matrix:
>       Mat Object:       16 MPI processes
>         type: mpiaij
>         rows=32768, cols=32768
>         total: nonzeros=223232, allocated nonzeros=223232
>         total number of mallocs used during MatSetValues calls =0
>           not using I-node (on process 0) routines
>   Up solver (post-smoother) same as down solver (pre-smoother)
>   Down solver (pre-smoother) on level 3 -------------------------------
>     KSP Object:    (mg_levels_3_)     16 MPI processes
>       type: chebyshev
>         Chebyshev: eigenvalue estimates:  min = 0.144705, max = 1.59176
>         Chebyshev: eigenvalues estimated using gmres with translations
>  [0. 0.1; 0. 1.1]
>         KSP Object:        (mg_levels_3_esteig_)         16 MPI processes
>           type: gmres
>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>             GMRES: happy breakdown tolerance 1e-30
>           maximum iterations=10, initial guess is zero
>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>           left preconditioning
>           using PRECONDITIONED norm type for convergence test
>       maximum iterations=2
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using nonzero initial guess
>       using NONE norm type for convergence test
>     PC Object:    (mg_levels_3_)     16 MPI processes
>       type: sor
>         SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1.
>       linear system matrix = precond matrix:
>       Mat Object:       16 MPI processes
>         type: mpiaij
>         rows=262144, cols=262144
>         total: nonzeros=1810432, allocated nonzeros=1810432
>         total number of mallocs used during MatSetValues calls =0
>           not using I-node (on process 0) routines
>   Up solver (post-smoother) same as down solver (pre-smoother)
>   Down solver (pre-smoother) on level 4 -------------------------------
>     KSP Object:    (mg_levels_4_)     16 MPI processes
>       type: chebyshev
>         Chebyshev: eigenvalue estimates:  min = 0.140039, max = 1.54043
>         Chebyshev: eigenvalues estimated using gmres with translations
>  [0. 0.1; 0. 1.1]
>         KSP Object:        (mg_levels_4_esteig_)         16 MPI processes
>           type: gmres
>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
> Orthogonalization with no iterative refinement
>             GMRES: happy breakdown tolerance 1e-30
>           maximum iterations=10, initial guess is zero
>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>           left preconditioning
>           using PRECONDITIONED norm type for convergence test
>       maximum iterations=2
>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>       left preconditioning
>       using nonzero initial guess
>       using NONE norm type for convergence test
>     PC Object:    (mg_levels_4_)     16 MPI processes
>       type: sor
>         SOR: type = local_symmetric, iterations = 1, local iterations = 1,
> omega = 1.
>       linear system matrix = precond matrix:
>       Mat Object:       16 MPI processes
>         type: mpiaij
>         rows=2097152, cols=2097152
>         total: nonzeros=14581760, allocated nonzeros=14581760
>         total number of mallocs used during MatSetValues calls =0
>   Up solver (post-smoother) same as down solver (pre-smoother)
>   linear system matrix = precond matrix:
>   Mat Object:   16 MPI processes
>     type: mpiaij
>     rows=2097152, cols=2097152
>     total: nonzeros=14581760, allocated nonzeros=14581760
>     total number of mallocs used during MatSetValues calls =0
> Residual 2 norm 0.0107114
> Residual infinity norm 6.84843e-05
>
> What are the differences between gamg and mg with the -pc_mg_galerkin option
> (apart from the default smoother/coarse grid solver options, which I
> identified by comparing the ksp_view output)? Perhaps there’s an issue with
> the restriction, as you suggested?
>

Okay, when you say a Poisson problem, I assumed you meant

  div grad phi = f

However, now it appears that you have

  div D grad phi = f

Is this true? It would explain your results. Your coarse operator is
inaccurate. AMG builds the coarse operator directly from the fine-grid
matrix, so it incorporates the coefficient variation. Galerkin projection
builds the coarse operator as R A P from your original operator A, and that
is also accurate enough to get good convergence. Plain GMG, however,
rediscretizes the equation on each coarse grid, so its accuracy depends on
how the coefficient is represented there, and right now that representation
is really bad. If you want to use GMG, you need to figure out how to
represent the coefficient on the coarser levels, which is sometimes called
"renormalization".
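
For what it's worth, here is a minimal sketch of that idea (the helper name
and array layout are illustrative, not from ex34.c), assuming a cell-centered
coefficient D stored per level in a flat C array and plain factor-2
coarsening: build each coarse-cell coefficient from the eight fine cells it
covers, then rediscretize on the coarse grid with the averaged coefficient.

#include <petscsys.h>

/* Average a fine-level, cell-centered coefficient onto the next coarser
   level. nf is the number of fine cells per direction (assumed even); the
   coarse grid has nc = nf/2 cells per direction. Df and Dc are row-major
   [k][j][i] arrays. */
static void CoarsenCoefficient(PetscInt nf, const PetscScalar *Df, PetscScalar *Dc)
{
  PetscInt nc = nf / 2;
  for (PetscInt k = 0; k < nc; k++)
    for (PetscInt j = 0; j < nc; j++)
      for (PetscInt i = 0; i < nc; i++) {
        PetscScalar sum = 0.0;
        for (PetscInt dk = 0; dk < 2; dk++)
          for (PetscInt dj = 0; dj < 2; dj++)
            for (PetscInt di = 0; di < 2; di++) {
              PetscInt fi = 2*i + di, fj = 2*j + dj, fk = 2*k + dk;
              sum += Df[(fk*nf + fj)*nf + fi];
            }
        /* Arithmetic mean of the 8 children; a harmonic mean is often
           better for strongly varying D, depending on the problem. */
        Dc[(k*nc + j)*nc + i] = sum / 8.0;
      }
}

If the coefficient is coarsened like this on every level before the operator
is rebuilt, the rediscretized coarse operators should come much closer to the
Galerkin R A P ones, which is presumably why -pc_mg_galerkin already helps in
your runs.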

   Matt


> Thanks!
>
>
>   Thanks,
>
>     Matt
>
>
>> $ mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128
>> -ksp_view -ksp_monitor_true_residual -pc_type gamg -ksp_type cg
>> right hand side 2 norm: 512.
>> right hand side infinity norm: 0.999097
>> building operator with Dirichlet boundary conditions, global grid size:
>> 128 x 128 x 128
>>   0 KSP preconditioned resid norm 2.600515167901e+00 true resid norm
>> 5.120000000000e+02 ||r(i)||/||b|| 1.000000000000e+00
>>   1 KSP preconditioned resid norm 6.715532962879e-02 true resid norm
>> 7.578946422553e+02 ||r(i)||/||b|| 1.480262973155e+00
>>   2 KSP preconditioned resid norm 1.127682308441e-02 true resid norm
>> 3.247852182315e+01 ||r(i)||/||b|| 6.343461293584e-02
>>   3 KSP preconditioned resid norm 7.760468503025e-04 true resid norm
>> 3.304142895659e+00 ||r(i)||/||b|| 6.453404093085e-03
>>   4 KSP preconditioned resid norm 6.419777870067e-05 true resid norm
>> 2.662993775521e-01 ||r(i)||/||b|| 5.201159717815e-04
>>   5 KSP preconditioned resid norm 5.107540549482e-06 true resid norm
>> 2.309528369351e-02 ||r(i)||/||b|| 4.510797596388e-05
>> KSP Object: 16 MPI processes
>>   type: cg
>>   maximum iterations=10000, initial guess is zero
>>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>   left preconditioning
>>   using PRECONDITIONED norm type for convergence test
>> PC Object: 16 MPI processes
>>   type: gamg
>>     MG: type is MULTIPLICATIVE, levels=5 cycles=v
>>       Cycles per PCApply=1
>>       Using Galerkin computed coarse grid matrices
>>       GAMG specific options
>>         Threshold for dropping small values from graph 0.
>>         AGG specific options
>>           Symmetric graph false
>>   Coarse grid solver -- level -------------------------------
>>     KSP Object:    (mg_coarse_)     16 MPI processes
>>       type: preonly
>>       maximum iterations=10000, initial guess is zero
>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>       left preconditioning
>>       using NONE norm type for convergence test
>>     PC Object:    (mg_coarse_)     16 MPI processes
>>       type: bjacobi
>>         block Jacobi: number of blocks = 16
>>         Local solve is same for all blocks, in the following KSP and PC
>> objects:
>>       KSP Object:      (mg_coarse_sub_)       1 MPI processes
>>         type: preonly
>>         maximum iterations=1, initial guess is zero
>>         tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>         left preconditioning
>>         using NONE norm type for convergence test
>>       PC Object:      (mg_coarse_sub_)       1 MPI processes
>>         type: lu
>>           LU: out-of-place factorization
>>           tolerance for zero pivot 2.22045e-14
>>           using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
>>           matrix ordering: nd
>>           factor fill ratio given 5., needed 1.
>>             Factored matrix follows:
>>               Mat Object:               1 MPI processes
>>                 type: seqaij
>>                 rows=13, cols=13
>>                 package used to perform factorization: petsc
>>                 total: nonzeros=169, allocated nonzeros=169
>>                 total number of mallocs used during MatSetValues calls =0
>>                   using I-node routines: found 3 nodes, limit used is 5
>>         linear system matrix = precond matrix:
>>         Mat Object:         1 MPI processes
>>           type: seqaij
>>           rows=13, cols=13
>>           total: nonzeros=169, allocated nonzeros=169
>>           total number of mallocs used during MatSetValues calls =0
>>             using I-node routines: found 3 nodes, limit used is 5
>>       linear system matrix = precond matrix:
>>       Mat Object:       16 MPI processes
>>         type: mpiaij
>>         rows=13, cols=13
>>         total: nonzeros=169, allocated nonzeros=169
>>         total number of mallocs used during MatSetValues calls =0
>>           using I-node (on process 0) routines: found 3 nodes, limit
>> used is 5
>>   Down solver (pre-smoother) on level 1 -------------------------------
>>     KSP Object:    (mg_levels_1_)     16 MPI processes
>>       type: chebyshev
>>         Chebyshev: eigenvalue estimates:  min = 0.136516, max = 1.50168
>>         Chebyshev: eigenvalues estimated using gmres with translations
>>  [0. 0.1; 0. 1.1]
>>         KSP Object:        (mg_levels_1_esteig_)         16 MPI processes
>>           type: gmres
>>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>>             GMRES: happy breakdown tolerance 1e-30
>>           maximum iterations=10, initial guess is zero
>>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>           left preconditioning
>>           using PRECONDITIONED norm type for convergence test
>>       maximum iterations=2
>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>       left preconditioning
>>       using nonzero initial guess
>>       using NONE norm type for convergence test
>>     PC Object:    (mg_levels_1_)     16 MPI processes
>>       type: sor
>>         SOR: type = local_symmetric, iterations = 1, local iterations =
>> 1, omega = 1.
>>       linear system matrix = precond matrix:
>>       Mat Object:       16 MPI processes
>>         type: mpiaij
>>         rows=467, cols=467
>>         total: nonzeros=68689, allocated nonzeros=68689
>>         total number of mallocs used during MatSetValues calls =0
>>           not using I-node (on process 0) routines
>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>   Down solver (pre-smoother) on level 2 -------------------------------
>>     KSP Object:    (mg_levels_2_)     16 MPI processes
>>       type: chebyshev
>>         Chebyshev: eigenvalue estimates:  min = 0.148872, max = 1.63759
>>         Chebyshev: eigenvalues estimated using gmres with translations
>>  [0. 0.1; 0. 1.1]
>>         KSP Object:        (mg_levels_2_esteig_)         16 MPI processes
>>           type: gmres
>>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>>             GMRES: happy breakdown tolerance 1e-30
>>           maximum iterations=10, initial guess is zero
>>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>           left preconditioning
>>           using PRECONDITIONED norm type for convergence test
>>       maximum iterations=2
>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>       left preconditioning
>>       using nonzero initial guess
>>       using NONE norm type for convergence test
>>     PC Object:    (mg_levels_2_)     16 MPI processes
>>       type: sor
>>         SOR: type = local_symmetric, iterations = 1, local iterations =
>> 1, omega = 1.
>>       linear system matrix = precond matrix:
>>       Mat Object:       16 MPI processes
>>         type: mpiaij
>>         rows=14893, cols=14893
>>         total: nonzeros=1856839, allocated nonzeros=1856839
>>         total number of mallocs used during MatSetValues calls =0
>>           not using I-node (on process 0) routines
>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>   Down solver (pre-smoother) on level 3 -------------------------------
>>     KSP Object:    (mg_levels_3_)     16 MPI processes
>>       type: chebyshev
>>         Chebyshev: eigenvalue estimates:  min = 0.135736, max = 1.49309
>>         Chebyshev: eigenvalues estimated using gmres with translations
>>  [0. 0.1; 0. 1.1]
>>         KSP Object:        (mg_levels_3_esteig_)         16 MPI processes
>>           type: gmres
>>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>>             GMRES: happy breakdown tolerance 1e-30
>>           maximum iterations=10, initial guess is zero
>>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>           left preconditioning
>>           using PRECONDITIONED norm type for convergence test
>>       maximum iterations=2
>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>       left preconditioning
>>       using nonzero initial guess
>>       using NONE norm type for convergence test
>>     PC Object:    (mg_levels_3_)     16 MPI processes
>>       type: sor
>>         SOR: type = local_symmetric, iterations = 1, local iterations =
>> 1, omega = 1.
>>       linear system matrix = precond matrix:
>>       Mat Object:       16 MPI processes
>>         type: mpiaij
>>         rows=190701, cols=190701
>>         total: nonzeros=6209261, allocated nonzeros=6209261
>>         total number of mallocs used during MatSetValues calls =0
>>           not using I-node (on process 0) routines
>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>   Down solver (pre-smoother) on level 4 -------------------------------
>>     KSP Object:    (mg_levels_4_)     16 MPI processes
>>       type: chebyshev
>>         Chebyshev: eigenvalue estimates:  min = 0.140039, max = 1.54043
>>         Chebyshev: eigenvalues estimated using gmres with translations
>>  [0. 0.1; 0. 1.1]
>>         KSP Object:        (mg_levels_4_esteig_)         16 MPI processes
>>           type: gmres
>>             GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>> Orthogonalization with no iterative refinement
>>             GMRES: happy breakdown tolerance 1e-30
>>           maximum iterations=10, initial guess is zero
>>           tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>           left preconditioning
>>           using PRECONDITIONED norm type for convergence test
>>       maximum iterations=2
>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>       left preconditioning
>>       using nonzero initial guess
>>       using NONE norm type for convergence test
>>     PC Object:    (mg_levels_4_)     16 MPI processes
>>       type: sor
>>         SOR: type = local_symmetric, iterations = 1, local iterations =
>> 1, omega = 1.
>>       linear system matrix = precond matrix:
>>       Mat Object:       16 MPI processes
>>         type: mpiaij
>>         rows=2097152, cols=2097152
>>         total: nonzeros=14581760, allocated nonzeros=14581760
>>         total number of mallocs used during MatSetValues calls =0
>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>   linear system matrix = precond matrix:
>>   Mat Object:   16 MPI processes
>>     type: mpiaij
>>     rows=2097152, cols=2097152
>>     total: nonzeros=14581760, allocated nonzeros=14581760
>>     total number of mallocs used during MatSetValues calls =0
>> Residual 2 norm 0.0230953
>> Residual infinity norm 0.000240174
>>
>>
>>
>> $ mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128
>> -ksp_view -ksp_monitor_true_residual -pc_type mg -ksp_type cg -pc_mg_levels
>> 5 -mg_levels_ksp_type richardson -mg_levels_ksp_richardson_self_scale
>> -mg_levels_ksp_max_it 5
>> right hand side 2 norm: 512.
>> right hand side infinity norm: 0.999097
>> building operator with Dirichlet boundary conditions, global grid size:
>> 128 x 128 x 128
>> building operator with Dirichlet boundary conditions, global grid size:
>> 16 x 16 x 16
>> building operator with Dirichlet boundary conditions, global grid size:
>> 32 x 32 x 32
>> building operator with Dirichlet boundary conditions, global grid size:
>> 64 x 64 x 64
>> building operator with Dirichlet boundary conditions, global grid size: 8
>> x 8 x 8
>>   0 KSP preconditioned resid norm 1.957390963372e+03 true resid norm
>> 5.120000000000e+02 ||r(i)||/||b|| 1.000000000000e+00
>>   1 KSP preconditioned resid norm 7.501162328351e+02 true resid norm
>> 3.373318498950e+02 ||r(i)||/||b|| 6.588512693262e-01
>>   2 KSP preconditioned resid norm 7.658993705113e+01 true resid norm
>> 1.827365322620e+02 ||r(i)||/||b|| 3.569072895742e-01
>>   3 KSP preconditioned resid norm 9.059824824329e+02 true resid norm
>> 1.426474831278e+02 ||r(i)||/||b|| 2.786083654840e-01
>>   4 KSP preconditioned resid norm 4.091168582134e+02 true resid norm
>> 1.292495057977e+02 ||r(i)||/||b|| 2.524404410112e-01
>>   5 KSP preconditioned resid norm 7.422110759274e+01 true resid norm
>> 1.258028404461e+02 ||r(i)||/||b|| 2.457086727463e-01
>>   6 KSP preconditioned resid norm 4.619015396949e+01 true resid norm
>> 1.213792421102e+02 ||r(i)||/||b|| 2.370688322464e-01
>>   7 KSP preconditioned resid norm 6.391009527793e+01 true resid norm
>> 1.124510270422e+02 ||r(i)||/||b|| 2.196309121917e-01
>>   8 KSP preconditioned resid norm 7.446926604265e+01 true resid norm
>> 1.077567310933e+02 ||r(i)||/||b|| 2.104623654166e-01
>>   9 KSP preconditioned resid norm 4.220904319642e+01 true resid norm
>> 9.988181971539e+01 ||r(i)||/||b|| 1.950816791316e-01
>>  10 KSP preconditioned resid norm 2.394387980018e+01 true resid norm
>> 9.127579669592e+01 ||r(i)||/||b|| 1.782730404217e-01
>>  11 KSP preconditioned resid norm 1.360843954226e+01 true resid norm
>> 8.771762326371e+01 ||r(i)||/||b|| 1.713234829369e-01
>>  12 KSP preconditioned resid norm 4.128223286694e+01 true resid norm
>> 8.529182941649e+01 ||r(i)||/||b|| 1.665856043291e-01
>>  13 KSP preconditioned resid norm 2.183532094447e+01 true resid norm
>> 8.263211340769e+01 ||r(i)||/||b|| 1.613908464994e-01
>>  14 KSP preconditioned resid norm 1.304178992338e+01 true resid norm
>> 7.971822602122e+01 ||r(i)||/||b|| 1.556996601977e-01
>>  15 KSP preconditioned resid norm 7.573349141411e+00 true resid norm
>> 7.520975377445e+01 ||r(i)||/||b|| 1.468940503407e-01
>>  16 KSP preconditioned resid norm 9.314890793459e+00 true resid norm
>> 7.304954328407e+01 ||r(i)||/||b|| 1.426748892267e-01
>>  17 KSP preconditioned resid norm 4.445933446231e+00 true resid norm
>> 6.978356031428e+01 ||r(i)||/||b|| 1.362960162388e-01
>>  18 KSP preconditioned resid norm 5.349719054065e+00 true resid norm
>> 6.667516877214e+01 ||r(i)||/||b|| 1.302249390081e-01
>>  19 KSP preconditioned resid norm 3.295861671942e+00 true resid norm
>> 6.182140339659e+01 ||r(i)||/||b|| 1.207449285090e-01
>>  20 KSP preconditioned resid norm 1.035616277789e+01 true resid norm
>> 5.734720030036e+01 ||r(i)||/||b|| 1.120062505866e-01
>>  21 KSP preconditioned resid norm 3.211186072853e+01 true resid norm
>> 5.552393909940e+01 ||r(i)||/||b|| 1.084451935535e-01
>>  22 KSP preconditioned resid norm 1.305589450595e+01 true resid norm
>> 5.499062776214e+01 ||r(i)||/||b|| 1.074035698479e-01
>>  23 KSP preconditioned resid norm 2.686432456763e+00 true resid norm
>> 5.207613218582e+01 ||r(i)||/||b|| 1.017111956754e-01
>>  24 KSP preconditioned resid norm 2.824784197849e+00 true resid norm
>> 4.838619801451e+01 ||r(i)||/||b|| 9.450429299708e-02
>>  25 KSP preconditioned resid norm 1.071690618667e+00 true resid norm
>> 4.607851421273e+01 ||r(i)||/||b|| 8.999709807174e-02
>>  26 KSP preconditioned resid norm 1.881879145107e+00 true resid norm
>> 4.001593265961e+01 ||r(i)||/||b|| 7.815611847581e-02
>>  27 KSP preconditioned resid norm 1.572862295402e+00 true resid norm
>> 3.838282973517e+01 ||r(i)||/||b|| 7.496646432650e-02
>>  28 KSP preconditioned resid norm 1.470751639074e+00 true resid norm
>> 3.480847634691e+01 ||r(i)||/||b|| 6.798530536506e-02
>>  29 KSP preconditioned resid norm 1.024975253805e+01 true resid norm
>> 3.242161363347e+01 ||r(i)||/||b|| 6.332346412788e-02
>>  30 KSP preconditioned resid norm 2.548780607710e+00 true resid norm
>> 3.146609403253e+01 ||r(i)||/||b|| 6.145721490728e-02
>>  31 KSP preconditioned resid norm 1.560691471465e+00 true resid norm
>> 2.970265802267e+01 ||r(i)||/||b|| 5.801300395052e-02
>>  32 KSP preconditioned resid norm 2.596714997356e+00 true resid norm
>> 2.766969046763e+01 ||r(i)||/||b|| 5.404236419458e-02
>>  33 KSP preconditioned resid norm 7.034818331385e+00 true resid norm
>> 2.684572557056e+01 ||r(i)||/||b|| 5.243305775501e-02
>>  34 KSP preconditioned resid norm 1.494072683898e+00 true resid norm
>> 2.475430030960e+01 ||r(i)||/||b|| 4.834824279219e-02
>>  35 KSP preconditioned resid norm 2.080781323538e+01 true resid norm
>> 2.334859550417e+01 ||r(i)||/||b|| 4.560272559409e-02
>>  36 KSP preconditioned resid norm 2.046655096031e+00 true resid norm
>> 2.240354154839e+01 ||r(i)||/||b|| 4.375691708669e-02
>>  37 KSP preconditioned resid norm 7.606846976760e-01 true resid norm
>> 2.109556419574e+01 ||r(i)||/||b|| 4.120227381981e-02
>>  38 KSP preconditioned resid norm 2.521301363193e+00 true resid norm
>> 1.843497075964e+01 ||r(i)||/||b|| 3.600580226493e-02
>>  39 KSP preconditioned resid norm 3.726976470079e+00 true resid norm
>> 1.794209917279e+01 ||r(i)||/||b|| 3.504316244686e-02
>>  40 KSP preconditioned resid norm 8.959884762705e-01 true resid norm
>> 1.573137783532e+01 ||r(i)||/||b|| 3.072534733461e-02
>>  41 KSP preconditioned resid norm 1.227682448888e+00 true resid norm
>> 1.501346415860e+01 ||r(i)||/||b|| 2.932317218476e-02
>>  42 KSP preconditioned resid norm 1.452770736534e+00 true resid norm
>> 1.433942919922e+01 ||r(i)||/||b|| 2.800669765473e-02
>>  43 KSP preconditioned resid norm 5.675352390898e-01 true resid norm
>> 1.216437815936e+01 ||r(i)||/||b|| 2.375855109250e-02
>>  44 KSP preconditioned resid norm 4.949409351772e-01 true resid norm
>> 1.042812110399e+01 ||r(i)||/||b|| 2.036742403123e-02
>>  45 KSP preconditioned resid norm 2.002853875915e+00 true resid norm
>> 9.309183650084e+00 ||r(i)||/||b|| 1.818199931657e-02
>>  46 KSP preconditioned resid norm 3.745525627399e-01 true resid norm
>> 8.522457639380e+00 ||r(i)||/||b|| 1.664542507691e-02
>>  47 KSP preconditioned resid norm 1.811694613170e-01 true resid norm
>> 7.531206553361e+00 ||r(i)||/||b|| 1.470938779953e-02
>>  48 KSP preconditioned resid norm 1.782171623447e+00 true resid norm
>> 6.764441307706e+00 ||r(i)||/||b|| 1.321179942911e-02
>>  49 KSP preconditioned resid norm 2.299828236176e+00 true resid norm
>> 6.702407994976e+00 ||r(i)||/||b|| 1.309064061519e-02
>>  50 KSP preconditioned resid norm 1.273834849543e+00 true resid norm
>> 6.053797247633e+00 ||r(i)||/||b|| 1.182382274928e-02
>>  51 KSP preconditioned resid norm 2.719578737249e-01 true resid norm
>> 5.470925517497e+00 ||r(i)||/||b|| 1.068540140136e-02
>>  52 KSP preconditioned resid norm 4.663757145206e-01 true resid norm
>> 5.005785517882e+00 ||r(i)||/||b|| 9.776924839614e-03
>>  53 KSP preconditioned resid norm 1.292565284376e+00 true resid norm
>> 4.881780753946e+00 ||r(i)||/||b|| 9.534728035050e-03
>>  54 KSP preconditioned resid norm 1.867369610632e-01 true resid norm
>> 4.496564950399e+00 ||r(i)||/||b|| 8.782353418749e-03
>>  55 KSP preconditioned resid norm 5.249392115789e-01 true resid norm
>> 4.092757959067e+00 ||r(i)||/||b|| 7.993667888803e-03
>>  56 KSP preconditioned resid norm 1.924525961621e-01 true resid norm
>> 3.780501481010e+00 ||r(i)||/||b|| 7.383791955098e-03
>>  57 KSP preconditioned resid norm 5.779420386829e-01 true resid norm
>> 3.213189014725e+00 ||r(i)||/||b|| 6.275759794385e-03
>>  58 KSP preconditioned resid norm 5.955339076981e-01 true resid norm
>> 3.112032435949e+00 ||r(i)||/||b|| 6.078188351463e-03
>>  59 KSP preconditioned resid norm 3.750139060970e-01 true resid norm
>> 2.999193364090e+00 ||r(i)||/||b|| 5.857799539239e-03
>>  60 KSP preconditioned resid norm 1.384679712935e-01 true resid norm
>> 2.745891157615e+00 ||r(i)||/||b|| 5.363068667216e-03
>>  61 KSP preconditioned resid norm 7.632834890339e-02 true resid norm
>> 2.176299405671e+00 ||r(i)||/||b|| 4.250584776702e-03
>>  62 KSP preconditioned resid norm 3.147491994853e-01 true resid norm
>> 1.832893972188e+00 ||r(i)||/||b|| 3.579871039430e-03
>>  63 KSP preconditioned resid norm 5.052243308649e-01 true resid norm
>> 1.775115122392e+00 ||r(i)||/||b|| 3.467021723421e-03
>>  64 KSP preconditioned resid norm 8.956523831283e-01 true resid norm
>> 1.731441975933e+00 ||r(i)||/||b|| 3.381722609244e-03
>>  65 KSP preconditioned resid norm 7.897527588669e-01 true resid norm
>> 1.682654829619e+00 ||r(i)||/||b|| 3.286435214100e-03
>>  66 KSP preconditioned resid norm 5.770941160165e-02 true resid norm
>> 1.560734518349e+00 ||r(i)||/||b|| 3.048309606150e-03
>>  67 KSP preconditioned resid norm 3.553024960194e-02 true resid norm
>> 1.389699750667e+00 ||r(i)||/||b|| 2.714257325521e-03
>>  68 KSP preconditioned resid norm 4.316233667769e-02 true resid norm
>> 1.147051776028e+00 ||r(i)||/||b|| 2.240335500054e-03
>>  69 KSP preconditioned resid norm 3.793691994632e-02 true resid norm
>> 1.012385825627e+00 ||r(i)||/||b|| 1.977316065678e-03
>>  70 KSP preconditioned resid norm 2.383460701011e-02 true resid norm
>> 8.696480161436e-01 ||r(i)||/||b|| 1.698531281530e-03
>>  71 KSP preconditioned resid norm 6.376655007996e-02 true resid norm
>> 7.779779636534e-01 ||r(i)||/||b|| 1.519488210261e-03
>>  72 KSP preconditioned resid norm 5.714768085413e-02 true resid norm
>> 7.153671793501e-01 ||r(i)||/||b|| 1.397201522168e-03
>>  73 KSP preconditioned resid norm 1.708395350387e-01 true resid norm
>> 6.312992319936e-01 ||r(i)||/||b|| 1.233006312487e-03
>>  74 KSP preconditioned resid norm 1.498516783452e-01 true resid norm
>> 6.006527781743e-01 ||r(i)||/||b|| 1.173149957372e-03
>>  75 KSP preconditioned resid norm 1.218071938641e-01 true resid norm
>> 5.769463903876e-01 ||r(i)||/||b|| 1.126848418726e-03
>>  76 KSP preconditioned resid norm 2.682030144251e-02 true resid norm
>> 5.214035118381e-01 ||r(i)||/||b|| 1.018366234059e-03
>>  77 KSP preconditioned resid norm 9.794744927328e-02 true resid norm
>> 4.660318995939e-01 ||r(i)||/||b|| 9.102185538943e-04
>>  78 KSP preconditioned resid norm 3.311394355245e-01 true resid norm
>> 4.581129176231e-01 ||r(i)||/||b|| 8.947517922325e-04
>>  79 KSP preconditioned resid norm 7.771705063438e-02 true resid norm
>> 4.103510898511e-01 ||r(i)||/||b|| 8.014669723654e-04
>>  80 KSP preconditioned resid norm 3.078123608908e-02 true resid norm
>> 3.918493012988e-01 ||r(i)||/||b|| 7.653306665991e-04
>>  81 KSP preconditioned resid norm 2.759088686744e-02 true resid norm
>> 3.289360804743e-01 ||r(i)||/||b|| 6.424532821763e-04
>>  82 KSP preconditioned resid norm 1.147671489846e-01 true resid norm
>> 3.190902200515e-01 ||r(i)||/||b|| 6.232230860381e-04
>>  83 KSP preconditioned resid norm 1.101306468440e-02 true resid norm
>> 2.900815313985e-01 ||r(i)||/||b|| 5.665654910126e-04
>> KSP Object: 16 MPI processes
>>   type: cg
>>   maximum iterations=10000, initial guess is zero
>>   tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>   left preconditioning
>>   using PRECONDITIONED norm type for convergence test
>> PC Object: 16 MPI processes
>>   type: mg
>>     MG: type is MULTIPLICATIVE, levels=5 cycles=v
>>       Cycles per PCApply=1
>>       Not using Galerkin computed coarse grid matrices
>>   Coarse grid solver -- level -------------------------------
>>     KSP Object:    (mg_coarse_)     16 MPI processes
>>       type: preonly
>>       maximum iterations=10000, initial guess is zero
>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>       left preconditioning
>>       using NONE norm type for convergence test
>>     PC Object:    (mg_coarse_)     16 MPI processes
>>       type: redundant
>>         Redundant preconditioner: First (color=0) of 16 PCs follows
>>         KSP Object:        (mg_coarse_redundant_)         1 MPI processes
>>           type: preonly
>>           maximum iterations=10000, initial guess is zero
>>           tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>           left preconditioning
>>           using NONE norm type for convergence test
>>         PC Object:        (mg_coarse_redundant_)         1 MPI processes
>>           type: lu
>>             LU: out-of-place factorization
>>             tolerance for zero pivot 2.22045e-14
>>             using diagonal shift on blocks to prevent zero pivot
>> [INBLOCKS]
>>             matrix ordering: nd
>>             factor fill ratio given 5., needed 7.56438
>>               Factored matrix follows:
>>                 Mat Object:                 1 MPI processes
>>                   type: seqaij
>>                   rows=512, cols=512
>>                   package used to perform factorization: petsc
>>                   total: nonzeros=24206, allocated nonzeros=24206
>>                   total number of mallocs used during MatSetValues calls =0
>>                     not using I-node routines
>>           linear system matrix = precond matrix:
>>           Mat Object:           1 MPI processes
>>             type: seqaij
>>             rows=512, cols=512
>>             total: nonzeros=3200, allocated nonzeros=3200
>>             total number of mallocs used during MatSetValues calls =0
>>               not using I-node routines
>>       linear system matrix = precond matrix:
>>       Mat Object:       16 MPI processes
>>         type: mpiaij
>>         rows=512, cols=512
>>         total: nonzeros=3200, allocated nonzeros=3200
>>         total number of mallocs used during MatSetValues calls =0
>>   Down solver (pre-smoother) on level 1 -------------------------------
>>     KSP Object:    (mg_levels_1_)     16 MPI processes
>>       type: richardson
>>         Richardson: using self-scale best computed damping factor
>>       maximum iterations=5
>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>       left preconditioning
>>       using nonzero initial guess
>>       using NONE norm type for convergence test
>>     PC Object:    (mg_levels_1_)     16 MPI processes
>>       type: sor
>>         SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>       linear system matrix = precond matrix:
>>       Mat Object:       16 MPI processes
>>         type: mpiaij
>>         rows=4096, cols=4096
>>         total: nonzeros=27136, allocated nonzeros=27136
>>         total number of mallocs used during MatSetValues calls =0
>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>   Down solver (pre-smoother) on level 2 -------------------------------
>>     KSP Object:    (mg_levels_2_)     16 MPI processes
>>       type: richardson
>>         Richardson: using self-scale best computed damping factor
>>       maximum iterations=5
>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>       left preconditioning
>>       using nonzero initial guess
>>       using NONE norm type for convergence test
>>     PC Object:    (mg_levels_2_)     16 MPI processes
>>       type: sor
>>         SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>       linear system matrix = precond matrix:
>>       Mat Object:       16 MPI processes
>>         type: mpiaij
>>         rows=32768, cols=32768
>>         total: nonzeros=223232, allocated nonzeros=223232
>>         total number of mallocs used during MatSetValues calls =0
>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>   Down solver (pre-smoother) on level 3 -------------------------------
>>     KSP Object:    (mg_levels_3_)     16 MPI processes
>>       type: richardson
>>         Richardson: using self-scale best computed damping factor
>>       maximum iterations=5
>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>       left preconditioning
>>       using nonzero initial guess
>>       using NONE norm type for convergence test
>>     PC Object:    (mg_levels_3_)     16 MPI processes
>>       type: sor
>>         SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>       linear system matrix = precond matrix:
>>       Mat Object:       16 MPI processes
>>         type: mpiaij
>>         rows=262144, cols=262144
>>         total: nonzeros=1810432, allocated nonzeros=1810432
>>         total number of mallocs used during MatSetValues calls =0
>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>   Down solver (pre-smoother) on level 4 -------------------------------
>>     KSP Object:    (mg_levels_4_)     16 MPI processes
>>       type: richardson
>>         Richardson: using self-scale best computed damping factor
>>       maximum iterations=5
>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>       left preconditioning
>>       using nonzero initial guess
>>       using NONE norm type for convergence test
>>     PC Object:    (mg_levels_4_)     16 MPI processes
>>       type: sor
>>         SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>       linear system matrix = precond matrix:
>>       Mat Object:       16 MPI processes
>>         type: mpiaij
>>         rows=2097152, cols=2097152
>>         total: nonzeros=14581760, allocated nonzeros=14581760
>>         total number of mallocs used during MatSetValues calls =0
>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>   linear system matrix = precond matrix:
>>   Mat Object:   16 MPI processes
>>     type: mpiaij
>>     rows=2097152, cols=2097152
>>     total: nonzeros=14581760, allocated nonzeros=14581760
>>     total number of mallocs used during MatSetValues calls =0
>> Residual 2 norm 0.290082
>> Residual infinity norm 0.00192869
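
For reference, the -ksp_view above shows a 5-level multiplicative V-cycle with a
redundant LU coarse solve on the 8 x 8 x 8 grid and five iterations of
self-scaled Richardson/SOR smoothing on each finer level. A command line of
roughly the following form should reproduce that configuration (a sketch
reconstructed from the options visible in the view, not the exact command used):

mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128 \
    -ksp_type cg -pc_type mg -pc_mg_levels 5 \
    -mg_levels_ksp_type richardson -mg_levels_ksp_richardson_self_scale \
    -mg_levels_ksp_max_it 5 -mg_levels_pc_type sor \
    -ksp_view -ksp_monitor_true_residual
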
>>
>>
>>
>>
>>
>> solver_test.c:
>>
>> // modified version of ksp/ksp/examples/tutorials/ex34.c
>> // related: ksp/ksp/examples/tutorials/ex29.c
>> //          ksp/ksp/examples/tutorials/ex32.c
>> //          ksp/ksp/examples/tutorials/ex50.c
>>
>> #include <petscdm.h>
>> #include <petscdmda.h>
>> #include <petscksp.h>
>>
>> extern PetscErrorCode ComputeMatrix(KSP,Mat,Mat,void*);
>> extern PetscErrorCode ComputeRHS(KSP,Vec,void*);
>>
>> typedef enum
>> {
>>     DIRICHLET,
>>     NEUMANN
>> } BCType;
>>
>> #undef __FUNCT__
>> #define __FUNCT__ "main"
>> int main(int argc,char **argv)
>> {
>>     KSP            ksp;
>>     DM             da;
>>     PetscReal      norm;
>>     PetscErrorCode ierr;
>>
>>     PetscInt    i,j,k,mx,my,mz,xm,ym,zm,xs,ys,zs;
>>     PetscScalar Hx,Hy,Hz;
>>     PetscScalar ***array;
>>     Vec         x,b,r;
>>     Mat         J;
>>     const char* bcTypes[2] = { "dirichlet", "neumann" };
>>     PetscInt    bcType = (PetscInt)DIRICHLET;
>>
>>     PetscInitialize(&argc,&argv,(char*)0,0);
>>
>>     ierr = PetscOptionsBegin(PETSC_COMM_WORLD, "", "", "");CHKERRQ(ierr);
>>     ierr = PetscOptionsEList("-bc_type", "Type of boundary condition",
>> "", bcTypes, 2, bcTypes[0], &bcType, NULL);CHKERRQ(ierr);
>>     ierr = PetscOptionsEnd();CHKERRQ(ierr);
>>
>>     ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
>>     ierr = DMDACreate3d(PETSC_COMM_WORLD,DM_BOUNDARY_NONE,DM_BOUNDARY_NONE,
>>                         DM_BOUNDARY_NONE,DMDA_STENCIL_STAR,-12,-12,-12,
>>                         PETSC_DECIDE,PETSC_DECIDE,PETSC_DECIDE,1,1,0,0,0,&da);CHKERRQ(ierr);
>>     ierr = DMDASetInterpolationType(da, DMDA_Q0);CHKERRQ(ierr);
>>
>>     ierr = KSPSetDM(ksp,da);CHKERRQ(ierr);
>>
>>     ierr = KSPSetComputeRHS(ksp,ComputeRHS,&bcType);CHKERRQ(ierr);
>>     ierr = KSPSetComputeOperators(ksp,ComputeMatrix,&bcType);CHKERRQ(ierr);
>>     ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
>>     ierr = KSPSolve(ksp,NULL,NULL);CHKERRQ(ierr);
>>     ierr = KSPGetSolution(ksp,&x);CHKERRQ(ierr);
>>     ierr = KSPGetRhs(ksp,&b);CHKERRQ(ierr);
>>     ierr = KSPGetOperators(ksp,NULL,&J);CHKERRQ(ierr);
>>     ierr = VecDuplicate(b,&r);CHKERRQ(ierr);
>>
>>     ierr = MatMult(J,x,r);CHKERRQ(ierr);
>>     ierr = VecAYPX(r,-1.0,b);CHKERRQ(ierr);
>>     ierr = VecNorm(r,NORM_2,&norm);CHKERRQ(ierr);
>>     ierr = PetscPrintf(PETSC_COMM_WORLD,"Residual 2 norm %g\n",(double)norm);CHKERRQ(ierr);
>>     ierr = VecNorm(r,NORM_INFINITY,&norm);CHKERRQ(ierr);
>>     ierr = PetscPrintf(PETSC_COMM_WORLD,"Residual infinity norm %g\n",(double)norm);CHKERRQ(ierr);
>>
>>     ierr = VecDestroy(&r);CHKERRQ(ierr);
>>     ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
>>     ierr = DMDestroy(&da);CHKERRQ(ierr);
>>     ierr = PetscFinalize();
>>     return 0;
>> }
>>
>> #undef __FUNCT__
>> #define __FUNCT__ "ComputeRHS"
>> PetscErrorCode ComputeRHS(KSP ksp,Vec b,void *ctx)
>> {
>>     PetscErrorCode ierr;
>>     PetscInt       i,j,k,mx,my,mz,xm,ym,zm,xs,ys,zs;
>>     PetscScalar    Hx,Hy,Hz;
>>     PetscScalar    ***array;
>>     DM             da;
>>     BCType bcType = *(BCType*)ctx;
>>
>>     PetscFunctionBeginUser;
>>     ierr = KSPGetDM(ksp,&da);CHKERRQ(ierr);
>>     ierr = DMDAGetInfo(da, 0, &mx, &my, &mz,
>> 0,0,0,0,0,0,0,0,0);CHKERRQ(ierr);
>>     Hx   = 1.0 / (PetscReal)(mx);
>>     Hy   = 1.0 / (PetscReal)(my);
>>     Hz   = 1.0 / (PetscReal)(mz);
>>     ierr = DMDAGetCorners(da,&xs,&ys,&zs,&xm,&ym,&zm);CHKERRQ(ierr);
>>     ierr = DMDAVecGetArray(da, b, &array);CHKERRQ(ierr);
>>     for (k = zs; k < zs + zm; k++)
>>     {
>>         for (j = ys; j < ys + ym; j++)
>>         {
>>             for (i = xs; i < xs + xm; i++)
>>             {
>>                 PetscReal x = ((PetscReal)i + 0.5) * Hx;
>>                 PetscReal y = ((PetscReal)j + 0.5) * Hy;
>>                 PetscReal z = ((PetscReal)k + 0.5) * Hz;
>>                 array[k][j][i] = PetscSinReal(x * 2.0 * PETSC_PI) *
>> PetscCosReal(y * 2.0 * PETSC_PI) * PetscSinReal(z * 2.0 * PETSC_PI);
>>             }
>>         }
>>     }
>>     ierr = DMDAVecRestoreArray(da, b, &array);CHKERRQ(ierr);
>>     ierr = VecAssemblyBegin(b);CHKERRQ(ierr);
>>     ierr = VecAssemblyEnd(b);CHKERRQ(ierr);
>>
>>     PetscReal norm;
>>     VecNorm(b, NORM_2, &norm);
>>     PetscPrintf(PETSC_COMM_WORLD, "right hand side 2 norm: %g\n",
>> (double)norm);
>>     VecNorm(b, NORM_INFINITY, &norm);
>>     PetscPrintf(PETSC_COMM_WORLD, "right hand side infinity norm: %g\n",
>> (double)norm);
>>
>>     /* force right hand side to be consistent for singular matrix */
>>     /* note this is really a hack; normally the model would provide you
>>        with a consistent right hand side */
>>
>>     if (bcType == NEUMANN)
>>     {
>>         MatNullSpace nullspace;
>>         ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,0,&nullspace);CHKERRQ(ierr);
>>         ierr = MatNullSpaceRemove(nullspace,b);CHKERRQ(ierr);
>>         ierr = MatNullSpaceDestroy(&nullspace);CHKERRQ(ierr);
>>     }
>>     PetscFunctionReturn(0);
>> }
>>
>>
>> #undef __FUNCT__
>> #define __FUNCT__ "ComputeMatrix"
>> PetscErrorCode ComputeMatrix(KSP ksp, Mat J,Mat jac, void *ctx)
>> {
>>     PetscErrorCode ierr;
>>     PetscInt       i,j,k,mx,my,mz,xm,ym,zm,xs,ys,zs,num, numi, numj,
>> numk;
>>     PetscScalar    v[7],Hx,Hy,Hz;
>>     MatStencil     row, col[7];
>>     DM             da;
>>     BCType bcType = *(BCType*)ctx;
>>
>>     PetscFunctionBeginUser;
>>
>>     if (bcType == DIRICHLET)
>>         PetscPrintf(PETSC_COMM_WORLD, "building operator with Dirichlet boundary conditions, ");
>>     else if (bcType == NEUMANN)
>>         PetscPrintf(PETSC_COMM_WORLD, "building operator with Neumann boundary conditions, ");
>>     else
>>         SETERRQ(PETSC_COMM_WORLD, PETSC_ERR_SUP, "unrecognized boundary condition type\n");
>>
>>     ierr    = KSPGetDM(ksp,&da);CHKERRQ(ierr);
>>     ierr    = DMDAGetInfo(da,0,&mx,&my,&mz,0,0,0,0,0,0,0,0,0);CHKERRQ(ierr);
>>
>>     PetscPrintf(PETSC_COMM_WORLD, "global grid size: %d x %d x %d\n",
>> mx, my, mz);
>>
>>     Hx      = 1.0 / (PetscReal)(mx);
>>     Hy      = 1.0 / (PetscReal)(my);
>>     Hz      = 1.0 / (PetscReal)(mz);
>>
>>     PetscReal Hx2 = Hx * Hx;
>>     PetscReal Hy2 = Hy * Hy;
>>     PetscReal Hz2 = Hz * Hz;
>>
>>     PetscReal scaleX = 1.0 / Hx2;
>>     PetscReal scaleY = 1.0 / Hy2;
>>     PetscReal scaleZ = 1.0 / Hz2;
>>
>>     ierr    = DMDAGetCorners(da,&xs,&ys,&zs,&xm,&ym,&zm);CHKERRQ(ierr);
>>     for (k = zs; k < zs + zm; k++)
>>     {
>>         for (j = ys; j < ys + ym; j++)
>>         {
>>             for (i = xs; i < xs + xm; i++)
>>             {
>>                 row.i = i;
>>                 row.j = j;
>>                 row.k = k;
>>                 if (i == 0 || j == 0 || k == 0 || i == mx - 1 || j == my
>> - 1 || k == mz - 1)
>>                 {
>>                     num = 0;
>>                     numi = 0;
>>                     numj = 0;
>>                     numk = 0;
>>                     if (k != 0)
>>                     {
>>                         v[num] = -scaleZ;
>>                         col[num].i = i;
>>                         col[num].j = j;
>>                         col[num].k = k - 1;
>>                         num++;
>>                         numk++;
>>                     }
>>                     if (j != 0)
>>                     {
>>                         v[num] = -scaleY;
>>                         col[num].i = i;
>>                         col[num].j = j - 1;
>>                         col[num].k = k;
>>                         num++;
>>                         numj++;
>>                     }
>>                     if (i != 0)
>>                     {
>>                         v[num] = -scaleX;
>>                         col[num].i = i - 1;
>>                         col[num].j = j;
>>                         col[num].k = k;
>>                         num++;
>>                         numi++;
>>                     }
>>                     if (i != mx - 1)
>>                     {
>>                         v[num] = -scaleX;
>>                         col[num].i = i + 1;
>>                         col[num].j = j;
>>                         col[num].k = k;
>>                         num++;
>>                         numi++;
>>                     }
>>                     if (j != my - 1)
>>                     {
>>                         v[num] = -scaleY;
>>                         col[num].i = i;
>>                         col[num].j = j + 1;
>>                         col[num].k = k;
>>                         num++;
>>                         numj++;
>>                     }
>>                     if (k != mz - 1)
>>                     {
>>                         v[num] = -scaleZ;
>>                         col[num].i = i;
>>                         col[num].j = j;
>>                         col[num].k = k + 1;
>>                         num++;
>>                         numk++;
>>                     }
>>
>>                     if (bcType == NEUMANN)
>>                     {
>>                         v[num] = (PetscReal) (numk) * scaleZ +
>> (PetscReal) (numj) * scaleY + (PetscReal) (numi) * scaleX;
>>                     }
>>                     else if (bcType == DIRICHLET)
>>                     {
>>                         v[num] = 2.0 * (scaleX + scaleY + scaleZ);
>>                     }
>>
>>                     col[num].i = i;
>>                     col[num].j = j;
>>                     col[num].k = k;
>>                     num++;
>>                     ierr = MatSetValuesStencil(jac, 1, &row, num, col,
>> v, INSERT_VALUES);
>>                     CHKERRQ(ierr);
>>                 }
>>                 else
>>                 {
>>                     v[0] = -scaleZ;
>>                     col[0].i = i;
>>                     col[0].j = j;
>>                     col[0].k = k - 1;
>>                     v[1] = -scaleY;
>>                     col[1].i = i;
>>                     col[1].j = j - 1;
>>                     col[1].k = k;
>>                     v[2] = -scaleX;
>>                     col[2].i = i - 1;
>>                     col[2].j = j;
>>                     col[2].k = k;
>>                     v[3] = 2.0 * (scaleX + scaleY + scaleZ);
>>                     col[3].i = i;
>>                     col[3].j = j;
>>                     col[3].k = k;
>>                     v[4] = -scaleX;
>>                     col[4].i = i + 1;
>>                     col[4].j = j;
>>                     col[4].k = k;
>>                     v[5] = -scaleY;
>>                     col[5].i = i;
>>                     col[5].j = j + 1;
>>                     col[5].k = k;
>>                     v[6] = -scaleZ;
>>                     col[6].i = i;
>>                     col[6].j = j;
>>                     col[6].k = k + 1;
>>                     ierr = MatSetValuesStencil(jac, 1, &row, 7, col, v,
>> INSERT_VALUES);
>>                     CHKERRQ(ierr);
>>                 }
>>             }
>>         }
>>     }
>>     ierr = MatAssemblyBegin(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>>     ierr = MatAssemblyEnd(jac,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
>>     if (bcType == NEUMANN)
>>     {
>>         MatNullSpace   nullspace;
>>         ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,0,&nullspace);CHKERRQ(ierr);
>>         ierr = MatSetNullSpace(J,nullspace);CHKERRQ(ierr);
>>         ierr = MatNullSpaceDestroy(&nullspace);CHKERRQ(ierr);
>>     }
>>     PetscFunctionReturn(0);
>> }
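
Read directly from ComputeMatrix above: for interior cells the assembled rows are
the standard 7-point Laplacian stencil scaled by the inverse squared cell sizes,

    (2/Hx^2 + 2/Hy^2 + 2/Hz^2) * u[i][j][k]
        - (u[i-1][j][k] + u[i+1][j][k]) / Hx^2
        - (u[i][j-1][k] + u[i][j+1][k]) / Hy^2
        - (u[i][j][k-1] + u[i][j][k+1]) / Hz^2  =  b[i][j][k]

Boundary rows keep the same diagonal, 2*(scaleX + scaleY + scaleZ), in the
Dirichlet case but drop the off-diagonal entries for neighbours outside the
domain; in the Neumann case the diagonal is instead the sum of the retained
off-diagonal magnitudes, so each boundary row still sums to zero.
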
>>
>>
>> On Jun 22, 2017, at 9:23 AM, Matthew Knepley <knepley at gmail.com> wrote:
>>
>> On Wed, Jun 21, 2017 at 8:12 PM, Jason Lefley <jason.lefley at aclectic.com>
>>  wrote:
>>
>>> Hello,
>>>
>>> We are attempting to use the PETSc KSP solver framework in a fluid
>>> dynamics simulation we developed. The solution is part of a pressure
>>> projection and solves a Poisson problem. We use a cell-centered layout with
>>> a regular grid in 3d. We started with ex34.c from the KSP tutorials since
>>> it has the correct calls for the 3d DMDA, uses a cell-centered layout, and
>>> states that it works with multi-grid. We modified the operator construction
>>> function to match the coefficients and Dirichlet boundary conditions used
>>> in our problem (we’d also like to support Neumann but left those out for
>>> now to keep things simple). As a result of the modified boundary
>>> conditions, our version does not perform a null space removal on the right
>>> hand side or operator as the original did. We also modified the right hand
>>> side to contain a sinusoidal pattern for testing. Other than these changes,
>>> our code is the same as the original ex34.c
>>>
>>> With the default KSP options, using CG both with the default
>>> pre-conditioner and with no pre-conditioner, we see good convergence.
>>> However, we'd like to accelerate the time to solution further and target
>>> larger problem sizes (>= 1024^3) if possible. Given these objectives,
>>> multi-grid as a pre-conditioner interests us. To understand the improvement
>>> that multi-grid provides, we ran ex45 from the KSP tutorials: ex34 with CG
>>> and no pre-conditioner appears to converge in a single iteration, and we
>>> wanted to compare against a problem whose convergence behaviour is closer
>>> to our problem's. Here are the tests we ran with ex45:
>>>
>>> mpirun -n 16 ./ex45 -da_grid_x 129 -da_grid_y 129 -da_grid_z 129
>>>         time in KSPSolve(): 7.0178e+00
>>>         solver iterations: 157
>>>         KSP final norm of residual: 3.16874e-05
>>>
>>> mpirun -n 16 ./ex45 -da_grid_x 129 -da_grid_y 129 -da_grid_z 129
>>> -ksp_type cg -pc_type none
>>>         time in KSPSolve(): 4.1072e+00
>>>         solver iterations: 213
>>>         KSP final norm of residual: 0.000138866
>>>
>>> mpirun -n 16 ./ex45 -da_grid_x 129 -da_grid_y 129 -da_grid_z 129
>>> -ksp_type cg
>>>         time in KSPSolve(): 3.3962e+00
>>>         solver iterations: 88
>>>         KSP final norm of residual: 6.46242e-05
>>>
>>> mpirun -n 16 ./ex45 -da_grid_x 129 -da_grid_y 129 -da_grid_z 129
>>> -pc_type mg -pc_mg_levels 5 -mg_levels_ksp_type richardson
>>> -mg_levels_ksp_max_it 1 -mg_levels_pc_type bjacobi
>>>         time in KSPSolve(): 1.3201e+00
>>>         solver iterations: 4
>>>         KSP final norm of residual: 8.13339e-05
>>>
>>> mpirun -n 16 ./ex45 -da_grid_x 129 -da_grid_y 129 -da_grid_z 129
>>> -pc_type mg -pc_mg_levels 5 -mg_levels_ksp_type richardson
>>> -mg_levels_ksp_max_it 1 -mg_levels_pc_type bjacobi -ksp_type cg
>>>         time in KSPSolve(): 1.3008e+00
>>>         solver iterations: 4
>>>         KSP final norm of residual: 2.21474e-05
>>>
>>> We found the multi-grid pre-conditioner options in the KSP tutorials
>>> makefile. These results make sense; both the default GMRES and CG solvers
>>> converge and CG without a pre-conditioner takes more iterations. The
>>> multi-grid pre-conditioned runs are pretty dramatically accelerated and
>>> require only a handful of iterations.
>>>
>>> We ran our code (modified ex34.c as described above) with the same
>>> parameters:
>>>
>>> mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128
>>>         time in KSPSolve(): 5.3729e+00
>>>         solver iterations: 123
>>>         KSP final norm of residual: 0.00595066
>>>
>>> mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128
>>> -ksp_type cg -pc_type none
>>>         time in KSPSolve(): 3.6154e+00
>>>         solver iterations: 188
>>>         KSP final norm of residual: 0.00505943
>>>
>>> mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128
>>> -ksp_type cg
>>>         time in KSPSolve(): 3.5661e+00
>>>         solver iterations: 98
>>>         KSP final norm of residual: 0.00967462
>>>
>>> mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128
>>> -pc_type mg -pc_mg_levels 5 -mg_levels_ksp_type richardson
>>> -mg_levels_ksp_max_it 1 -mg_levels_pc_type bjacobi
>>>         time in KSPSolve(): 4.5606e+00
>>>         solver iterations: 44
>>>         KSP final norm of residual: 949.553
>>>
>>
>> 1) Dave is right
>>
>> 2) In order to see how many iterates to expect, first try using algebraic
>> multigrid
>>
>>   -pc_type gamg
>>
>> This should work out of the box for Poisson
>>
>> 3) For questions like this, we really need to see
>>
>>   -ksp_view -ksp_monitor_true_residual
>>
>> 4) It sounds like your smoother is not strong enough. You could try
>>
>>   -mg_levels_ksp_type richardson -mg_levels_ksp_richardson_self_scale
>> -mg_levels_ksp_max_it 5
>>
>> or maybe GMRES until it works.
>>
>>  Thanks,
>>
>>     Matt
>>
>>
>>> mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128
>>> -pc_type mg -pc_mg_levels 5 -mg_levels_ksp_type richardson
>>> -mg_levels_ksp_max_it 1 -mg_levels_pc_type bjacobi -ksp_type cg
>>>         time in KSPSolve(): 1.5481e+01
>>>         solver iterations: 198
>>>         KSP final norm of residual: 0.916558
>>>
>>> We performed all tests with petsc-3.7.6.
>>>
>>> The trends with CG and GMRES seem consistent with the results from ex45.
>>> However, with multi-grid, something doesn’t seem right. Convergence seems
>>> poor and the solves run for many more iterations than ex45 with multi-grid
>>> as a pre-conditioner. I extensively validated the code that builds the
>>> matrix and also confirmed that the solution produced by CG, when evaluated
>>> with the system of equations elsewhere in our simulation, produces the same
>>> residual as indicated by PETSc. Given that we only made minimal
>>> modifications to the original example code, it seems likely that the
>>> operators constructed for the multi-grid levels are ok.
>>>
>>> We also tried a variety of other multi-grid pre-conditioner parameters
>>> suggested in related mailing list posts, but we didn't observe any
>>> significant improvement over the results above.
>>>
>>> Is there anything we can do to check the validity of the coefficient
>>> matrices built for the different multi-grid levels? Does it look like there
>>> could be problems there? Or any other suggestions to achieve better results
>>> with multi-grid? I have the -log_view, -ksp_view, and convergence monitor
>>> output from the above tests and can post any of it if it would assist.
>>>
>>> Thanks
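
On the question of checking the validity of the coefficient matrices on each
multi-grid level: once the preconditioner has been set up (e.g. after KSPSolve),
the per-level operators can be pulled out of the PCMG hierarchy and inspected
programmatically. A minimal sketch, assuming -pc_type mg (or gamg) is in use;
the helper name ViewMGLevelOperators is made up here for illustration, while
PCMGGetLevels, PCMGGetSmoother, KSPGetOperators and MatView are standard PETSc
calls:

#include <petscksp.h>

/* Illustrative helper: print the operator attached to each PCMG level.
   Call after KSPSetUp()/KSPSolve() so the multigrid hierarchy exists,
   and only when the PC is of type PCMG (or PCGAMG). */
PetscErrorCode ViewMGLevelOperators(KSP ksp)
{
    PC             pc;
    PetscInt       nlevels, l, m, n;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCMGGetLevels(pc, &nlevels);CHKERRQ(ierr);
    for (l = 0; l < nlevels; l++)
    {
        KSP smoother;
        Mat A;

        /* level 0 is the coarse grid; its "smoother" is the coarse solver */
        ierr = PCMGGetSmoother(pc, l, &smoother);CHKERRQ(ierr);
        ierr = KSPGetOperators(smoother, &A, NULL);CHKERRQ(ierr);
        ierr = MatGetSize(A, &m, &n);CHKERRQ(ierr);
        ierr = PetscPrintf(PETSC_COMM_WORLD, "level %D operator: %D x %D\n", l, m, n);CHKERRQ(ierr);
        ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
    }
    PetscFunctionReturn(0);
}

Comparing a dumped coarse operator against a direct discretization on the same
coarse grid (or against the Galerkin product of interpolation and the fine
operator) is one way to see whether the rediscretized levels look reasonable.
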
>>
>>
>>
>>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

http://www.caam.rice.edu/~mk51/

