[petsc-users] Convergence of AMG

Manav Bhatia bhatiamanav at gmail.com
Sun Oct 28 16:12:05 CDT 2018


Var: 0,…,5 are the 6 variables that I am solving for: u, v, w, theta_x, theta_y, theta_z. 

The norms reported in my email are the L2 norms of the dofs belonging to each variable in the solution vector. So "var: 0: u: norm" is the L2 norm of the u dofs only, and so on. 
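
In case it is useful, here is a minimal sketch of how such per-variable norms can be computed in PETSc, assuming the dofs are interlaced node-major with block size 6; the function and variable names below are placeholders for illustration, not my actual code:

    #include <petscvec.h>

    /* Print the L2 norm of the dofs of each of the 6 variables stored in an
       interlaced solution vector whose block size has been set to 6. */
    PetscErrorCode PrintVariableNorms(Vec sol)
    {
      const char    *names[6] = {"u", "v", "w", "tx", "ty", "tz"};
      PetscReal      nrm;
      PetscInt       var;
      PetscErrorCode ierr;

      for (var = 0; var < 6; var++) {
        /* VecStrideNorm() uses every 6th entry, starting at offset 'var' */
        ierr = VecStrideNorm(sol, var, NORM_2, &nrm);CHKERRQ(ierr);
        ierr = PetscPrintf(PETSC_COMM_WORLD, "var: %d: %s : norm: %e\n",
                           (int)var, names[var], (double)nrm);CHKERRQ(ierr);
      }
      return 0;
    }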

I expect u, v, theta_z to be zero for the solution, which ends up being the case. 

If I plot the solution, it looks sensible, but the reduction of the KSP residual norm is slow. 


Thanks,
Manav

> On Oct 28, 2018, at 3:55 PM, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
> 
> 
> 
>> On Oct 28, 2018, at 12:16 PM, Manav Bhatia <bhatiamanav at gmail.com> wrote:
>> 
>> Hi, 
>> 
>>   I am attempting to solve a Mindlin plate bending problem with the AMG solver in PETSc. This test case uses a mesh of 300x300 elements and 543,606 dofs. 
>> 
>>   The discretization includes 6 variables (u, v, w, tx, ty, tz), but only three are relevant for plate bending (w, tx, ty). 
>> 
>>   I am calling the solver with the following options: 
>> 
>> -pc_type gamg -pc_gamg_threshold 0. --node-major-dofs -mat_block_size 6 -ksp_rtol 1.e-8 -ksp_monitor -ksp_converged_reason -ksp_view 
>> 
>>  The convergence behavior is shown below, along with the ksp_view information. Based on notes in the manual, this seems to be a subpar convergence rate. At the end of the solve, the norm of each variable is: 
>> 
>> var: 0: u  : norm: 5.505909e-18
>> var: 1: v  : norm: 7.639640e-18
>> var: 2: w : norm: 3.901464e-03
>> var: 3: tx : norm: 4.403576e-02
>> var: 4: ty : norm: 4.403576e-02
>> var: 5: tz : norm: 1.148409e-16
> 
>   What do you mean by var: 2: w : norm, etc.? Is this the norm of the error for that variable, the norm of the residual, or something else? How exactly are you calculating it?
> 
>    Thanks
> 
> 
>   Barry
> 
>> 
>>  I tried different values of -ksp_rtol, from 1e-1 to 1e-8, and this does not make much difference in the norms of (w, tx, ty). 
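>> 
>>  (For reference, a minimal sketch of the equivalent programmatic call, assuming "ksp" is the KSP object; I actually set this through the command line, so this is only illustrative:
>> 
>>     /* set only the relative tolerance, leave the others at their defaults */
>>     ierr = KSPSetTolerances(ksp, 1.e-8, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr);
>>  )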
>> 
>>  I do provide the solver with 6 rigid-body vectors to approximate the near null space of the problem. Without these, the solver shows very poor convergence. 
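>> 
>>  A minimal sketch of how these get attached, assuming the 6 rigid-body vectors have already been assembled and orthonormalized in an array rbm[6] (the names here are placeholders for illustration):
>> 
>>     MatNullSpace nsp;
>> 
>>     /* PETSc expects the near-null-space vectors to be orthonormal */
>>     ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 6, rbm, &nsp);CHKERRQ(ierr);
>>     ierr = MatSetNearNullSpace(A, nsp);CHKERRQ(ierr);  /* A is the assembled system matrix */
>>     ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);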
>> 
>>  I would appreciate advice on possible strategies to improve this behavior. 
>> 
>> Thanks,
>> Manav 
>> 
>>    0 KSP Residual norm 1.696304497261e+00 
>>    1 KSP Residual norm 1.120485505777e+00 
>>    2 KSP Residual norm 8.324222302402e-01 
>>    3 KSP Residual norm 6.477349534115e-01 
>>    4 KSP Residual norm 5.080936471292e-01 
>>    5 KSP Residual norm 4.051099646638e-01 
>>    6 KSP Residual norm 3.260432664653e-01 
>>    7 KSP Residual norm 2.560483838143e-01 
>>    8 KSP Residual norm 2.029943986124e-01 
>>    9 KSP Residual norm 1.560985741610e-01 
>>   10 KSP Residual norm 1.163720702140e-01 
>>   11 KSP Residual norm 8.488411085459e-02 
>>   12 KSP Residual norm 5.888041729034e-02 
>>   13 KSP Residual norm 4.027792209980e-02 
>>   14 KSP Residual norm 2.819048087304e-02 
>>   15 KSP Residual norm 1.904674196962e-02 
>>   16 KSP Residual norm 1.289302447822e-02 
>>   17 KSP Residual norm 9.162203296376e-03 
>>   18 KSP Residual norm 7.016781679507e-03 
>>   19 KSP Residual norm 5.399170865328e-03 
>>   20 KSP Residual norm 4.254385887482e-03 
>>   21 KSP Residual norm 3.530831740621e-03 
>>   22 KSP Residual norm 2.946780747923e-03 
>>   23 KSP Residual norm 2.339361361128e-03 
>>   24 KSP Residual norm 1.815072489282e-03 
>>   25 KSP Residual norm 1.408814185342e-03 
>>   26 KSP Residual norm 1.063795714320e-03 
>>   27 KSP Residual norm 7.828540233117e-04 
>>   28 KSP Residual norm 5.683910750067e-04 
>>   29 KSP Residual norm 4.131151010250e-04 
>>   30 KSP Residual norm 3.065608221019e-04 
>>   31 KSP Residual norm 2.634114273459e-04 
>>   32 KSP Residual norm 2.198180137626e-04 
>>   33 KSP Residual norm 1.748956510799e-04 
>>   34 KSP Residual norm 1.317539710010e-04 
>>   35 KSP Residual norm 9.790121566055e-05 
>>   36 KSP Residual norm 7.465935386094e-05 
>>   37 KSP Residual norm 5.689506626052e-05 
>>   38 KSP Residual norm 4.413136619126e-05 
>>   39 KSP Residual norm 3.512194236402e-05 
>>   40 KSP Residual norm 2.877755408287e-05 
>>   41 KSP Residual norm 2.340080556431e-05 
>>   42 KSP Residual norm 1.904544450345e-05 
>>   43 KSP Residual norm 1.504723478235e-05 
>>   44 KSP Residual norm 1.141381950576e-05 
>>   45 KSP Residual norm 8.206151384599e-06 
>>   46 KSP Residual norm 5.911426091276e-06 
>>   47 KSP Residual norm 4.233669089283e-06 
>>   48 KSP Residual norm 2.898052944223e-06 
>>   49 KSP Residual norm 2.023556779973e-06 
>>   50 KSP Residual norm 1.459108043935e-06 
>>   51 KSP Residual norm 1.097335545865e-06 
>>   52 KSP Residual norm 8.440457332262e-07 
>>   53 KSP Residual norm 6.705616854004e-07 
>>   54 KSP Residual norm 5.404888680234e-07 
>>   55 KSP Residual norm 4.391368084979e-07 
>>   56 KSP Residual norm 3.697063014621e-07 
>>   57 KSP Residual norm 3.021772094146e-07 
>>   58 KSP Residual norm 2.479354520792e-07 
>>   59 KSP Residual norm 2.013077841968e-07 
>>   60 KSP Residual norm 1.553159612793e-07 
>>   61 KSP Residual norm 1.400784224898e-07 
>>   62 KSP Residual norm 9.707453662195e-08 
>>   63 KSP Residual norm 7.263173080146e-08 
>>   64 KSP Residual norm 5.593723572132e-08 
>>   65 KSP Residual norm 4.448788809586e-08 
>>   66 KSP Residual norm 3.613992590778e-08 
>>   67 KSP Residual norm 2.946099051876e-08 
>>   68 KSP Residual norm 2.408053564170e-08 
>>   69 KSP Residual norm 1.945257374856e-08 
>>   70 KSP Residual norm 1.572494535110e-08 
>> 
>> 
>> KSP Object: 4 MPI processes
>>  type: gmres
>>    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>    happy breakdown tolerance 1e-30
>>  maximum iterations=10000, initial guess is zero
>>  tolerances:  relative=1e-08, absolute=1e-50, divergence=10000.
>>  left preconditioning
>>  using PRECONDITIONED norm type for convergence test
>> PC Object: 4 MPI processes
>>  type: gamg
>>    type is MULTIPLICATIVE, levels=6 cycles=v
>>      Cycles per PCApply=1
>>      Using externally compute Galerkin coarse grid matrices
>>      GAMG specific options
>>        Threshold for dropping small values in graph on each level =   0.   0.   0.   0.  
>>        Threshold scaling factor for each level not specified = 1.
>>        AGG specific options
>>          Symmetric graph false
>>          Number of levels to square graph 1
>>          Number smoothing steps 1
>>  Coarse grid solver -- level -------------------------------
>>    KSP Object: (mg_coarse_) 4 MPI processes
>>      type: preonly
>>      maximum iterations=10000, initial guess is zero
>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>      left preconditioning
>>      using NONE norm type for convergence test
>>    PC Object: (mg_coarse_) 4 MPI processes
>>      type: bjacobi
>>        number of blocks = 4
>>        Local solve is same for all blocks, in the following KSP and PC objects:
>>      KSP Object: (mg_coarse_sub_) 1 MPI processes
>>        type: preonly
>>        maximum iterations=1, initial guess is zero
>>        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>        left preconditioning
>>        using NONE norm type for convergence test
>>      PC Object: (mg_coarse_sub_) 1 MPI processes
>>        type: lu
>>          out-of-place factorization
>>          tolerance for zero pivot 2.22045e-14
>>          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
>>          matrix ordering: nd
>>          factor fill ratio given 5., needed 1.
>>            Factored matrix follows:
>>              Mat Object: 1 MPI processes
>>                type: seqaij
>>                rows=6, cols=6, bs=6
>>                package used to perform factorization: petsc
>>                total: nonzeros=36, allocated nonzeros=36
>>                total number of mallocs used during MatSetValues calls =0
>>                  using I-node routines: found 2 nodes, limit used is 5
>>        linear system matrix = precond matrix:
>>        Mat Object: 1 MPI processes
>>          type: seqaij
>>          rows=6, cols=6, bs=6
>>          total: nonzeros=36, allocated nonzeros=36
>>          total number of mallocs used during MatSetValues calls =0
>>            using I-node routines: found 2 nodes, limit used is 5
>>      linear system matrix = precond matrix:
>>      Mat Object: 4 MPI processes
>>        type: mpiaij
>>        rows=6, cols=6, bs=6
>>        total: nonzeros=36, allocated nonzeros=36
>>        total number of mallocs used during MatSetValues calls =0
>>          using nonscalable MatPtAP() implementation
>>          using I-node (on process 0) routines: found 2 nodes, limit used is 5
>>  Down solver (pre-smoother) on level 1 -------------------------------
>>    KSP Object: (mg_levels_1_) 4 MPI processes
>>      type: chebyshev
>>        eigenvalue estimates used:  min = 0.099971, max = 1.09968
>>        eigenvalues estimate via gmres min 0.154032, max 0.99971
>>        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
>>        KSP Object: (mg_levels_1_esteig_) 4 MPI processes
>>          type: gmres
>>            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>            happy breakdown tolerance 1e-30
>>          maximum iterations=10, initial guess is zero
>>          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>          left preconditioning
>>          using PRECONDITIONED norm type for convergence test
>>        estimating eigenvalues using noisy right hand side
>>      maximum iterations=2, nonzero initial guess
>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>      left preconditioning
>>      using NONE norm type for convergence test
>>    PC Object: (mg_levels_1_) 4 MPI processes
>>      type: sor
>>        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>      linear system matrix = precond matrix:
>>      Mat Object: 4 MPI processes
>>        type: mpiaij
>>        rows=54, cols=54, bs=6
>>        total: nonzeros=2916, allocated nonzeros=2916
>>        total number of mallocs used during MatSetValues calls =0
>>          using I-node (on process 0) routines: found 11 nodes, limit used is 5
>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>  Down solver (pre-smoother) on level 2 -------------------------------
>>    KSP Object: (mg_levels_2_) 4 MPI processes
>>      type: chebyshev
>>        eigenvalue estimates used:  min = 0.171388, max = 1.88526
>>        eigenvalues estimate via gmres min 0.0717873, max 1.71388
>>        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
>>        KSP Object: (mg_levels_2_esteig_) 4 MPI processes
>>          type: gmres
>>            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>            happy breakdown tolerance 1e-30
>>          maximum iterations=10, initial guess is zero
>>          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>          left preconditioning
>>          using PRECONDITIONED norm type for convergence test
>>        estimating eigenvalues using noisy right hand side
>>      maximum iterations=2, nonzero initial guess
>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>      left preconditioning
>>      using NONE norm type for convergence test
>>    PC Object: (mg_levels_2_) 4 MPI processes
>>      type: sor
>>        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>      linear system matrix = precond matrix:
>>      Mat Object: 4 MPI processes
>>        type: mpiaij
>>        rows=642, cols=642, bs=6
>>        total: nonzeros=99468, allocated nonzeros=99468
>>        total number of mallocs used during MatSetValues calls =0
>>          using nonscalable MatPtAP() implementation
>>          using I-node (on process 0) routines: found 47 nodes, limit used is 5
>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>  Down solver (pre-smoother) on level 3 -------------------------------
>>    KSP Object: (mg_levels_3_) 4 MPI processes
>>      type: chebyshev
>>        eigenvalue estimates used:  min = 0.164216, max = 1.80637
>>        eigenvalues estimate via gmres min 0.0376323, max 1.64216
>>        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
>>        KSP Object: (mg_levels_3_esteig_) 4 MPI processes
>>          type: gmres
>>            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>            happy breakdown tolerance 1e-30
>>          maximum iterations=10, initial guess is zero
>>          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>          left preconditioning
>>          using PRECONDITIONED norm type for convergence test
>>        estimating eigenvalues using noisy right hand side
>>      maximum iterations=2, nonzero initial guess
>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>      left preconditioning
>>      using NONE norm type for convergence test
>>    PC Object: (mg_levels_3_) 4 MPI processes
>>      type: sor
>>        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>      linear system matrix = precond matrix:
>>      Mat Object: 4 MPI processes
>>        type: mpiaij
>>        rows=6726, cols=6726, bs=6
>>        total: nonzeros=941796, allocated nonzeros=941796
>>        total number of mallocs used during MatSetValues calls =0
>>          using nonscalable MatPtAP() implementation
>>          using I-node (on process 0) routines: found 552 nodes, limit used is 5
>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>  Down solver (pre-smoother) on level 4 -------------------------------
>>    KSP Object: (mg_levels_4_) 4 MPI processes
>>      type: chebyshev
>>        eigenvalue estimates used:  min = 0.163283, max = 1.79611
>>        eigenvalues estimate via gmres min 0.0350306, max 1.63283
>>        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
>>        KSP Object: (mg_levels_4_esteig_) 4 MPI processes
>>          type: gmres
>>            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>            happy breakdown tolerance 1e-30
>>          maximum iterations=10, initial guess is zero
>>          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>          left preconditioning
>>          using PRECONDITIONED norm type for convergence test
>>        estimating eigenvalues using noisy right hand side
>>      maximum iterations=2, nonzero initial guess
>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>      left preconditioning
>>      using NONE norm type for convergence test
>>    PC Object: (mg_levels_4_) 4 MPI processes
>>      type: sor
>>        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>      linear system matrix = precond matrix:
>>      Mat Object: 4 MPI processes
>>        type: mpiaij
>>        rows=41022, cols=41022, bs=6
>>        total: nonzeros=2852316, allocated nonzeros=2852316
>>        total number of mallocs used during MatSetValues calls =0
>>          using nonscalable MatPtAP() implementation
>>          using I-node (on process 0) routines: found 3432 nodes, limit used is 5
>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>  Down solver (pre-smoother) on level 5 -------------------------------
>>    KSP Object: (mg_levels_5_) 4 MPI processes
>>      type: chebyshev
>>        eigenvalue estimates used:  min = 0.157236, max = 1.7296
>>        eigenvalues estimate via gmres min 0.0317897, max 1.57236
>>        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
>>        KSP Object: (mg_levels_5_esteig_) 4 MPI processes
>>          type: gmres
>>            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>            happy breakdown tolerance 1e-30
>>          maximum iterations=10, initial guess is zero
>>          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>          left preconditioning
>>          using PRECONDITIONED norm type for convergence test
>>        estimating eigenvalues using noisy right hand side
>>      maximum iterations=2, nonzero initial guess
>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>      left preconditioning
>>      using NONE norm type for convergence test
>>    PC Object: (mg_levels_5_) 4 MPI processes
>>      type: sor
>>        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>      linear system matrix = precond matrix:
>>      Mat Object: () 4 MPI processes
>>        type: mpiaij
>>        rows=543606, cols=543606, bs=6
>>        total: nonzeros=29224836, allocated nonzeros=29302596
>>        total number of mallocs used during MatSetValues calls =0
>>          has attached near null space
>>          using I-node (on process 0) routines: found 45644 nodes, limit used is 5
>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>  linear system matrix = precond matrix:
>>  Mat Object: () 4 MPI processes
>>    type: mpiaij
>>    rows=543606, cols=543606, bs=6
>>    total: nonzeros=29224836, allocated nonzeros=29302596
>>    total number of mallocs used during MatSetValues calls =0
>>      has attached near null space
>>      using I-node (on process 0) routines: found 45644 nodes, limit used is 5
