[petsc-users] Convergence of AMG

Manav Bhatia bhatiamanav at gmail.com
Mon Oct 29 10:55:01 CDT 2018


Hi Mark, 

  Here are some results (still running with 4 cpus): 


With the default options, convergence is slow:
-pc_type gamg -ksp_view --node-major-dofs -mat_block_size 6 -ksp_rtol 1.e-8
    0 KSP Residual norm 1.696304497263e+00 
    1 KSP Residual norm 1.120485505766e+00 
    2 KSP Residual norm 8.324222302220e-01 
    3 KSP Residual norm 6.477349533922e-01 
    4 KSP Residual norm 5.080936471094e-01 
    5 KSP Residual norm 4.051099646451e-01 
    6 KSP Residual norm 3.260432664484e-01 
    7 KSP Residual norm 2.560483838000e-01 
    8 KSP Residual norm 2.029943986006e-01 
    9 KSP Residual norm 1.560985741519e-01 
   10 KSP Residual norm 1.163720702074e-01 
   11 KSP Residual norm 8.488411084998e-02 
   12 KSP Residual norm 5.888041728730e-02 
   13 KSP Residual norm 4.027792209782e-02 
   14 KSP Residual norm 2.819048087173e-02 
   15 KSP Residual norm 1.904674196882e-02 
   16 KSP Residual norm 1.289302447775e-02 
   17 KSP Residual norm 9.162203296105e-03 
   18 KSP Residual norm 7.016781679348e-03 
   19 KSP Residual norm 5.399170865246e-03 
   20 KSP Residual norm 4.254385887447e-03 
   21 KSP Residual norm 3.530831740603e-03 
   22 KSP Residual norm 2.946780747904e-03 
   23 KSP Residual norm 2.339361361103e-03 
   24 KSP Residual norm 1.815072489251e-03 
   25 KSP Residual norm 1.408814185309e-03 
   26 KSP Residual norm 1.063795714289e-03 
   27 KSP Residual norm 7.828540232832e-04 
   28 KSP Residual norm 5.683910749829e-04 
   29 KSP Residual norm 4.131151010060e-04 
   30 KSP Residual norm 3.065608169121e-04 
   31 KSP Residual norm 2.634114212906e-04 
   32 KSP Residual norm 2.198180088890e-04 
   33 KSP Residual norm 1.748956465770e-04 
   34 KSP Residual norm 1.317539664398e-04 
   35 KSP Residual norm 9.790121191782e-05 
   36 KSP Residual norm 7.465935116526e-05 
   37 KSP Residual norm 5.689506439547e-05 
   38 KSP Residual norm 4.413136465026e-05 
   39 KSP Residual norm 3.512194107520e-05 
   40 KSP Residual norm 2.877755304955e-05 
   41 KSP Residual norm 2.340080488088e-05 
   42 KSP Residual norm 1.904544419876e-05 
   43 KSP Residual norm 1.504723479640e-05 
   44 KSP Residual norm 1.141381974873e-05 
   45 KSP Residual norm 8.206151668656e-06 
   46 KSP Residual norm 5.911426282047e-06 
   47 KSP Residual norm 4.233669179704e-06 
   48 KSP Residual norm 2.898052992624e-06 
   49 KSP Residual norm 2.023556817205e-06 
   50 KSP Residual norm 1.459108072651e-06 
   51 KSP Residual norm 1.097335572735e-06 
   52 KSP Residual norm 8.440457530634e-07 
   53 KSP Residual norm 6.705616952049e-07 
   54 KSP Residual norm 5.404888697309e-07 
   55 KSP Residual norm 4.391368066975e-07 
   56 KSP Residual norm 3.697063001345e-07 
   57 KSP Residual norm 3.021772076055e-07 
   58 KSP Residual norm 2.479354498371e-07 
   59 KSP Residual norm 2.013077815815e-07 
   60 KSP Residual norm 1.553178802459e-07 
   61 KSP Residual norm 1.400798352748e-07 
   62 KSP Residual norm 9.707215027303e-08 
   63 KSP Residual norm 7.262869538195e-08 
   64 KSP Residual norm 5.593398375649e-08 
   65 KSP Residual norm 4.448475420166e-08 
   66 KSP Residual norm 3.613734113472e-08 
   67 KSP Residual norm 2.945927212825e-08 
   68 KSP Residual norm 2.407949632330e-08 
   69 KSP Residual norm 1.945210209951e-08 
   70 KSP Residual norm 1.572500364747e-08 
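
(For reference, the command-line setup above has a direct programmatic counterpart. The following is a minimal sketch of such a driver, assuming the matrix A was created with block size 6, e.g. via -mat_block_size 6; it is not the exact application code.)

  #include <petscksp.h>

  /* Minimal sketch of the baseline GAMG solve driven from code; A is assumed
     to have been created with block size 6 (the -mat_block_size 6 above).   */
  PetscErrorCode solve_with_gamg(Mat A, Vec b, Vec x)
  {
    KSP            ksp;
    PC             pc;
    PetscErrorCode ierr;

    ierr = KSPCreate(PetscObjectComm((PetscObject)A), &ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);                 /* -pc_type gamg   */
    ierr = KSPSetTolerances(ksp, 1.e-8, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr); /* -ksp_rtol 1.e-8 */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);                /* picks up -ksp_monitor, -ksp_view, ... */
    ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    return 0;
  }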



Adding “-pc_mg_levels 2” gives significantly improved performance: 
-pc_type gamg -ksp_view --node-major-dofs -mat_block_size 6 -ksp_rtol 1.e-8 -pc_mg_levels 2
    0 KSP Residual norm 2.980714123240e+01 
    1 KSP Residual norm 1.213007759874e+00 
    2 KSP Residual norm 1.543794101059e-01 
    3 KSP Residual norm 3.522492126064e-02 
    4 KSP Residual norm 7.453170557576e-03 
    5 KSP Residual norm 1.828043480467e-03 
    6 KSP Residual norm 4.779250859781e-04 
    7 KSP Residual norm 1.099093020733e-04 
    8 KSP Residual norm 2.806438906374e-05 
    9 KSP Residual norm 7.416077106013e-06 
   10 KSP Residual norm 1.669576855922e-06 
   11 KSP Residual norm 6.138913423983e-07 
   12 KSP Residual norm 3.914982893935e-07 
   13 KSP Residual norm 2.491167256452e-07 
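
(The two-level hierarchy requested with -pc_mg_levels 2 can also be asked for in code. A minimal sketch, assuming PCGAMGSetNlevels is available in the PETSc version in use:)

  #include <petscksp.h>

  /* Sketch: cap the number of levels GAMG is allowed to build. */
  PetscErrorCode limit_gamg_levels(KSP ksp)
  {
    PC             pc;
    PetscErrorCode ierr;

    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCGAMGSetNlevels(pc, 2);CHKERRQ(ierr);   /* same effect as -pc_mg_levels 2 above */
    return 0;
  }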


I did not see an option named “-mg_levels_ksp_max_it” in the “-help” output, so I added one for each level. Adding “-mg_levels_1_ksp_max_it 4 -mg_levels_2_ksp_max_it 4” gives the following convergence rate: 
-pc_type gamg -ksp_view --node-major-dofs -mat_block_size 6 -ksp_rtol 1.e-8 -pc_mg_levels 2 -mg_levels_1_ksp_max_it 4 -mg_levels_2_ksp_max_it 4
    0 KSP Residual norm 2.980759132912e+01 
    1 KSP Residual norm 1.268404746671e+00 
    2 KSP Residual norm 1.420311012425e-01 
    3 KSP Residual norm 3.026536678757e-02 
    4 KSP Residual norm 6.511170312990e-03 
    5 KSP Residual norm 1.539620841789e-03 
    6 KSP Residual norm 3.655528499924e-04 
    7 KSP Residual norm 8.111524453983e-05 
    8 KSP Residual norm 1.995956470676e-05 
    9 KSP Residual norm 4.397662980841e-06 
   10 KSP Residual norm 9.636956929342e-07 
   11 KSP Residual norm 3.013384202116e-07 
   12 KSP Residual norm 1.867699579369e-07 
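
(Even if the un-numbered form is not listed by -help before the level solvers exist, -mg_levels_ksp_max_it is accepted and applies to all the level smoothers, while the numbered forms target one level each. A minimal sketch of seeding these options from code, before KSPSetFromOptions is called:)

  #include <petscsys.h>

  /* Sketch: put the smoother options into the options database programmatically. */
  PetscErrorCode set_smoother_iterations(void)
  {
    PetscErrorCode ierr;

    ierr = PetscOptionsSetValue(NULL, "-mg_levels_ksp_max_it", "4");CHKERRQ(ierr);   /* all levels   */
    ierr = PetscOptionsSetValue(NULL, "-mg_levels_1_ksp_max_it", "4");CHKERRQ(ierr); /* single level */
    return 0;
  }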


Adding “-pc_gamg_threshold 0.04” gives: 
-pc_type gamg -ksp_view --node-major-dofs -mat_block_size 6 -ksp_rtol 1.e-8 -pc_mg_levels 2 -mg_levels_1_ksp_max_it 4 -mg_levels_2_ksp_max_it 4 -pc_gamg_threshold 0.04
    0 KSP Residual norm 2.980759132913e+01 
    1 KSP Residual norm 1.268404746942e+00 
    2 KSP Residual norm 1.420311012570e-01 
    3 KSP Residual norm 3.026536679076e-02 
    4 KSP Residual norm 6.511170313879e-03 
    5 KSP Residual norm 1.539620841827e-03 
    6 KSP Residual norm 3.655528500623e-04 
    7 KSP Residual norm 8.111524453279e-05 
    8 KSP Residual norm 1.995956474349e-05 
    9 KSP Residual norm 4.397662966260e-06 
   10 KSP Residual norm 9.636957102472e-07 
   11 KSP Residual norm 3.013383993102e-07 
   12 KSP Residual norm 1.867700181613e-07 
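
(The iteration counts are essentially unchanged by the threshold here. For completeness, a minimal sketch of the programmatic counterpart of -pc_gamg_threshold 0.04, using the array form of PCGAMGSetThreshold:)

  #include <petscksp.h>

  /* Sketch: set the GAMG drop threshold for the first coarsening. */
  PetscErrorCode set_gamg_threshold(PC pc)
  {
    PetscReal      threshold[1] = {0.04};
    PetscErrorCode ierr;

    ierr = PCGAMGSetThreshold(pc, threshold, 1);CHKERRQ(ierr);   /* -pc_gamg_threshold 0.04 */
    return 0;
  }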


Using just the “-pc_gamg_square_graph 0” option: 
-pc_type gamg -ksp_view --node-major-dofs -mat_block_size 6 -ksp_rtol 1.e-8 -pc_gamg_square_graph 0
    0 KSP Residual norm 2.064808848243e+00 
    1 KSP Residual norm 1.436709062429e+00 
    2 KSP Residual norm 1.043690492056e+00 
    3 KSP Residual norm 7.636589301482e-01 
    4 KSP Residual norm 5.733849171144e-01 
    5 KSP Residual norm 4.416033588916e-01 
    6 KSP Residual norm 3.390846779861e-01 
    7 KSP Residual norm 2.567469096297e-01 
    8 KSP Residual norm 1.874940743028e-01 
    9 KSP Residual norm 1.322422119221e-01 
   10 KSP Residual norm 9.157000749016e-02 
   11 KSP Residual norm 6.376782657530e-02 
   12 KSP Residual norm 4.456316146538e-02 
   13 KSP Residual norm 3.101613919753e-02 
   14 KSP Residual norm 2.167127331495e-02 
   15 KSP Residual norm 1.498469528896e-02 
   16 KSP Residual norm 1.075794635819e-02 
   17 KSP Residual norm 7.764685216272e-03 
   18 KSP Residual norm 5.435228207429e-03 
   19 KSP Residual norm 3.942376675316e-03 
   20 KSP Residual norm 2.846234513499e-03 
   21 KSP Residual norm 1.914323559680e-03 
   22 KSP Residual norm 1.332049518265e-03 
   23 KSP Residual norm 9.414825222665e-04 
   24 KSP Residual norm 6.600086167534e-04 
   25 KSP Residual norm 4.679306216706e-04 
   26 KSP Residual norm 3.264136607741e-04 
   27 KSP Residual norm 2.316366100549e-04 
   28 KSP Residual norm 1.642594683886e-04 
   29 KSP Residual norm 1.103783108795e-04 
   30 KSP Residual norm 7.740619320447e-05 
   31 KSP Residual norm 6.389704142174e-05 
   32 KSP Residual norm 5.006485093406e-05 
   33 KSP Residual norm 3.758456789529e-05 
   34 KSP Residual norm 2.830743556147e-05 
   35 KSP Residual norm 2.062379654606e-05 
   36 KSP Residual norm 1.490349770670e-05 
   37 KSP Residual norm 1.051767740994e-05 
   38 KSP Residual norm 7.040814709680e-06 
   39 KSP Residual norm 4.931079567116e-06 
   40 KSP Residual norm 3.385386183658e-06 
   41 KSP Residual norm 2.274379328203e-06 
   42 KSP Residual norm 1.576979495342e-06 
   43 KSP Residual norm 1.134465108080e-06 
   44 KSP Residual norm 8.164007819038e-07 
   45 KSP Residual norm 5.697298265561e-07 
   46 KSP Residual norm 4.079302286082e-07 
   47 KSP Residual norm 3.032840167758e-07 
   48 KSP Residual norm 2.118663896145e-07 
   49 KSP Residual norm 1.531268272774e-07 
   50 KSP Residual norm 1.155662360224e-07 
   51 KSP Residual norm 8.763548545594e-08 
   52 KSP Residual norm 6.411338426040e-08 
   53 KSP Residual norm 4.720820815019e-08 
   54 KSP Residual norm 3.694420088192e-08 
   55 KSP Residual norm 2.924088208699e-08 
   56 KSP Residual norm 2.365173325198e-08 
   57 KSP Residual norm 1.891630286373e-08 
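
(A minimal sketch of the programmatic counterpart of -pc_gamg_square_graph 0, assuming PCGAMGSetSquareGraph is available in this PETSc version; newer releases expose the same control under a different name:)

  #include <petscksp.h>

  /* Sketch: do not square the graph on any level before aggregating. */
  PetscErrorCode disable_graph_squaring(PC pc)
  {
    PetscErrorCode ierr;

    ierr = PCGAMGSetSquareGraph(pc, 0);CHKERRQ(ierr);   /* -pc_gamg_square_graph 0 */
    return 0;
  }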



Without the threshold and max_it modifications, and adding “-pc_gamg_square_graph 0”: 

-pc_type gamg -ksp_view --node-major-dofs -mat_block_size 6 -ksp_rtol 1.e-8 -pc_mg_levels 2 -pc_gamg_square_graph 0
    0 KSP Residual norm 4.658705362181e+01 
    1 KSP Residual norm 6.723806072355e-01 
    2 KSP Residual norm 4.063455422565e-02 
    3 KSP Residual norm 2.311496772987e-03 
    4 KSP Residual norm 2.337388101209e-04 
    5 KSP Residual norm 2.541042271307e-05 
    6 KSP Residual norm 5.461281412935e-06 
    7 KSP Residual norm 2.718337804133e-06 
    8 KSP Residual norm 1.223645122249e-06 
    9 KSP Residual norm 7.877002862516e-07 
   10 KSP Residual norm 4.742159201655e-07 
   11 KSP Residual norm 2.849093170324e-07 


So, it appears that the number of MG levels has the most significant impact on the convergence rate. 

What would be the reason for this? Is there a general recommendation on the number of MG levels (the default options gave 4)? I suspect this is problem-dependent. 
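
(One way to judge this is to look at what GAMG actually built. A minimal diagnostic sketch that, after KSPSetUp, walks the hierarchy and prints the operator size on each level:)

  #include <petscksp.h>

  /* Sketch: report the size of the operator on each GAMG/MG level. */
  PetscErrorCode report_gamg_hierarchy(KSP ksp)
  {
    PC             pc;
    KSP            smoother;
    Mat            A;
    PetscInt       nlevels, l, m, n;
    PetscErrorCode ierr;

    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCMGGetLevels(pc, &nlevels);CHKERRQ(ierr);
    for (l = 0; l < nlevels; l++) {
      ierr = PCMGGetSmoother(pc, l, &smoother);CHKERRQ(ierr);   /* level 0 is the coarse solve */
      ierr = KSPGetOperators(smoother, &A, NULL);CHKERRQ(ierr);
      ierr = MatGetSize(A, &m, &n);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD, "level %D: %D x %D\n", l, m, n);CHKERRQ(ierr);
    }
    return 0;
  }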

Regards,
Manav

> On Oct 29, 2018, at 8:28 AM, Mark Adams <mfadams at lbl.gov> wrote:
> 
> 
> I would recommend using '-pc_mg_levels 2' and checking that that gives you two levels. I would also run this on one processor just to start. 
> 
> Use -mg_levels_ksp_max_it 4.  And '-pc_gamg_threshold 0.04'
> 
> These parameters are meant to increase the convergence rate at all costs. Once we get our best rate we should be able to back off some of this without much degradation, or play around with the parameters to optimize run time. 
> 
> 
> On Sun, Oct 28, 2018 at 5:13 PM Manav Bhatia <bhatiamanav at gmail.com <mailto:bhatiamanav at gmail.com>> wrote:
> Var: 0,…,5  are the 6 variables that I am solving for: u, v, w, theta_x, theta_y, theta_z. 
> 
> The norms identified in my email are the L2 norms of all dofs corresponding to each variable in the solution vector. So, var: 0: u: norm is the L2 norm of the dofs for u only, and so on. 
> 
> I expect u, v, theta_z to be zero for the solution, which ends up being the case. 
> 
> If I plot the solution, it looks sensible, but the reduction of the KSP norm is slow. 
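
(For reference, with a solution vector of block size 6 these per-variable norms can be computed with VecStrideNorm; a minimal sketch, not necessarily the code used to produce the numbers quoted above:)

  #include <petscvec.h>

  /* Sketch: print the L2 norm of each of the 6 components of a block-size-6 vector. */
  PetscErrorCode print_component_norms(Vec x)
  {
    PetscInt       comp;
    PetscReal      nrm;
    PetscErrorCode ierr;

    for (comp = 0; comp < 6; comp++) {   /* 0: u, 1: v, 2: w, 3-5: rotations */
      ierr = VecStrideNorm(x, comp, NORM_2, &nrm);CHKERRQ(ierr);
      ierr = PetscPrintf(PETSC_COMM_WORLD, "var %D: norm %g\n", comp, (double)nrm);CHKERRQ(ierr);
    }
    return 0;
  }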
> 
> 
> Thanks,
> Manav
> 
>> On Oct 28, 2018, at 3:55 PM, Smith, Barry F. <bsmith at mcs.anl.gov <mailto:bsmith at mcs.anl.gov>> wrote:
>> 
>> 
>> 
>>> On Oct 28, 2018, at 12:16 PM, Manav Bhatia <bhatiamanav at gmail.com <mailto:bhatiamanav at gmail.com>> wrote:
>>> 
>>> Hi, 
>>> 
>>>   I am attempting to solve a Mindlin plate bending problem with the AMG solver in PETSc. This test case uses a mesh of 300x300 elements and 543,606 dofs. 
>>> 
>>>   The discretization includes 6 variables (u, v, w, tx, ty, tz), but only three are relevant for plate bending (w, tx, ty). 
>>> 
>>>   I am calling the solver with the following options: 
>>> 
>>> -pc_type gamg -pc_gamg_threshold 0. --node-major-dofs -mat_block_size 6 -ksp_rtol 1.e-8 -ksp_monitor -ksp_converged_reason -ksp_view 
>>> 
>>>  The convergence behavior is shown below, along with the ksp_view information. Based on notes in the manual, this seems to be a subpar convergence rate. At the end of the solution the norm of each variable is: 
>>> 
>>> var: 0: u  : norm: 5.505909e-18
>>> var: 1: v  : norm: 7.639640e-18
>>> var: 2: w : norm: 3.901464e-03
>>> var: 3: tx : norm: 4.403576e-02
>>> var: 4: ty : norm: 4.403576e-02
>>> var: 5: tz : norm: 1.148409e-16
>> 
>>   What do you mean by var: 2: w : norm etc? Is this the norm of the error for that variable, the norm of the residual, something else? How exactly are you calculating it?
>> 
>>    Thanks
>> 
>> 
>>   Barry
>> 
>>> 
>>>  I tried different values of -ksp_rtol from 1e-1 to 1e-8 and this does not make a lot of difference in the norms of (w, tx, ty). 
>>> 
>>>  I do provide the solver with 6 rigid-body vectors to approximate the null-space of the problem. Without these the solver shows very poor convergence. 
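
(A minimal sketch of how six user-built rigid-body vectors can be attached as the near null space that GAMG uses; 'rbm' is a hypothetical array of assembled, orthonormalized Vecs, not the application's actual data structure:)

  #include <petscmat.h>

  /* Sketch: attach user-built rigid-body modes as the near null space for GAMG.
     The vectors in rbm[] are assumed to be orthonormal.                        */
  PetscErrorCode attach_rigid_body_modes(Mat A, Vec rbm[6])
  {
    MatNullSpace   nsp;
    PetscErrorCode ierr;

    ierr = MatNullSpaceCreate(PetscObjectComm((PetscObject)A), PETSC_FALSE, 6, rbm, &nsp);CHKERRQ(ierr);
    ierr = MatSetNearNullSpace(A, nsp);CHKERRQ(ierr);   /* GAMG uses this to build interpolation */
    ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);
    return 0;
  }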
>>> 
>>>  I would appreciate advice on possible strategies to improve this behavior. 
>>> 
>>> Thanks,
>>> Manav 
>>> 
>>>    0 KSP Residual norm 1.696304497261e+00 
>>>    1 KSP Residual norm 1.120485505777e+00 
>>>    2 KSP Residual norm 8.324222302402e-01 
>>>    3 KSP Residual norm 6.477349534115e-01 
>>>    4 KSP Residual norm 5.080936471292e-01 
>>>    5 KSP Residual norm 4.051099646638e-01 
>>>    6 KSP Residual norm 3.260432664653e-01 
>>>    7 KSP Residual norm 2.560483838143e-01 
>>>    8 KSP Residual norm 2.029943986124e-01 
>>>    9 KSP Residual norm 1.560985741610e-01 
>>>   10 KSP Residual norm 1.163720702140e-01 
>>>   11 KSP Residual norm 8.488411085459e-02 
>>>   12 KSP Residual norm 5.888041729034e-02 
>>>   13 KSP Residual norm 4.027792209980e-02 
>>>   14 KSP Residual norm 2.819048087304e-02 
>>>   15 KSP Residual norm 1.904674196962e-02 
>>>   16 KSP Residual norm 1.289302447822e-02 
>>>   17 KSP Residual norm 9.162203296376e-03 
>>>   18 KSP Residual norm 7.016781679507e-03 
>>>   19 KSP Residual norm 5.399170865328e-03 
>>>   20 KSP Residual norm 4.254385887482e-03 
>>>   21 KSP Residual norm 3.530831740621e-03 
>>>   22 KSP Residual norm 2.946780747923e-03 
>>>   23 KSP Residual norm 2.339361361128e-03 
>>>   24 KSP Residual norm 1.815072489282e-03 
>>>   25 KSP Residual norm 1.408814185342e-03 
>>>   26 KSP Residual norm 1.063795714320e-03 
>>>   27 KSP Residual norm 7.828540233117e-04 
>>>   28 KSP Residual norm 5.683910750067e-04 
>>>   29 KSP Residual norm 4.131151010250e-04 
>>>   30 KSP Residual norm 3.065608221019e-04 
>>>   31 KSP Residual norm 2.634114273459e-04 
>>>   32 KSP Residual norm 2.198180137626e-04 
>>>   33 KSP Residual norm 1.748956510799e-04 
>>>   34 KSP Residual norm 1.317539710010e-04 
>>>   35 KSP Residual norm 9.790121566055e-05 
>>>   36 KSP Residual norm 7.465935386094e-05 
>>>   37 KSP Residual norm 5.689506626052e-05 
>>>   38 KSP Residual norm 4.413136619126e-05 
>>>   39 KSP Residual norm 3.512194236402e-05 
>>>   40 KSP Residual norm 2.877755408287e-05 
>>>   41 KSP Residual norm 2.340080556431e-05 
>>>   42 KSP Residual norm 1.904544450345e-05 
>>>   43 KSP Residual norm 1.504723478235e-05 
>>>   44 KSP Residual norm 1.141381950576e-05 
>>>   45 KSP Residual norm 8.206151384599e-06 
>>>   46 KSP Residual norm 5.911426091276e-06 
>>>   47 KSP Residual norm 4.233669089283e-06 
>>>   48 KSP Residual norm 2.898052944223e-06 
>>>   49 KSP Residual norm 2.023556779973e-06 
>>>   50 KSP Residual norm 1.459108043935e-06 
>>>   51 KSP Residual norm 1.097335545865e-06 
>>>   52 KSP Residual norm 8.440457332262e-07 
>>>   53 KSP Residual norm 6.705616854004e-07 
>>>   54 KSP Residual norm 5.404888680234e-07 
>>>   55 KSP Residual norm 4.391368084979e-07 
>>>   56 KSP Residual norm 3.697063014621e-07 
>>>   57 KSP Residual norm 3.021772094146e-07 
>>>   58 KSP Residual norm 2.479354520792e-07 
>>>   59 KSP Residual norm 2.013077841968e-07 
>>>   60 KSP Residual norm 1.553159612793e-07 
>>>   61 KSP Residual norm 1.400784224898e-07 
>>>   62 KSP Residual norm 9.707453662195e-08 
>>>   63 KSP Residual norm 7.263173080146e-08 
>>>   64 KSP Residual norm 5.593723572132e-08 
>>>   65 KSP Residual norm 4.448788809586e-08 
>>>   66 KSP Residual norm 3.613992590778e-08 
>>>   67 KSP Residual norm 2.946099051876e-08 
>>>   68 KSP Residual norm 2.408053564170e-08 
>>>   69 KSP Residual norm 1.945257374856e-08 
>>>   70 KSP Residual norm 1.572494535110e-08 
>>> 
>>> 
>>> KSP Object: 4 MPI processes
>>>  type: gmres
>>>    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>>    happy breakdown tolerance 1e-30
>>>  maximum iterations=10000, initial guess is zero
>>>  tolerances:  relative=1e-08, absolute=1e-50, divergence=10000.
>>>  left preconditioning
>>>  using PRECONDITIONED norm type for convergence test
>>> PC Object: 4 MPI processes
>>>  type: gamg
>>>    type is MULTIPLICATIVE, levels=6 cycles=v
>>>      Cycles per PCApply=1
>>>      Using externally compute Galerkin coarse grid matrices
>>>      GAMG specific options
>>>        Threshold for dropping small values in graph on each level =   0.   0.   0.   0.  
>>>        Threshold scaling factor for each level not specified = 1.
>>>        AGG specific options
>>>          Symmetric graph false
>>>          Number of levels to square graph 1
>>>          Number smoothing steps 1
>>>  Coarse grid solver -- level -------------------------------
>>>    KSP Object: (mg_coarse_) 4 MPI processes
>>>      type: preonly
>>>      maximum iterations=10000, initial guess is zero
>>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>      left preconditioning
>>>      using NONE norm type for convergence test
>>>    PC Object: (mg_coarse_) 4 MPI processes
>>>      type: bjacobi
>>>        number of blocks = 4
>>>        Local solve is same for all blocks, in the following KSP and PC objects:
>>>      KSP Object: (mg_coarse_sub_) 1 MPI processes
>>>        type: preonly
>>>        maximum iterations=1, initial guess is zero
>>>        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>        left preconditioning
>>>        using NONE norm type for convergence test
>>>      PC Object: (mg_coarse_sub_) 1 MPI processes
>>>        type: lu
>>>          out-of-place factorization
>>>          tolerance for zero pivot 2.22045e-14
>>>          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
>>>          matrix ordering: nd
>>>          factor fill ratio given 5., needed 1.
>>>            Factored matrix follows:
>>>              Mat Object: 1 MPI processes
>>>                type: seqaij
>>>                rows=6, cols=6, bs=6
>>>                package used to perform factorization: petsc
>>>                total: nonzeros=36, allocated nonzeros=36
>>>                total number of mallocs used during MatSetValues calls =0
>>>                  using I-node routines: found 2 nodes, limit used is 5
>>>        linear system matrix = precond matrix:
>>>        Mat Object: 1 MPI processes
>>>          type: seqaij
>>>          rows=6, cols=6, bs=6
>>>          total: nonzeros=36, allocated nonzeros=36
>>>          total number of mallocs used during MatSetValues calls =0
>>>            using I-node routines: found 2 nodes, limit used is 5
>>>      linear system matrix = precond matrix:
>>>      Mat Object: 4 MPI processes
>>>        type: mpiaij
>>>        rows=6, cols=6, bs=6
>>>        total: nonzeros=36, allocated nonzeros=36
>>>        total number of mallocs used during MatSetValues calls =0
>>>          using nonscalable MatPtAP() implementation
>>>          using I-node (on process 0) routines: found 2 nodes, limit used is 5
>>>  Down solver (pre-smoother) on level 1 -------------------------------
>>>    KSP Object: (mg_levels_1_) 4 MPI processes
>>>      type: chebyshev
>>>        eigenvalue estimates used:  min = 0.099971, max = 1.09968
>>>        eigenvalues estimate via gmres min 0.154032, max 0.99971
>>>        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
>>>        KSP Object: (mg_levels_1_esteig_) 4 MPI processes
>>>          type: gmres
>>>            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>>            happy breakdown tolerance 1e-30
>>>          maximum iterations=10, initial guess is zero
>>>          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>>          left preconditioning
>>>          using PRECONDITIONED norm type for convergence test
>>>        estimating eigenvalues using noisy right hand side
>>>      maximum iterations=2, nonzero initial guess
>>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>      left preconditioning
>>>      using NONE norm type for convergence test
>>>    PC Object: (mg_levels_1_) 4 MPI processes
>>>      type: sor
>>>        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>>      linear system matrix = precond matrix:
>>>      Mat Object: 4 MPI processes
>>>        type: mpiaij
>>>        rows=54, cols=54, bs=6
>>>        total: nonzeros=2916, allocated nonzeros=2916
>>>        total number of mallocs used during MatSetValues calls =0
>>>          using I-node (on process 0) routines: found 11 nodes, limit used is 5
>>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>>  Down solver (pre-smoother) on level 2 -------------------------------
>>>    KSP Object: (mg_levels_2_) 4 MPI processes
>>>      type: chebyshev
>>>        eigenvalue estimates used:  min = 0.171388, max = 1.88526
>>>        eigenvalues estimate via gmres min 0.0717873, max 1.71388
>>>        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
>>>        KSP Object: (mg_levels_2_esteig_) 4 MPI processes
>>>          type: gmres
>>>            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>>            happy breakdown tolerance 1e-30
>>>          maximum iterations=10, initial guess is zero
>>>          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>>          left preconditioning
>>>          using PRECONDITIONED norm type for convergence test
>>>        estimating eigenvalues using noisy right hand side
>>>      maximum iterations=2, nonzero initial guess
>>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>      left preconditioning
>>>      using NONE norm type for convergence test
>>>    PC Object: (mg_levels_2_) 4 MPI processes
>>>      type: sor
>>>        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>>      linear system matrix = precond matrix:
>>>      Mat Object: 4 MPI processes
>>>        type: mpiaij
>>>        rows=642, cols=642, bs=6
>>>        total: nonzeros=99468, allocated nonzeros=99468
>>>        total number of mallocs used during MatSetValues calls =0
>>>          using nonscalable MatPtAP() implementation
>>>          using I-node (on process 0) routines: found 47 nodes, limit used is 5
>>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>>  Down solver (pre-smoother) on level 3 -------------------------------
>>>    KSP Object: (mg_levels_3_) 4 MPI processes
>>>      type: chebyshev
>>>        eigenvalue estimates used:  min = 0.164216, max = 1.80637
>>>        eigenvalues estimate via gmres min 0.0376323, max 1.64216
>>>        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
>>>        KSP Object: (mg_levels_3_esteig_) 4 MPI processes
>>>          type: gmres
>>>            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>>            happy breakdown tolerance 1e-30
>>>          maximum iterations=10, initial guess is zero
>>>          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>>          left preconditioning
>>>          using PRECONDITIONED norm type for convergence test
>>>        estimating eigenvalues using noisy right hand side
>>>      maximum iterations=2, nonzero initial guess
>>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>      left preconditioning
>>>      using NONE norm type for convergence test
>>>    PC Object: (mg_levels_3_) 4 MPI processes
>>>      type: sor
>>>        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>>      linear system matrix = precond matrix:
>>>      Mat Object: 4 MPI processes
>>>        type: mpiaij
>>>        rows=6726, cols=6726, bs=6
>>>        total: nonzeros=941796, allocated nonzeros=941796
>>>        total number of mallocs used during MatSetValues calls =0
>>>          using nonscalable MatPtAP() implementation
>>>          using I-node (on process 0) routines: found 552 nodes, limit used is 5
>>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>>  Down solver (pre-smoother) on level 4 -------------------------------
>>>    KSP Object: (mg_levels_4_) 4 MPI processes
>>>      type: chebyshev
>>>        eigenvalue estimates used:  min = 0.163283, max = 1.79611
>>>        eigenvalues estimate via gmres min 0.0350306, max 1.63283
>>>        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
>>>        KSP Object: (mg_levels_4_esteig_) 4 MPI processes
>>>          type: gmres
>>>            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>>            happy breakdown tolerance 1e-30
>>>          maximum iterations=10, initial guess is zero
>>>          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>>          left preconditioning
>>>          using PRECONDITIONED norm type for convergence test
>>>        estimating eigenvalues using noisy right hand side
>>>      maximum iterations=2, nonzero initial guess
>>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>      left preconditioning
>>>      using NONE norm type for convergence test
>>>    PC Object: (mg_levels_4_) 4 MPI processes
>>>      type: sor
>>>        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>>      linear system matrix = precond matrix:
>>>      Mat Object: 4 MPI processes
>>>        type: mpiaij
>>>        rows=41022, cols=41022, bs=6
>>>        total: nonzeros=2852316, allocated nonzeros=2852316
>>>        total number of mallocs used during MatSetValues calls =0
>>>          using nonscalable MatPtAP() implementation
>>>          using I-node (on process 0) routines: found 3432 nodes, limit used is 5
>>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>>  Down solver (pre-smoother) on level 5 -------------------------------
>>>    KSP Object: (mg_levels_5_) 4 MPI processes
>>>      type: chebyshev
>>>        eigenvalue estimates used:  min = 0.157236, max = 1.7296
>>>        eigenvalues estimate via gmres min 0.0317897, max 1.57236
>>>        eigenvalues estimated using gmres with translations  [0. 0.1; 0. 1.1]
>>>        KSP Object: (mg_levels_5_esteig_) 4 MPI processes
>>>          type: gmres
>>>            restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>>>            happy breakdown tolerance 1e-30
>>>          maximum iterations=10, initial guess is zero
>>>          tolerances:  relative=1e-12, absolute=1e-50, divergence=10000.
>>>          left preconditioning
>>>          using PRECONDITIONED norm type for convergence test
>>>        estimating eigenvalues using noisy right hand side
>>>      maximum iterations=2, nonzero initial guess
>>>      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>      left preconditioning
>>>      using NONE norm type for convergence test
>>>    PC Object: (mg_levels_5_) 4 MPI processes
>>>      type: sor
>>>        type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>>      linear system matrix = precond matrix:
>>>      Mat Object: () 4 MPI processes
>>>        type: mpiaij
>>>        rows=543606, cols=543606, bs=6
>>>        total: nonzeros=29224836, allocated nonzeros=29302596
>>>        total number of mallocs used during MatSetValues calls =0
>>>          has attached near null space
>>>          using I-node (on process 0) routines: found 45644 nodes, limit used is 5
>>>  Up solver (post-smoother) same as down solver (pre-smoother)
>>>  linear system matrix = precond matrix:
>>>  Mat Object: () 4 MPI processes
>>>    type: mpiaij
>>>    rows=543606, cols=543606, bs=6
>>>    total: nonzeros=29224836, allocated nonzeros=29302596
>>>    total number of mallocs used during MatSetValues calls =0
>>>      has attached near null space
>>>      using I-node (on process 0) routines: found 45644 nodes, limit used is 5
> 
