[petsc-users] Question about Set-up of Full MG and its Output

Barry Smith bsmith at mcs.anl.gov
Wed Dec 7 18:00:42 CST 2016


   Frank,

   There is no "default" interpolation for PCMG. It is always defined depending on how you set up the solver. If you use KSP with a DM then it uses calls to the DM to generate the interpolation (for example, with DMDA it uses either piecewise bi/trilinear interpolation or piecewise constant). With GAMG it uses one defined by the algebraic multigrid algorithm.

   I think it is returning nothing because the KSP in your case has not been fully set up yet. Try calling it AFTER KSPSolve(), because by then the PCMG infrastructure is fully set up.
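
   A minimal sketch (untested; b and x stand for whatever right-hand side and solution vectors you pass to KSPSolve()) would be:

-----------------------------------------------------------------------------------------
      CALL KSPSolve( ksp, b, x, ierr )      ! solve first so the PCMG infrastructure is built
      CALL KSPGetPC( ksp, pc, ierr )
      ! PCMGGetInterpolation() returns a reference to the Mat that PCMG already holds,
      ! so no MatCreate/MatSetUp/MatAssembly calls are needed on interpMat
      CALL PCMGGetInterpolation( pc, i1, interpMat, ierr )
      CALL MatView( interpMat, PETSC_VIEWER_STDOUT_SELF, ierr )
-----------------------------------------------------------------------------------------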

  See below also.
> On Dec 7, 2016, at 5:48 PM, frank <hengjiew at uci.edu> wrote:
> 
> Hello,
> 
> Thank you. Now I am able to see the trace of MG.
> I still have a question about the interpolation. I want to get the matrix of the default interpolation method and print it to the terminal.
> The code is as follows (the KSP is already set by PETSc options):
> ----------------------------------------------------------------------------------------- 
> 132   CALL KSPGetPC( ksp, pc, ierr )

Remove the next 4 lines

> 133   CALL MATCreate( PETSC_COMM_WORLD, interpMat, ierr )
> 134   CALL MATSetType( interpMat, MATSEQAIJ, ierr )
> 135   CALL MATSetSizes( interpMat, i5, i5, i5, i5, ierr )
> 136   CALL MATSetUp( interpMat, ierr )

> 137   CALL PCMGGetInterpolation( pc, i1, interpMat, ierr )

Remove the next 2 lines
> 138   CALL MatAssemblyBegin( interpMat, MAT_FINAL_ASSEMBLY, ierr )
> 139   CALL MatAssemblyEnd( interpMat, MAT_FINAL_ASSEMBLY, ierr )

> 140   CALL MatView( interpMat, PETSC_VIEWER_STDOUT_SELF, ierr )
> -----------------------------------------------------------------------------------------
> 
> The error message is:
> -------------------------------------------------------------------------------------------------------
> [0]PETSC ERROR: Object is in wrong state
> [0]PETSC ERROR: Must call PCMGSetInterpolation() or PCMGSetRestriction()
> -------------------------------------------------------------------------------------------------------
> 
> Do I have to set the interpolation first? How can I just print the default interpolation matrix?
> I attached the option file.
> 
> Thank you.
> Frank
> 
> 
> 
> On 12/06/2016 02:31 PM, Jed Brown wrote:
>> frank <hengjiew at uci.edu> writes:
>> 
>>> Dear all,
>>> 
>>> I am trying to use full MG to solve a 2D Poisson equation.
>>> 
>>> I want to set full MG as the solver and SOR as the smoother. Is the
>>> following setup the proper way to do it?
>>>   -ksp_type                    richardson
>>>   -pc_type                      mg
>>>   -pc_mg_type               full
>>>   -mg_levels_ksp_type   richardson
>>>   -mg_levels_pc_type    sor
>>> 
>>> The ksp_view shows the levels from the coarsest mesh to finest mesh in a
>>> linear order.
>> It is showing the solver configuration, not a trace of the cycle.
>> 
>>> I was expecting something like:  coarsest -> level1 -> coarsest -> level1 ->
>>> level2 -> level1 -> coarsest -> ...
>>> Is there a way to show exactly how the full MG proceeds?
>> You could get a trace like this from
>> 
>> -mg_coarse_ksp_converged_reason -mg_levels_ksp_converged_reason
>> 
>> If you want to delimit the iterations, you could add -ksp_monitor.
>> 
>>> Also in the above example, I want to know what interpolation or
>>> prolongation method is used from level1 to level2.
>>> Can I get that info by adding some options? (not using PCMGGetInterpolation)
>>> 
>>> I attached the ksp_view info and my petsc options file.
>>> Thank you.
>>> 
>>> Frank
>>> Linear solve converged due to CONVERGED_RTOL iterations 3
>>> KSP Object: 1 MPI processes
>>>   type: richardson
>>>     Richardson: damping factor=1.
>>>   maximum iterations=10000
>>>   tolerances:  relative=1e-07, absolute=1e-50, divergence=10000.
>>>   left preconditioning
>>>   using nonzero initial guess
>>>   using UNPRECONDITIONED norm type for convergence test
>>> PC Object: 1 MPI processes
>>>   type: mg
>>>     MG: type is FULL, levels=6 cycles=v
>>>       Using Galerkin computed coarse grid matrices
>>>   Coarse grid solver -- level -------------------------------
>>>     KSP Object: (mg_coarse_) 1 MPI processes
>>>       type: preonly
>>>       maximum iterations=10000, initial guess is zero
>>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>       left preconditioning
>>>       using NONE norm type for convergence test
>>>     PC Object: (mg_coarse_) 1 MPI processes
>>>       type: lu
>>>         out-of-place factorization
>>>         tolerance for zero pivot 2.22045e-14
>>>         using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
>>>         matrix ordering: nd
>>>         factor fill ratio given 0., needed 0.
>>>           Factored matrix follows:
>>>             Mat Object: 1 MPI processes
>>>               type: superlu_dist
>>>               rows=64, cols=64
>>>               package used to perform factorization: superlu_dist
>>>               total: nonzeros=0, allocated nonzeros=0
>>>               total number of mallocs used during MatSetValues calls =0
>>>                 SuperLU_DIST run parameters:
>>>                   Process grid nprow 1 x npcol 1
>>>                   Equilibrate matrix TRUE
>>>                   Matrix input mode 0
>>>                   Replace tiny pivots FALSE
>>>                   Use iterative refinement FALSE
>>>                   Processors in row 1 col partition 1
>>>                   Row permutation LargeDiag
>>>                   Column permutation METIS_AT_PLUS_A
>>>                   Parallel symbolic factorization FALSE
>>>                   Repeated factorization SamePattern
>>>       linear system matrix = precond matrix:
>>>       Mat Object: 1 MPI processes
>>>         type: seqaij
>>>         rows=64, cols=64
>>>         total: nonzeros=576, allocated nonzeros=576
>>>         total number of mallocs used during MatSetValues calls =0
>>>           not using I-node routines
>>>   Down solver (pre-smoother) on level 1 -------------------------------
>>>     KSP Object: (mg_levels_1_) 1 MPI processes
>>>       type: richardson
>>>         Richardson: damping factor=1.
>>>       maximum iterations=1
>>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>       left preconditioning
>>>       using nonzero initial guess
>>>       using NONE norm type for convergence test
>>>     PC Object: (mg_levels_1_) 1 MPI processes
>>>       type: sor
>>>         SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>>       linear system matrix = precond matrix:
>>>       Mat Object: 1 MPI processes
>>>         type: seqaij
>>>         rows=256, cols=256
>>>         total: nonzeros=2304, allocated nonzeros=2304
>>>         total number of mallocs used during MatSetValues calls =0
>>>           not using I-node routines
>>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>>   Down solver (pre-smoother) on level 2 -------------------------------
>>>     KSP Object: (mg_levels_2_) 1 MPI processes
>>>       type: richardson
>>>         Richardson: damping factor=1.
>>>       maximum iterations=1
>>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>       left preconditioning
>>>       using nonzero initial guess
>>>       using NONE norm type for convergence test
>>>     PC Object: (mg_levels_2_) 1 MPI processes
>>>       type: sor
>>>         SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>>       linear system matrix = precond matrix:
>>>       Mat Object: 1 MPI processes
>>>         type: seqaij
>>>         rows=1024, cols=1024
>>>         total: nonzeros=9216, allocated nonzeros=9216
>>>         total number of mallocs used during MatSetValues calls =0
>>>           not using I-node routines
>>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>>   Down solver (pre-smoother) on level 3 -------------------------------
>>>     KSP Object: (mg_levels_3_) 1 MPI processes
>>>       type: richardson
>>>         Richardson: damping factor=1.
>>>       maximum iterations=1
>>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>       left preconditioning
>>>       using nonzero initial guess
>>>       using NONE norm type for convergence test
>>>     PC Object: (mg_levels_3_) 1 MPI processes
>>>       type: sor
>>>         SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>>       linear system matrix = precond matrix:
>>>       Mat Object: 1 MPI processes
>>>         type: seqaij
>>>         rows=4096, cols=4096
>>>         total: nonzeros=36864, allocated nonzeros=36864
>>>         total number of mallocs used during MatSetValues calls =0
>>>           not using I-node routines
>>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>>   Down solver (pre-smoother) on level 4 -------------------------------
>>>     KSP Object: (mg_levels_4_) 1 MPI processes
>>>       type: richardson
>>>         Richardson: damping factor=1.
>>>       maximum iterations=1
>>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>       left preconditioning
>>>       using nonzero initial guess
>>>       using NONE norm type for convergence test
>>>     PC Object: (mg_levels_4_) 1 MPI processes
>>>       type: sor
>>>         SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>>       linear system matrix = precond matrix:
>>>       Mat Object: 1 MPI processes
>>>         type: seqaij
>>>         rows=16384, cols=16384
>>>         total: nonzeros=147456, allocated nonzeros=147456
>>>         total number of mallocs used during MatSetValues calls =0
>>>           not using I-node routines
>>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>>   Down solver (pre-smoother) on level 5 -------------------------------
>>>     KSP Object: (mg_levels_5_) 1 MPI processes
>>>       type: richardson
>>>         Richardson: damping factor=1.
>>>       maximum iterations=1
>>>       tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
>>>       left preconditioning
>>>       using nonzero initial guess
>>>       using NONE norm type for convergence test
>>>     PC Object: (mg_levels_5_) 1 MPI processes
>>>       type: sor
>>>         SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
>>>       linear system matrix = precond matrix:
>>>       Mat Object: 1 MPI processes
>>>         type: seqaij
>>>         rows=65536, cols=65536
>>>         total: nonzeros=327680, allocated nonzeros=327680
>>>         total number of mallocs used during MatSetValues calls =0
>>>           has attached null space
>>>           not using I-node routines
>>>   Up solver (post-smoother) same as down solver (pre-smoother)
>>>   linear system matrix = precond matrix:
>>>   Mat Object: 1 MPI processes
>>>     type: seqaij
>>>     rows=65536, cols=65536
>>>     total: nonzeros=327680, allocated nonzeros=327680
>>>     total number of mallocs used during MatSetValues calls =0
>>>       has attached null space
>>>       not using I-node routines
>>> #PETSc Option Table entries:
>>> -ksp_converged_reason
>>> -ksp_initial_guess_nonzero yes
>>> -ksp_norm_type unpreconditioned
>>> -ksp_rtol 1e-7
>>> -ksp_type richardson
>>> -ksp_view
>>> -mg_coarse_ksp_type preonly
>>> -mg_coarse_pc_factor_mat_solver_package superlu_dist
>>> -mg_coarse_pc_type lu
>>> -mg_levels_ksp_max_it 1
>>> -mg_levels_ksp_type richardson
>>> -mg_levels_pc_type sor
>>> -N 256
>>> -options_left
>>> -pc_mg_galerkin
>>> -pc_mg_levels 6
>>> -pc_mg_type full
>>> -pc_type mg
>>> -px 1
>>> -py 1
>>> #End of PETSc Option Table entries
>>> There are no unused options.
>>> -ksp_type        richardson
>>> -ksp_norm_type   unpreconditioned
>>> -ksp_rtol        1e-7
>>> -options_left
>>> -ksp_initial_guess_nonzero  yes
>>> -ksp_converged_reason
>>> -ksp_view
>>> -pc_type mg
>>> -pc_mg_type full
>>> -pc_mg_galerkin
>>> -pc_mg_levels 6
>>> -mg_levels_ksp_type richardson
>>> -mg_levels_pc_type sor
>>> -mg_levels_ksp_max_it 1
>>> -mg_coarse_ksp_type preonly
>>> -mg_coarse_pc_type lu
>>> -mg_coarse_pc_factor_mat_solver_package superlu_dist
> 
> <petsc_options.txt>


