[petsc-users] GAMG processor reduction

John Mousel john.mousel at gmail.com
Thu Nov 21 13:39:39 CST 2013


I'm solving a Poisson equation with BiCGStab/GAMG. Looking at the KSPView
output, I see that GAMG is not reducing the number of processors on the
coarser grids. My coarsest problem is 723x723, yet it is still being
solved on all 192 processes. This doesn't seem right; I believe a problem
this small used to trigger processor reduction. I'm running with

-pres_ksp_type bcgsl -pres_pc_type gamg -pres_pc_gamg_threshold 0.05
-pres_mg_levels_ksp_type richardson -pres_mg_levels_pc_type sor
-pres_mg_coarse_ksp_type richardson -pres_mg_coarse_pc_type sor
-pres_mg_coarse_pc_sor_its 5 -pres_pc_gamg_type agg
-pres_pc_gamg_agg_nsmooths 1 -pres_pc_gamg_sym_graph true -pres_ksp_monitor
-pres_ksp_view -pres_gamg_process_eq_limit 5000
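
In case it's relevant, here is a minimal sketch of how the equation limit
could be set programmatically instead of from the options database (this is
not my actual setup, which uses only the options above). PCGAMGSetProcEqLim()
is the API counterpart of the -pc_gamg_process_eq_limit option; the ksp below
is assumed to already carry the pres_ prefix.

#include <petscksp.h>

/* Sketch: cap the equations per process so that GAMG folds the coarse
   levels onto fewer processes; mirrors the 5000 limit used above. */
static PetscErrorCode ConfigureGAMG(KSP ksp)
{
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
  ierr = PCSetType(pc,PCGAMG);CHKERRQ(ierr);
  /* reduce the number of active processes once a coarse level has
     fewer than ~5000 equations per process */
  ierr = PCGAMGSetProcEqLim(pc,5000);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}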


The KSPView output is:

KSP Object:(pres_) 192 MPI processes
  type: bcgsl
    BCGSL: Ell = 2
    BCGSL: Delta = 0
  maximum iterations=1000, initial guess is zero
  tolerances:  relative=1e-06, absolute=1e-50, divergence=10000
  left preconditioning
  has attached null space
  using PRECONDITIONED norm type for convergence test
PC Object:(pres_) 192 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=4 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (pres_mg_coarse_)     192 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (pres_mg_coarse_)     192 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 5, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       192 MPI processes
        type: mpiaij
        rows=723, cols=723
        total: nonzeros=519221, allocated nonzeros=519221
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 499 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (pres_mg_levels_1_)     192 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (pres_mg_levels_1_)     192 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       192 MPI processes
        type: mpiaij
        rows=15231, cols=15231
        total: nonzeros=4604432, allocated nonzeros=4604432
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 33 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (pres_mg_levels_2_)     192 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (pres_mg_levels_2_)     192 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       192 MPI processes
        type: mpiaij
        rows=242341, cols=242341
        total: nonzeros=18277529, allocated nonzeros=18277529
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object:    (pres_mg_levels_3_)     192 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (pres_mg_levels_3_)     192 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       192 MPI processes
        type: mpiaij
        rows=3285079, cols=3285079
        total: nonzeros=48995036, allocated nonzeros=53586621
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object:   192 MPI processes
    type: mpiaij
    rows=3285079, cols=3285079
    total: nonzeros=48995036, allocated nonzeros=53586621
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines



John