[petsc-users] GAMG and linear elasticity

Tabrez Ali stali at geology.wisc.edu
Tue Aug 27 14:45:42 CDT 2013


Hello

What is the proper way to use GAMG on a vanilla 3D linear elasticity 
problem? Should I use

-pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1

or

-pc_type fieldsplit -pc_fieldsplit_block_size 3 -fieldsplit_pc_type gamg 
-fieldsplit_pc_gamg_type agg -fieldsplit_pc_gamg_agg_nsmooths 1

Do these options even make sense? With the second set of options, the 
percentage increase in the number of iterations with increasing problem 
size is lower than with the first, but still not optimal.
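One thing I am unsure about is whether I should also be supplying GAMG
with the rigid-body modes of the elasticity operator as a near-null
space. A minimal sketch of what I have in mind is below (the helper
name is mine, and "coords" stands in for my application's nodal
coordinate vector):

  #include <petscksp.h>

  /* Sketch only: attach the six 3D rigid-body modes to the stiffness
     matrix so that -pc_type gamg can use them during aggregation.
     "coords" is assumed to be a Vec of interleaved nodal coordinates
     (x0,y0,z0,x1,...) with block size 3 and the same parallel layout
     as the solution vector. */
  PetscErrorCode AttachRigidBodyModes(Mat A, Vec coords)
  {
    MatNullSpace   rbm;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = MatSetBlockSize(A, 3);CHKERRQ(ierr);       /* 3 dofs per node */
    ierr = MatNullSpaceCreateRigidBody(coords, &rbm);CHKERRQ(ierr);
    ierr = MatSetNearNullSpace(A, rbm);CHKERRQ(ierr); /* picked up by GAMG */
    ierr = MatNullSpaceDestroy(&rbm);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }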

Also, ksp/ksp/examples/ex56 performs much better, in that the number of 
iterations remains more or less constant, unlike what I see with my own 
problem. What am I doing wrong?
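If I understand ex56 correctly, it instead hands the nodal coordinates
to the preconditioner and lets GAMG construct the rigid-body modes
itself. A sketch of doing the same in my code ("xyz" and "nloc" are
hypothetical names for my coordinate array and local node count):

  /* Sketch only: pass nodal coordinates to GAMG, as I believe ex56
     does. xyz holds the interleaved (x,y,z) coordinates of the nloc
     locally owned nodes. */
  PetscErrorCode SetGAMGCoordinates(KSP ksp, PetscInt nloc, PetscReal xyz[])
  {
    PC             pc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetCoordinates(pc, 3, nloc, xyz);CHKERRQ(ierr); /* dim = 3 */
    PetscFunctionReturn(0);
  }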

The output of -ksp_view for the two sets of options is attached.

Thanks in advance.

Tabrez

-------------- next part --------------
 Reading input ...
 Partitioning mesh ...
 Reading mesh data ...
 Forming [K] ...
 Forming RHS ...
 Setting up solver ...
 Solving ...
  0 KSP Residual norm 5.201733187820e-02 
  1 KSP Residual norm 1.099026395850e-02 
  2 KSP Residual norm 5.042531960219e-03 
  3 KSP Residual norm 2.900154719433e-03 
  4 KSP Residual norm 1.981423195364e-03 
  5 KSP Residual norm 1.427135427398e-03 
  6 KSP Residual norm 1.098375830345e-03 
  7 KSP Residual norm 8.171142731182e-04 
  8 KSP Residual norm 6.241353708263e-04 
  9 KSP Residual norm 4.594842173716e-04 
 10 KSP Residual norm 3.422820541875e-04 
 11 KSP Residual norm 2.288676731103e-04 
 12 KSP Residual norm 1.403795429712e-04 
 13 KSP Residual norm 8.497517268662e-05 
 14 KSP Residual norm 4.612536416341e-05 
 15 KSP Residual norm 2.617765913915e-05 
 16 KSP Residual norm 1.510196277776e-05 
 17 KSP Residual norm 9.019114875021e-06 
 18 KSP Residual norm 6.009327953180e-06 
 19 KSP Residual norm 4.355601035228e-06 
 20 KSP Residual norm 2.944914091024e-06 
 21 KSP Residual norm 1.695461437589e-06 
 22 KSP Residual norm 1.062228336911e-06 
 23 KSP Residual norm 6.663147669163e-07 
 24 KSP Residual norm 4.312489682055e-07 
 25 KSP Residual norm 2.893524615337e-07 
 26 KSP Residual norm 1.914089929812e-07 
 27 KSP Residual norm 1.238817489532e-07 
 28 KSP Residual norm 7.683272381931e-08 
 29 KSP Residual norm 4.169276110310e-08 
 30 KSP Residual norm 2.148781941016e-08 
 31 KSP Residual norm 1.403054516655e-08 
 32 KSP Residual norm 8.805306038787e-09 
 33 KSP Residual norm 5.401864440509e-09 
 34 KSP Residual norm 3.223812851026e-09 
 35 KSP Residual norm 2.014263357765e-09 
 36 KSP Residual norm 1.360071786265e-09 
 37 KSP Residual norm 8.977977623075e-10 
 38 KSP Residual norm 5.671948481098e-10 
 39 KSP Residual norm 3.671046658729e-10 
 40 KSP Residual norm 2.210643616019e-10 
 41 KSP Residual norm 1.495535545659e-10 
 42 KSP Residual norm 1.008918360828e-10 
 43 KSP Residual norm 6.783838063885e-11 
 44 KSP Residual norm 4.352151663612e-11 
KSP Object: 4 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-09, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 4 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=3 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (mg_coarse_)     4 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (mg_coarse_)     4 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 4
        Local solve info for each block is in the following KSP and PC objects:
      [0] number of local blocks = 1, first local block number = 0
        [0] local block number 0
        KSP Object:        (mg_coarse_sub_)         1 MPI processes
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (mg_coarse_sub_)         1 MPI processes
          type: lu
            LU: out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot
            matrix ordering: nd
            factor fill ratio given 5, needed 1.06642
              Factored matrix follows:
                Matrix Object:                 1 MPI processes
                  type: seqaij
                  rows=26, cols=26
                  package used to perform factorization: petsc
                  total: nonzeros=578, allocated nonzeros=578
                  total number of mallocs used during MatSetValues calls =0
                    using I-node routines: found 14 nodes, limit used is 5
          linear system matrix = precond matrix:
          Matrix Object:           1 MPI processes
            type: seqaij
            rows=26, cols=26
            total: nonzeros=542, allocated nonzeros=542
            total number of mallocs used during MatSetValues calls =0
              not using I-node routines
        - - - - - - - - - - - - - - - - - -
      [1] number of local blocks = 1, first local block number = 1
        [1] local block number 0
        - - - - - - - - - - - - - - - - - -
      [2] number of local blocks = 1, first local block number = 2
        [2] local block number 0
        - - - - - - - - - - - - - - - - - -
      [3] number of local blocks = 1, first local block number = 3
        [3] local block number 0
        - - - - - - - - - - - - - - - - - -
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=26, cols=26
        total: nonzeros=542, allocated nonzeros=542
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (mg_levels_1_)     4 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.0661807, max = 1.38979
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_1_)     4 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=1386, cols=1386
        total: nonzeros=49460, allocated nonzeros=49460
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (mg_levels_2_)     4 MPI processes
      type: chebyshev
        Chebyshev: eigenvalue estimates:  min = 0.1332, max = 2.7972
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (mg_levels_2_)     4 MPI processes
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=202878, cols=202878
        total: nonzeros=15595884, allocated nonzeros=63297936
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 16907 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object:   4 MPI processes
    type: mpiaij
    rows=202878, cols=202878
    total: nonzeros=15595884, allocated nonzeros=63297936
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 16907 nodes, limit used is 5
 Recovering stress ...
 Cleaning up ...
 Finished

-------------- next part --------------
 Reading input ...
 Partitioning mesh ...
 Reading mesh data ...
 Forming [K] ...
 Forming RHS ...
 Setting up solver ...
 Solving ...
  0 KSP Residual norm 4.062139693015e-02 
  1 KSP Residual norm 1.466113648411e-02 
  2 KSP Residual norm 6.479863911139e-03 
  3 KSP Residual norm 4.569049461591e-03 
  4 KSP Residual norm 3.130857994128e-03 
  5 KSP Residual norm 1.983543889095e-03 
  6 KSP Residual norm 1.156789632219e-03 
  7 KSP Residual norm 5.899045914732e-04 
  8 KSP Residual norm 2.837798321640e-04 
  9 KSP Residual norm 1.359117543889e-04 
 10 KSP Residual norm 6.385081462171e-05 
 11 KSP Residual norm 2.935882041357e-05 
 12 KSP Residual norm 1.493739596377e-05 
 13 KSP Residual norm 9.201338063289e-06 
 14 KSP Residual norm 5.884399324670e-06 
 15 KSP Residual norm 3.613939011973e-06 
 16 KSP Residual norm 2.382929136315e-06 
 17 KSP Residual norm 1.560623578712e-06 
 18 KSP Residual norm 9.197810318628e-07 
 19 KSP Residual norm 5.339056563737e-07 
 20 KSP Residual norm 3.060078898263e-07 
 21 KSP Residual norm 1.707524658269e-07 
 22 KSP Residual norm 9.973870483901e-08 
 23 KSP Residual norm 5.939404758593e-08 
 24 KSP Residual norm 3.323258377859e-08 
 25 KSP Residual norm 1.830778495567e-08 
 26 KSP Residual norm 1.141547456761e-08 
 27 KSP Residual norm 7.355063277008e-09 
 28 KSP Residual norm 4.857944128572e-09 
 29 KSP Residual norm 3.285608748712e-09 
 30 KSP Residual norm 2.021520313423e-09 
 31 KSP Residual norm 1.433518924534e-09 
 32 KSP Residual norm 1.022603460571e-09 
 33 KSP Residual norm 7.063122249368e-10 
 34 KSP Residual norm 4.470858335207e-10 
 35 KSP Residual norm 2.775173681825e-10 
 36 KSP Residual norm 1.703746060374e-10 
 37 KSP Residual norm 9.782267597341e-11 
 38 KSP Residual norm 5.315585921715e-11 
 39 KSP Residual norm 2.846271417839e-11 
KSP Object: 4 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-09, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 4 MPI processes
  type: fieldsplit
    FieldSplit with MULTIPLICATIVE composition: total splits = 3, blocksize = 3
    Solver info for each split is in the following KSP objects:
    Split number 0 Fields  0
    KSP Object:    (fieldsplit_0_)     4 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (fieldsplit_0_)     4 MPI processes
      type: gamg
        MG: type is MULTIPLICATIVE, levels=3 cycles=v
          Cycles per PCApply=1
          Using Galerkin computed coarse grid matrices
      Coarse grid solver -- level -------------------------------
        KSP Object:        (fieldsplit_0_mg_coarse_)         4 MPI processes
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (fieldsplit_0_mg_coarse_)         4 MPI processes
          type: bjacobi
            block Jacobi: number of blocks = 4
            Local solve info for each block is in the following KSP and PC objects:
          [0] number of local blocks = 1, first local block number = 0
            [0] local block number 0
            KSP Object:            (fieldsplit_0_mg_coarse_sub_)             1 MPI processes
              type: preonly
              maximum iterations=1, initial guess is zero
              tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
              left preconditioning
              using NONE norm type for convergence test
            PC Object:            (fieldsplit_0_mg_coarse_sub_)             1 MPI processes
              type: lu
                LU: out-of-place factorization
                tolerance for zero pivot 2.22045e-14
                using diagonal shift on blocks to prevent zero pivot
                matrix ordering: nd
                factor fill ratio given 5, needed 1.07615
                  Factored matrix follows:
                    Matrix Object:                     1 MPI processes
                      type: seqaij
                      rows=25, cols=25
                      package used to perform factorization: petsc
                      total: nonzeros=537, allocated nonzeros=537
                      total number of mallocs used during MatSetValues calls =0
                        using I-node routines: found 13 nodes, limit used is 5
              linear system matrix = precond matrix:
              Matrix Object:               1 MPI processes
                type: seqaij
                rows=25, cols=25
                total: nonzeros=499, allocated nonzeros=499
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            - - - - - - - - - - - - - - - - - -
          [1] number of local blocks = 1, first local block number = 1
            [1] local block number 0
            - - - - - - - - - - - - - - - - - -
          [2] number of local blocks = 1, first local block number = 2
            [2] local block number 0
            - - - - - - - - - - - - - - - - - -
          [3] number of local blocks = 1, first local block number = 3
            [3] local block number 0
            - - - - - - - - - - - - - - - - - -
          linear system matrix = precond matrix:
          Matrix Object:           4 MPI processes
            type: mpiaij
            rows=25, cols=25
            total: nonzeros=499, allocated nonzeros=499
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
      Down solver (pre-smoother) on level 1 -------------------------------
        KSP Object:        (fieldsplit_0_mg_levels_1_)         4 MPI processes
          type: chebyshev
            Chebyshev: eigenvalue estimates:  min = 0.0654046, max = 1.3735
          maximum iterations=2
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using nonzero initial guess
          using NONE norm type for convergence test
        PC Object:        (fieldsplit_0_mg_levels_1_)         4 MPI processes
          type: jacobi
          linear system matrix = precond matrix:
          Matrix Object:           4 MPI processes
            type: mpiaij
            rows=1416, cols=1416
            total: nonzeros=51260, allocated nonzeros=51260
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
      Up solver (post-smoother) same as down solver (pre-smoother)
      Down solver (pre-smoother) on level 2 -------------------------------
        KSP Object:        (fieldsplit_0_mg_levels_2_)         4 MPI processes
          type: chebyshev
            Chebyshev: eigenvalue estimates:  min = 0.132851, max = 2.78987
          maximum iterations=2
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using nonzero initial guess
          using NONE norm type for convergence test
        PC Object:        (fieldsplit_0_mg_levels_2_)         4 MPI processes
          type: jacobi
          linear system matrix = precond matrix:
          Matrix Object:           4 MPI processes
            type: mpiaij
            rows=67624, cols=67624
            total: nonzeros=1732830, allocated nonzeros=1732830
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
      Up solver (post-smoother) same as down solver (pre-smoother)
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=67624, cols=67624
        total: nonzeros=1732830, allocated nonzeros=1732830
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
    Split number 1 Fields  1
    KSP Object:    (fieldsplit_1_)     4 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (fieldsplit_1_)     4 MPI processes
      type: gamg
        MG: type is MULTIPLICATIVE, levels=3 cycles=v
          Cycles per PCApply=1
          Using Galerkin computed coarse grid matrices
      Coarse grid solver -- level -------------------------------
        KSP Object:        (fieldsplit_1_mg_coarse_)         4 MPI processes
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (fieldsplit_1_mg_coarse_)         4 MPI processes
          type: bjacobi
            block Jacobi: number of blocks = 4
            Local solve info for each block is in the following KSP and PC objects:
          [0] number of local blocks = 1, first local block number = 0
            [0] local block number 0
            KSP Object:            (fieldsplit_1_mg_coarse_sub_)             1 MPI processes
              type: preonly
              maximum iterations=1, initial guess is zero
              tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
              left preconditioning
              using NONE norm type for convergence test
            PC Object:            (fieldsplit_1_mg_coarse_sub_)             1 MPI processes
              type: lu
                LU: out-of-place factorization
                tolerance for zero pivot 2.22045e-14
                using diagonal shift on blocks to prevent zero pivot
                matrix ordering: nd
                factor fill ratio given 5, needed 1.13901
                  Factored matrix follows:
                    Matrix Object:                     1 MPI processes
                      type: seqaij
                      rows=24, cols=24
                      package used to perform factorization: petsc
                      total: nonzeros=508, allocated nonzeros=508
                      total number of mallocs used during MatSetValues calls =0
                        using I-node routines: found 14 nodes, limit used is 5
              linear system matrix = precond matrix:
              Matrix Object:               1 MPI processes
                type: seqaij
                rows=24, cols=24
                total: nonzeros=446, allocated nonzeros=446
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            - - - - - - - - - - - - - - - - - -
          [1] number of local blocks = 1, first local block number = 1
            [1] local block number 0
            - - - - - - - - - - - - - - - - - -
          [2] number of local blocks = 1, first local block number = 2
            [2] local block number 0
            - - - - - - - - - - - - - - - - - -
          [3] number of local blocks = 1, first local block number = 3
            [3] local block number 0
            - - - - - - - - - - - - - - - - - -
          linear system matrix = precond matrix:
          Matrix Object:           4 MPI processes
            type: mpiaij
            rows=24, cols=24
            total: nonzeros=446, allocated nonzeros=446
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
      Down solver (pre-smoother) on level 1 -------------------------------
        KSP Object:        (fieldsplit_1_mg_levels_1_)         4 MPI processes
          type: chebyshev
            Chebyshev: eigenvalue estimates:  min = 0.0662612, max = 1.39149
          maximum iterations=2
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using nonzero initial guess
          using NONE norm type for convergence test
        PC Object:        (fieldsplit_1_mg_levels_1_)         4 MPI processes
          type: jacobi
          linear system matrix = precond matrix:
          Matrix Object:           4 MPI processes
            type: mpiaij
            rows=1410, cols=1410
            total: nonzeros=50558, allocated nonzeros=50558
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
      Up solver (post-smoother) same as down solver (pre-smoother)
      Down solver (pre-smoother) on level 2 -------------------------------
        KSP Object:        (fieldsplit_1_mg_levels_2_)         4 MPI processes
          type: chebyshev
            Chebyshev: eigenvalue estimates:  min = 0.132704, max = 2.78678
          maximum iterations=2
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using nonzero initial guess
          using NONE norm type for convergence test
        PC Object:        (fieldsplit_1_mg_levels_2_)         4 MPI processes
          type: jacobi
          linear system matrix = precond matrix:
          Matrix Object:           4 MPI processes
            type: mpiaij
            rows=67624, cols=67624
            total: nonzeros=1732770, allocated nonzeros=1732770
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
      Up solver (post-smoother) same as down solver (pre-smoother)
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=67624, cols=67624
        total: nonzeros=1732770, allocated nonzeros=1732770
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
    Split number 2 Fields  2
    KSP Object:    (fieldsplit_2_)     4 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (fieldsplit_2_)     4 MPI processes
      type: gamg
        MG: type is MULTIPLICATIVE, levels=3 cycles=v
          Cycles per PCApply=1
          Using Galerkin computed coarse grid matrices
      Coarse grid solver -- level -------------------------------
        KSP Object:        (fieldsplit_2_mg_coarse_)         4 MPI processes
          type: preonly
          maximum iterations=1, initial guess is zero
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (fieldsplit_2_mg_coarse_)         4 MPI processes
          type: bjacobi
            block Jacobi: number of blocks = 4
            Local solve info for each block is in the following KSP and PC objects:
          [0] number of local blocks = 1, first local block number = 0
            [0] local block number 0
            KSP Object:            (fieldsplit_2_mg_coarse_sub_)             1 MPI processes
              type: preonly
              maximum iterations=1, initial guess is zero
              tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
              left preconditioning
              using NONE norm type for convergence test
            PC Object:            (fieldsplit_2_mg_coarse_sub_)             1 MPI processes
              type: lu
                LU: out-of-place factorization
                tolerance for zero pivot 2.22045e-14
                using diagonal shift on blocks to prevent zero pivot
                matrix ordering: nd
                factor fill ratio given 5, needed 1.13309
                  Factored matrix follows:
                    Matrix Object:                     1 MPI processes
                      type: seqaij
                      rows=27, cols=27
                      package used to perform factorization: petsc
                      total: nonzeros=613, allocated nonzeros=613
                      total number of mallocs used during MatSetValues calls =0
                        using I-node routines: found 14 nodes, limit used is 5
              linear system matrix = precond matrix:
              Matrix Object:               1 MPI processes
                type: seqaij
                rows=27, cols=27
                total: nonzeros=541, allocated nonzeros=541
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            - - - - - - - - - - - - - - - - - -
          [1] number of local blocks = 1, first local block number = 1
            [1] local block number 0
            - - - - - - - - - - - - - - - - - -
          [2] number of local blocks = 1, first local block number = 2
            [2] local block number 0
            - - - - - - - - - - - - - - - - - -
          [3] number of local blocks = 1, first local block number = 3
            [3] local block number 0
            - - - - - - - - - - - - - - - - - -
          linear system matrix = precond matrix:
          Matrix Object:           4 MPI processes
            type: mpiaij
            rows=27, cols=27
            total: nonzeros=541, allocated nonzeros=541
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
      Down solver (pre-smoother) on level 1 -------------------------------
        KSP Object:        (fieldsplit_2_mg_levels_1_)         4 MPI processes
          type: chebyshev
            Chebyshev: eigenvalue estimates:  min = 0.0659669, max = 1.38531
          maximum iterations=2
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using nonzero initial guess
          using NONE norm type for convergence test
        PC Object:        (fieldsplit_2_mg_levels_1_)         4 MPI processes
          type: jacobi
          linear system matrix = precond matrix:
          Matrix Object:           4 MPI processes
            type: mpiaij
            rows=1411, cols=1411
            total: nonzeros=50491, allocated nonzeros=50491
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
      Up solver (post-smoother) same as down solver (pre-smoother)
      Down solver (pre-smoother) on level 2 -------------------------------
        KSP Object:        (fieldsplit_2_mg_levels_2_)         4 MPI processes
          type: chebyshev
            Chebyshev: eigenvalue estimates:  min = 0.132415, max = 2.78072
          maximum iterations=2
          tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using nonzero initial guess
          using NONE norm type for convergence test
        PC Object:        (fieldsplit_2_mg_levels_2_)         4 MPI processes
          type: jacobi
          linear system matrix = precond matrix:
          Matrix Object:           4 MPI processes
            type: mpiaij
            rows=67624, cols=67624
            total: nonzeros=1732808, allocated nonzeros=1732808
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
      Up solver (post-smoother) same as down solver (pre-smoother)
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=67624, cols=67624
        total: nonzeros=1732808, allocated nonzeros=1732808
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  linear system matrix = precond matrix:
  Matrix Object:   4 MPI processes
    type: mpiaij
    rows=202878, cols=202878
    total: nonzeros=15595884, allocated nonzeros=63297936
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 16907 nodes, limit used is 5
 Recovering stress ...
 Cleaning up ...
 Finished
