[petsc-users] Fwd: Problem running ex54f with GAMG

Fabio Durastante fabio.durastante at unipi.it
Fri Mar 18 10:44:24 CDT 2022


Hi everybody,

I'm trying to run the rotated anisotropy example ex54f with CG and GAMG as the preconditioner. I run it with the command:

mpirun -np 2 ./ex54f -ne 1011 \
-theta 18.0 \
-epsilon 100.0 \
-pc_type gamg \
-pc_gamg_type agg \
-log_view \
-log_trace \
-ksp_view \
-ksp_monitor \
-ksp_type cg \
-mg_levels_pc_type jacobi \
-mg_levels_ksp_type richardson \
-mg_levels_ksp_max_it 4 \
-ksp_atol 1e-9 \
-ksp_rtol 1e-12

But the CG solve seems to stop after just two iterations:

   0 KSP Residual norm 6.666655711717e-02
   1 KSP Residual norm 9.859661350927e-03
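
I can also rerun with -ksp_converged_reason (the standard PETSc option that reports why the solver stopped) if that helps; a sketch of that command, with everything else unchanged:

mpirun -np 2 ./ex54f -ne 1011 \
       -theta 18.0 -epsilon 100.0 \
       -ksp_type cg -ksp_monitor -ksp_converged_reason \
       -ksp_atol 1e-9 -ksp_rtol 1e-12 \
       -pc_type gamg -pc_gamg_type agg \
       -mg_levels_ksp_type richardson -mg_levels_pc_type jacobi -mg_levels_ksp_max_it 4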

I'm attaching the full log. The problem seems to appear when I modify the value of epsilon: if I leave it at the default (1.0), it prints

   0 KSP Residual norm 5.862074869050e+00
   1 KSP Residual norm 5.132711016122e-01
   2 KSP Residual norm 1.198566629717e-01
   3 KSP Residual norm 1.992885901625e-02
   4 KSP Residual norm 4.919780086064e-03
   5 KSP Residual norm 1.417045143681e-03
   6 KSP Residual norm 3.559622318760e-04
   7 KSP Residual norm 9.270786187701e-05
   8 KSP Residual norm 1.886403709163e-05
   9 KSP Residual norm 2.940634415714e-06
  10 KSP Residual norm 5.015043022637e-07
  11 KSP Residual norm 9.760219712757e-08
  12 KSP Residual norm 2.320857464659e-08
  13 KSP Residual norm 4.563772507631e-09
  14 KSP Residual norm 8.896675476997e-10

which is very strange, because the case with epsilon = 1 should be the easier one.
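
Note that these monitors report the PRECONDITIONED residual norm (as the ksp_view output in the attached log shows), so the magnitudes of the two runs are not directly comparable. If useful, I can repeat both runs with -ksp_monitor_true_residual, another standard PETSc option, to also print the unpreconditioned residuals; e.g. for the default epsilon:

mpirun -np 2 ./ex54f -ne 1011 \
       -theta 18.0 -epsilon 1.0 \
       -ksp_type cg -ksp_monitor_true_residual \
       -ksp_atol 1e-9 -ksp_rtol 1e-12 \
       -pc_type gamg -pc_gamg_type agg \
       -mg_levels_ksp_type richardson -mg_levels_pc_type jacobi -mg_levels_ksp_max_it 4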

Any help with this would be great.

Thank you very much,

Fabio Durastante
-------------- attachment: full log --------------
  0 KSP Residual norm 6.666655711717e-02 
  1 KSP Residual norm 9.859661350927e-03 
KSP Object: 2 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-12, absolute=1e-09, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 2 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=6 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level =   0.   0.   0.   0.   0.   0.  
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Symmetric graph false
          Number of levels to square graph 1
          Number smoothing steps 1
        Complexity:    grid = 1.13597
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 2 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 2 MPI processes
      type: bjacobi
        number of blocks = 2
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (mg_coarse_sub_) 1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_sub_) 1 MPI processes
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.
            Factored matrix follows:
              Mat Object: 1 MPI processes
                type: seqaij
                rows=6, cols=6
                package used to perform factorization: petsc
                total: nonzeros=36, allocated nonzeros=36
                  using I-node routines: found 2 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: (mg_coarse_sub_) 1 MPI processes
          type: seqaij
          rows=6, cols=6
          total: nonzeros=36, allocated nonzeros=36
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 2 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object: 2 MPI processes
        type: mpiaij
        rows=6, cols=6
        total: nonzeros=36, allocated nonzeros=36
        total number of mallocs used during MatSetValues calls=0
          using I-node (on process 0) routines: found 2 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 2 MPI processes
      type: richardson
        damping factor=1.
      maximum iterations=4, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 2 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 2 MPI processes
        type: mpiaij
        rows=80, cols=80
        total: nonzeros=2236, allocated nonzeros=2236
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (mg_levels_2_) 2 MPI processes
      type: richardson
        damping factor=1.
      maximum iterations=4, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_2_) 2 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 2 MPI processes
        type: mpiaij
        rows=1086, cols=1086
        total: nonzeros=34476, allocated nonzeros=34476
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object: (mg_levels_3_) 2 MPI processes
      type: richardson
        damping factor=1.
      maximum iterations=4, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_3_) 2 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 2 MPI processes
        type: mpiaij
        rows=12368, cols=12368
        total: nonzeros=304142, allocated nonzeros=304142
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 4 -------------------------------
    KSP Object: (mg_levels_4_) 2 MPI processes
      type: richardson
        damping factor=1.
      maximum iterations=4, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_4_) 2 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 2 MPI processes
        type: mpiaij
        rows=77337, cols=77337
        total: nonzeros=910755, allocated nonzeros=910755
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 5 -------------------------------
    KSP Object: (mg_levels_5_) 2 MPI processes
      type: richardson
        damping factor=1.
      maximum iterations=4, nonzero initial guess
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_5_) 2 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 2 MPI processes
        type: mpiaij
        rows=1024144, cols=1024144
        total: nonzeros=9205156, allocated nonzeros=15362160
        total number of mallocs used during MatSetValues calls=0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 2 MPI processes
    type: mpiaij
    rows=1024144, cols=1024144
    total: nonzeros=9205156, allocated nonzeros=15362160
    total number of mallocs used during MatSetValues calls=0
      not using I-node (on process 0) routines
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./ex54f on a arch-linux-c-opt named grace with 2 processors, by fabiod Fri Mar 18 16:38:04 2022
Using Petsc Release Version 3.16.3, unknown 

                         Max       Max/Min     Avg       Total
Time (sec):           2.719e+00     1.000   2.719e+00
Objects:              5.450e+02     1.004   5.440e+02
Flop:                 5.535e+08     1.001   5.532e+08  1.106e+09
Flop/sec:             2.035e+08     1.001   2.034e+08  4.069e+08
MPI Messages:         3.205e+02     1.009   3.190e+02  6.380e+02
MPI Message Lengths:  1.207e+06     1.000   3.783e+03  2.413e+06
MPI Reductions:       5.740e+02     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 2.7193e+00 100.0%  1.1065e+09 100.0%  6.380e+02 100.0%  3.783e+03      100.0%  5.540e+02  96.5%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         71 1.0 7.1392e-0214.7 0.00e+00 0.0 7.1e+01 4.0e+00 7.1e+01  1  0 11  0 12   1  0 11  0 13     0
BuildTwoSidedF        44 1.0 7.1353e-0215.9 0.00e+00 0.0 3.8e+01 2.0e+04 4.4e+01  1  0  6 31  8   1  0  6 31  8     0
MatMult              131 1.0 1.9677e-01 1.0 2.66e+08 1.0 2.8e+02 2.7e+03 5.0e+00  7 48 44 32  1   7 48 44 32  1  2704
MatMultAdd            10 1.0 8.1694e-03 1.0 5.91e+06 1.0 1.8e+01 6.9e+02 0.0e+00  0  1  3  1  0   0  1  3  1  0  1446
MatMultTranspose      10 1.0 8.6451e-03 1.0 5.91e+06 1.0 3.6e+01 4.3e+02 5.0e+00  0  1  6  1  1   0  1  6  1  1  1367
MatSolve               2 0.0 7.2710e-06 0.0 1.32e+02 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    18
MatLUFactorSym         1 1.0 1.3105e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         1 1.0 7.9300e-06 2.1 1.29e+02 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    16
MatConvert             5 1.0 3.5570e-02 1.0 0.00e+00 0.0 2.0e+01 7.1e+02 5.0e+00  1  0  3  1  1   1  0  3  1  1     0
MatScale              15 1.0 2.1281e-02 1.0 1.34e+07 1.0 1.0e+01 2.8e+03 0.0e+00  1  2  2  1  0   1  2  2  1  0  1260
MatResidual           10 1.0 1.6194e-02 1.0 2.09e+07 1.0 2.0e+01 2.8e+03 0.0e+00  1  4  3  2  0   1  4  3  2  0  2583
MatAssemblyBegin      84 1.0 7.2778e-0216.0 0.00e+00 0.0 3.8e+01 2.0e+04 2.9e+01  1  0  6 31  5   1  0  6 31  5     0
MatAssemblyEnd        84 1.0 1.8844e-01 1.0 7.86e+03 2.3 0.0e+00 0.0e+00 9.6e+01  7  0  0  0 17   7  0  0  0 17     0
MatGetRowIJ            1 0.0 5.4830e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCreateSubMat        2 1.0 2.3775e-04 1.0 0.00e+00 0.0 5.0e+00 7.0e+01 2.8e+01  0  0  1  0  5   0  0  1  0  5     0
MatGetOrdering         1 0.0 4.9585e-05 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             5 1.0 3.4196e-02 1.0 0.00e+00 0.0 5.8e+01 1.9e+03 1.4e+01  1  0  9  5  2   1  0  9  5  3     0
MatZeroEntries         5 1.0 7.8601e-04 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView                9 1.3 8.4609e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 7.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatAXPY                5 1.0 3.8052e-02 1.0 5.58e+05 1.0 0.0e+00 0.0e+00 5.0e+00  1  0  0  0  1   1  0  0  0  1    29
MatTranspose          10 1.0 1.1442e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatMultSym         15 1.0 1.1629e-01 1.0 0.00e+00 0.0 3.0e+01 1.9e+03 4.5e+01  4  0  5  2  8   4  0  5  2  8     0
MatMatMultNum         15 1.0 4.2265e-02 1.0 2.69e+07 1.0 1.0e+01 2.8e+03 5.0e+00  2  5  2  1  1   2  5  2  1  1  1272
MatPtAPSymbolic        5 1.0 1.9834e-01 1.0 0.00e+00 0.0 6.0e+01 3.7e+03 3.5e+01  7  0  9  9  6   7  0  9  9  6     0
MatPtAPNumeric         5 1.0 7.2245e-02 1.0 4.51e+07 1.0 2.0e+01 7.8e+03 2.5e+01  3  8  3  6  4   3  8  3  6  5  1247
MatTrnMatMultSym       1 1.0 6.6188e-01 1.0 0.00e+00 0.0 1.0e+01 6.2e+04 1.2e+01 24  0  2 26  2  24  0  2 26  2     0
MatGetLocalMat        16 1.0 6.5550e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
MatGetBrAoCol         15 1.0 2.9662e-03 1.6 0.00e+00 0.0 7.0e+01 3.7e+03 0.0e+00  0  0 11 11  0   0  0 11 11  0     0
VecMDot               50 1.0 2.5929e-02 1.0 6.13e+07 1.0 0.0e+00 0.0e+00 5.0e+01  1 11  0  0  9   1 11  0  0  9  4730
VecTDot                3 1.0 1.6664e-03 1.1 3.07e+06 1.0 0.0e+00 0.0e+00 3.0e+00  0  1  0  0  1   0  1  0  0  1  3687
VecNorm               57 1.0 4.3805e-03 1.1 1.43e+07 1.0 0.0e+00 0.0e+00 5.7e+01  0  3  0  0 10   0  3  0  0 10  6535
VecScale              55 1.0 3.1254e-03 1.1 6.13e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  3924
VecCopy               17 1.0 5.6865e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                80 1.0 2.9919e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               87 1.0 9.4864e-03 1.0 2.10e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  4  0  0  0   0  4  0  0  0  4428
VecAYPX               80 1.0 1.1726e-02 1.1 8.92e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0  1521
VecMAXPY              55 1.0 3.7916e-02 1.0 7.25e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1 13  0  0  0   1 13  0  0  0  3823
VecAssemblyBegin      16 1.0 1.0659e-03 7.7 0.00e+00 0.0 0.0e+00 0.0e+00 1.5e+01  0  0  0  0  3   0  0  0  0  3     0
VecAssemblyEnd        16 1.0 1.6375e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult     135 1.0 2.7489e-02 1.0 1.51e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  3  0  0  0   1  3  0  0  0  1095
VecScatterBegin      172 1.0 8.5210e-04 1.1 0.00e+00 0.0 4.0e+02 2.7e+03 1.7e+01  0  0 63 45  3   0  0 63 45  3     0
VecScatterEnd        172 1.0 2.9094e-03 2.0 9.20e+02 1.4 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     1
VecSetRandom           5 1.0 1.2987e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          55 1.0 6.9701e-03 1.0 1.84e+07 1.0 0.0e+00 0.0e+00 5.5e+01  0  3  0  0 10   0  3  0  0 10  5279
SFSetGraph            35 1.0 1.4065e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetUp               27 1.0 1.0133e-03 1.4 0.00e+00 0.0 1.0e+02 8.1e+02 2.7e+01  0  0 16  3  5   0  0 16  3  5     0
SFBcastBegin          19 1.0 6.6749e-05 1.1 0.00e+00 0.0 3.8e+01 2.3e+03 0.0e+00  0  0  6  4  0   0  0  6  4  0     0
SFBcastEnd            19 1.0 1.0335e-03 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFPack               191 1.0 8.2768e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFUnpack             191 1.0 3.5429e-05 1.0 9.20e+02 1.4 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    44
KSPSetUp              13 1.0 1.3468e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+01  0  0  0  0  2   0  0  0  0  2     0
KSPSolve               1 1.0 1.9267e-01 1.0 2.22e+08 1.0 2.2e+02 2.3e+03 2.0e+01  7 40 34 21  3   7 40 34 21  4  2302
KSPGMRESOrthog        50 1.0 5.8824e-02 1.0 1.23e+08 1.0 0.0e+00 0.0e+00 5.0e+01  2 22  0  0  9   2 22  0  0  9  4170
PCGAMGGraph_AGG        5 1.0 3.8613e-01 1.0 1.05e+07 1.0 3.0e+01 1.4e+03 4.5e+01 14  2  5  2  8  14  2  5  2  8    54
PCGAMGCoarse_AGG       5 1.0 8.1589e-01 1.0 0.00e+00 0.0 8.6e+01 1.1e+04 3.7e+01 30  0 13 37  6  30  0 13 37  7     0
PCGAMGProl_AGG         5 1.0 1.2226e-01 1.0 0.00e+00 0.0 4.8e+01 2.3e+03 7.9e+01  4  0  8  5 14   4  0  8  5 14     0
PCGAMGPOpt_AGG         5 1.0 3.6809e-01 1.0 2.73e+08 1.0 1.6e+02 2.4e+03 2.0e+02 14 49 25 16 36  14 49 25 16 37  1480
GAMG: createProl       5 1.0 1.6962e+00 1.0 2.83e+08 1.0 3.2e+02 4.4e+03 3.7e+02 62 51 51 60 64  62 51 51 60 66   334
  Graph               10 1.0 3.8577e-01 1.0 1.05e+07 1.0 3.0e+01 1.4e+03 4.5e+01 14  2  5  2  8  14  2  5  2  8    54
  MIS/Agg              5 1.0 3.4274e-02 1.0 0.00e+00 0.0 5.8e+01 1.9e+03 1.4e+01  1  0  9  5  2   1  0  9  5  3     0
  SA: col data         5 1.0 2.4279e-02 1.0 0.00e+00 0.0 3.6e+01 2.6e+03 3.4e+01  1  0  6  4  6   1  0  6  4  6     0
  SA: frmProl0         5 1.0 9.1539e-02 1.0 0.00e+00 0.0 1.2e+01 1.3e+03 2.5e+01  3  0  2  1  4   3  0  2  1  5     0
  SA: smooth           5 1.0 1.5983e-01 1.0 1.40e+07 1.0 4.0e+01 2.1e+03 6.5e+01  6  3  6  4 11   6  3  6  4 12   175
GAMG: partLevel        5 1.0 2.7117e-01 1.0 4.51e+07 1.0 9.4e+01 4.0e+03 1.1e+02 10  8 15 16 20  10  8 15 16 20   332
  repartition          1 1.0 5.0492e-04 1.0 0.00e+00 0.0 1.4e+01 3.3e+01 5.3e+01  0  0  2  0  9   0  0  2  0 10     0
  Invert-Sort          1 1.0 1.1274e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  1   0  0  0  0  1     0
  Move A               1 1.0 1.8661e-04 1.0 0.00e+00 0.0 5.0e+00 7.0e+01 1.5e+01  0  0  1  0  3   0  0  1  0  3     0
  Move P               1 1.0 1.2168e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.6e+01  0  0  0  0  3   0  0  0  0  3     0
PCGAMG Squ l00         1 1.0 6.6188e-01 1.0 0.00e+00 0.0 1.0e+01 6.2e+04 1.2e+01 24  0  2 26  2  24  0  2 26  2     0
PCGAMG Gal l00         1 1.0 2.2394e-01 1.0 3.54e+07 1.0 1.6e+01 8.7e+03 1.2e+01  8  6  3  6  2   8  6  3  6  2   316
PCGAMG Opt l00         1 1.0 9.5932e-02 1.0 9.21e+06 1.0 8.0e+00 6.1e+03 1.0e+01  4  2  1  2  2   4  2  1  2  2   192
PCGAMG Gal l01         1 1.0 3.6699e-02 1.0 7.11e+06 1.0 1.6e+01 8.0e+03 1.2e+01  1  1  3  5  2   1  1  3  5  2   385
PCGAMG Opt l01         1 1.0 9.6930e-03 1.0 9.13e+05 1.0 8.0e+00 2.3e+03 1.0e+01  0  0  1  1  2   0  0  1  1  2   188
PCGAMG Gal l02         1 1.0 8.7276e-03 1.0 2.33e+06 1.0 1.6e+01 5.0e+03 1.2e+01  0  0  3  3  2   0  0  3  3  2   534
PCGAMG Opt l02         1 1.0 2.3489e-03 1.0 3.07e+05 1.0 8.0e+00 1.5e+03 1.0e+01  0  0  1  1  2   0  0  1  1  2   259
PCGAMG Gal l03         1 1.0 1.0658e-03 1.0 2.61e+05 1.1 1.6e+01 1.7e+03 1.2e+01  0  0  3  1  2   0  0  3  1  2   478
PCGAMG Opt l03         1 1.0 3.5707e-04 1.0 3.46e+04 1.0 8.0e+00 5.7e+02 1.0e+01  0  0  1  0  2   0  0  1  0  2   193
PCGAMG Gal l04         1 1.0 2.2163e-04 1.0 1.02e+04 1.4 1.6e+01 1.9e+02 1.2e+01  0  0  3  0  2   0  0  3  0  2    80
PCGAMG Opt l04         1 1.0 1.0728e-04 1.0 2.43e+03 1.2 8.0e+00 1.6e+02 1.0e+01  0  0  1  0  2   0  0  1  0  2    42
PCSetUp                2 1.0 1.9752e+00 1.0 3.28e+08 1.0 4.2e+02 4.3e+03 5.1e+02 73 59 66 75 88  73 59 66 75 92   332
PCSetUpOnBlocks        2 1.0 1.5289e-04 1.2 1.29e+02 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     1
PCApply                2 1.0 1.7902e-01 1.0 2.06e+08 1.0 2.1e+02 2.3e+03 1.5e+01  7 37 34 20  3   7 37 34 20  3  2300
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

           Container    10             10         5840     0.
              Matrix   141            141    611009048     0.
      Matrix Coarsen     5              5         3160     0.
              Vector   220            220    158846504     0.
           Index Set    70             70        88748     0.
   Star Forest Graph    45             45        53640     0.
       Krylov Solver    13             13       166952     0.
      Preconditioner    13             13        13776     0.
              Viewer     3              2         1696     0.
         PetscRandom    10             10         6700     0.
    Distributed Mesh     5              5        25280     0.
     Discrete System     5              5         4520     0.
           Weak Form     5              5         3120     0.
========================================================================================================================
Average time to get PetscTime(): 2.16e-08
Average time for MPI_Barrier(): 3.11e-07
Average time for zero size MPI_Send(): 6.87e-07
#PETSc Option Table entries:
-epsilon 100.0
-ksp_atol 1e-9
-ksp_monitor
-ksp_rtol 1e-12
-ksp_type cg
-ksp_view
-log_trace
-log_view
-mg_levels_ksp_max_it 4
-mg_levels_ksp_type richardson
-mg_levels_pc_type jacobi
-ne 1011
-pc_gamg_type agg
-pc_type gamg
-theta 18.0
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-trilinos-configure-arguments="-DTPL_ENABLE_Boost=OFF -DTPL_ENABLE_Matio=OFF" --download-hypre --download-netcdf --download-hdf5 --download-zlib --download-make --download-ml --with-debugging=0 COPTFLAGS="-g -O3" CXXOPTFLAGS="-g -O3" FOPTFLAGS="-g -O3" CUDAOPTFLAGS=-O3
-----------------------------------------
Libraries compiled on 2022-02-23 22:16:38 on grace 
Machine characteristics: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-centos-7.9.2009-Core
Using PETSc directory: /home/fabiod/anisotropy/petsc
Using PETSc arch: arch-linux-c-opt
-----------------------------------------

Using C compiler: mpicc  -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g -O3   
Using Fortran compiler: mpif90  -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g -O3     
-----------------------------------------

Using include paths: -I/home/fabiod/anisotropy/petsc/include -I/home/fabiod/anisotropy/petsc/arch-linux-c-opt/include
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/home/fabiod/anisotropy/petsc/arch-linux-c-opt/lib -L/home/fabiod/anisotropy/petsc/arch-linux-c-opt/lib -lpetsc -Wl,-rpath,/home/fabiod/anisotropy/petsc/arch-linux-c-opt/lib -L/home/fabiod/anisotropy/petsc/arch-linux-c-opt/lib -Wl,-rpath,/opt/openmpi/lib -L/opt/openmpi/lib -Wl,-rpath,/opt/gcc61/lib/gcc/x86_64-pc-linux-gnu/6.1.0 -L/opt/gcc61/lib/gcc/x86_64-pc-linux-gnu/6.1.0 -Wl,-rpath,/opt/gcc61/lib64 -L/opt/gcc61/lib64 -Wl,-rpath,/opt/gcc61/lib -L/opt/gcc61/lib -lHYPRE -lml -lopenblas -lnetcdf -lhdf5_hl -lhdf5 -lm -lz -lX11 -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lquadmath -lstdc++ -ldl
-----------------------------------------


