MPI startup(): "ON" shm transport is not supported. "bdw_avx2" shm transport will be used.
A size: 508257, 508257
[0] PCSetUp_GAMG(): (null): level 0) N=1016514, n data rows=3, n data cols=3, nnz/row (ave)=103, block size 3, np=2
[0] PCGAMGCreateGraph_AGG(): 59.9888% nnz after filtering, with threshold 0., 44.6631 nnz ave. (N=338838, max row size 2806
[0] PCGAMGCreateGraph_AGG(): Filtering left 59.9888 % edges in graph (6.810102e+07 4.539222e+06)
[0] PCGAMGSquareGraph_GAMG(): (null): Square Graph on level 1
[0] fixAggregatesWithSquare(): isMPI = yes
[0] PCGAMGProlongator_AGG(): (null): New grid 3761 nodes
[0] PCGAMGOptProlongator_AGG(): (null): Smooth P0: max eigen=3.719739e+00 min=9.648633e-02 PC=jacobi
[0] PCGAMGOptProlongator_AGG(): (null): Smooth P0: level 0, cache spectra 0.0964863 3.71974
[0] PCGAMGCreateLevel_GAMG(): (null): Coarse grid reduction from 2 to 2 active processes
[0] PCSetUp_GAMG(): (null): 1) N=11283, n data cols=3, nnz/row (ave)=1058, 2 active pes
[0] PCGAMGCreateGraph_AGG(): 100.% nnz after filtering, with threshold 0., 338.274 nnz ave. (N=3761, max row size 796
[0] PCGAMGCreateGraph_AGG(): Filtering left 100. % edges in graph (3.586383e+06 3.984870e+05)
[0] PCGAMGProlongator_AGG(): (null): New grid 25 nodes
[0] PCGAMGOptProlongator_AGG(): (null): Smooth P0: max eigen=1.665382e+00 min=5.630572e-02 PC=jacobi
[0] PCGAMGOptProlongator_AGG(): (null): Smooth P0: level 1, cache spectra 0.0563057 1.66538
[0] PCGAMGCreateLevel_GAMG(): (null): Force coarsest grid reduction to 1 active processes
[0] PCGAMGCreateLevel_GAMG(): (null): Number of equations (loc) 6 with simple aggregation
[0] PCSetUp_GAMG(): (null): 2) N=75, n data cols=3, nnz/row (ave)=60, 1 active pes
[0] PCSetUp_GAMG(): (null): 3 levels, operator complexity = 1.11374
[0] PCSetUp_GAMG(): (null): PCSetUp_GAMG: call KSPChebyshevSetEigenvalues on level 1 (N=11283) with emax = 1.66538 emin = 0.0563057
[0] PCSetUp_GAMG(): (null): PCSetUp_GAMG: call KSPChebyshevSetEigenvalues on level 0 (N=1016514) with emax = 3.71974 emin = 0.0964863
[0] PCSetUp_MG(): Using outer operators to define finest grid operator because PCMGGetSmoother(pc,nlevels-1,&ksp);KSPSetOperators(ksp,...); was not called.
KSP Object: 2 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 2 MPI processes
  type: gamg
    type is MULTIPLICATIVE, levels=3 cycles=v
      Cycles per PCApply=1
      Using externally compute Galerkin coarse grid matrices
      GAMG specific options
        Threshold for dropping small values in graph on each level = 0. 0. 0.
        Threshold scaling factor for each level not specified = 1.
        AGG specific options
          Number of levels of aggressive coarsening 1
          Square graph aggressive coarsening
          Number smoothing steps 1
        Complexity:    grid = 1.01117    operator = 1.11374
  Coarse grid solver -- level 0 -------------------------------
    KSP Object: (mg_coarse_) 2 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 2 MPI processes
      type: bjacobi
        number of blocks = 2
        Local solver information for first block is in the following KSP and PC objects on rank 0:
        Use -mg_coarse_ksp_view ::ascii_info_detail to display information for all blocks
      KSP Object: (mg_coarse_sub_) 1 MPI process
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_sub_) 1 MPI process
        type: lu
          out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
          matrix ordering: nd
          factor fill ratio given 5., needed 1.0519
            Factored matrix follows:
              Mat Object: (mg_coarse_sub_) 1 MPI process
                type: seqaij
                rows=75, cols=75, bs=3
                package used to perform factorization: petsc
                total: nonzeros=4743, allocated nonzeros=4743
                  using I-node routines: found 20 nodes, limit used is 5
        linear system matrix = precond matrix:
        Mat Object: (mg_coarse_sub_) 1 MPI process
          type: seqaij
          rows=75, cols=75, bs=3
          total: nonzeros=4509, allocated nonzeros=4509
          total number of mallocs used during MatSetValues calls=0
            using I-node routines: found 23 nodes, limit used is 5
      linear system matrix = precond matrix:
      Mat Object: 2 MPI processes
        type: mpiaij
        rows=75, cols=75, bs=3
        total: nonzeros=4509, allocated nonzeros=4509
        total number of mallocs used during MatSetValues calls=0
          using I-node (on process 0) routines: found 23 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 2 MPI processes
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.166538, max 1.83192
        eigenvalues provided (min 0.0563057, max 1.66538) with transform: [0. 0.1; 0. 1.1]
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 2 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 2 MPI processes
        type: mpiaij
        rows=11283, cols=11283, bs=3
        total: nonzeros=11938689, allocated nonzeros=11938689
        total number of mallocs used during MatSetValues calls=0
          using nonscalable MatPtAP() implementation
          using I-node (on process 0) routines: found 1158 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (mg_levels_2_) 2 MPI processes
      type: chebyshev
        Chebyshev polynomial of first kind
        eigenvalue targets used: min 0.371974, max 4.09171
        eigenvalues provided (min 0.0964863, max 3.71974) with transform: [0. 0.1; 0. 1.1]
      maximum iterations=2, nonzero initial guess
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_levels_2_) 2 MPI processes
      type: jacobi
        type DIAGONAL
      linear system matrix = precond matrix:
      Mat Object: 2 MPI processes
        type: mpiaij
        rows=1016514, cols=1016514, bs=3
        total: nonzeros=105006726, allocated nonzeros=105006726
        total number of mallocs used during MatSetValues calls=0
          using I-node (on process 0) routines: found 169419 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 2 MPI processes
    type: mpiaij
    rows=1016514, cols=1016514, bs=3
    total: nonzeros=105006726, allocated nonzeros=105006726
    total number of mallocs used during MatSetValues calls=0
      using I-node (on process 0) routines: found 169419 nodes, limit used is 5
KSP type: cg
Number of iterations = 30
Residual norm 8.85942e-08
PetscMemoryGetMaximumUsage 1.51822e+10
PetscMallocGetMaximumUsage 4.54501e+10
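For context, the invocation that produced this log can be reconstructed from the "#PETSc Option Table entries" reported later in the output. A sketch (the `mpirun` launcher name and argument grouping are assumptions for this 2-process Intel MPI build; the executable and data file names are taken from the log):

```sh
# Hypothetical reconstruction of the run; ./ex1, matrix.dat, and vector.dat
# come from the log itself. The launcher name (mpirun) is an assumption.
mpirun -n 2 ./ex1 \
  -A_name matrix.dat -b_name vector.dat \
  -ksp_type cg -pc_type gamg \
  -pc_gamg_threshold 0.0 -pc_gamg_coarse_eq_limit 1000 \
  -pc_gamg_aggressive_square_graph true \
  -info :pc -ksp_view -log_view -log_view_memory -malloc_view
```

The two options marked "(source: file)" in the option table, -matload_block_size 3 and -vecload_block_size 3, would come from a PETSc options file (e.g. ~/.petscrc) rather than the command line.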
****************************************************************************************************************************************************************
***                      WIDEN YOUR WINDOW TO 160 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document                                           ***
****************************************************************************************************************************************************************

------------------------------------------------------------------ PETSc Performance Summary: ------------------------------------------------------------------

./ex1 on a real-double-int32 named cdcu22apatel01 with 2 processor(s), by Unknown on Thu Apr 18 21:49:55 2024
Using Petsc Development GIT revision: unknown  GIT Date: unknown

                         Max       Max/Min     Avg       Total
Time (sec):           1.196e+02     1.000   1.196e+02
Objects:              0.000e+00     0.000   0.000e+00
Flops:                3.274e+10     1.433   2.779e+10  5.559e+10
Flops/sec:            2.737e+08     1.433   2.324e+08  4.648e+08
Memory (bytes):       3.239e+10     2.479   2.273e+10  4.545e+10
MPI Msg Count:        5.210e+02     1.006   5.195e+02  1.039e+03
MPI Msg Len (bytes):  2.577e+09     1.000   4.960e+06  5.153e+09
MPI Reductions:       5.040e+02     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 2.6162e-03   0.0%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  5.000e+00   1.0%
 1:     Load system: 1.7404e+00   1.5%  1.0165e+06   0.0%  1.200e+01   1.2%  3.762e+07        8.8%  3.800e+01   7.5%
 2:        KSPSetUp: 1.0793e+02  90.3%  1.7381e+10  31.3%  2.790e+02  26.9%  1.400e+07       75.8%  3.110e+02  61.7%
 3:        KSPSolve: 9.7747e+00   8.2%  3.7994e+10  68.3%  7.460e+02  71.8%  1.059e+06       15.3%  1.310e+02  26.0%
 4:         Cleanup: 1.4218e-01   0.1%  2.1306e+08   0.4%  2.000e+00   0.2%  2.514e+06        0.1%  1.000e+00   0.2%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
   Memory usage is summed over all MPI processes, it is given in mega-bytes
   Malloc Mbytes: Memory allocated and kept during event (sum over all calls to event). May be negative
   EMalloc Mbytes: extra memory allocated during event and then freed (maximum over all calls to events). Never negative
   MMalloc Mbytes: Increase in high water mark of allocated memory (sum over all calls to event). Never negative
   RMI Mbytes: Increase in resident memory (sum over all calls to event)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total    Malloc  EMalloc  MMalloc    RMI
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R  Mflop/s  Mbytes  Mbytes   Mbytes   Mbytes
-----------------------------------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

PetscBarrier           1 1.0  1.3402e-05 1.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0   1 0 0 0 0      0       0       0        0        0

--- Event Stage 1: Load system

BuildTwoSided          1 1.0  7.5835e-03 255.3  0.00e+00 0.0  2.0e+00 4.0e+00 1.0e+00   0 0 0 0 0   0 0 17 0 3      0       0       0        0        0
MatAssemblyBegin       1 1.0  3.0979e-05 1.1  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0   0 0 0 0 0      0       0       0        0        0
MatAssemblyEnd         1 1.0  6.4604e-01 1.7  0.00e+00 0.0  4.0e+00 6.3e+05 5.0e+00   0 0 0 0 1   30 0 33 1 13   0       33      0        25       41
MatLoad                1 1.0  1.7329e+00 1.0  0.00e+00 0.0  1.0e+01 4.5e+07 2.2e+01   1 0 1 9 4   99 0 83 99 58  0       1330    1264     2594     1343
VecScale               1 1.0  1.3843e-03 1.1  5.08e+05 1.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0   0 100 0 0 0    734     0       0        0        4
VecLoad                1 1.0  3.0483e-03 1.0  0.00e+00 0.0  2.0e+00 2.0e+06 1.2e+01   0 0 0 0 2   0 0 17 1 32    0       0       4        0        4
SFSetGraph             1 1.0  3.5619e-03 4.3  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0   0 0 0 0 0      0       -3      3        0        0
SFSetUp                1 1.0  9.3200e-03 2.3  0.00e+00 0.0  4.0e+00 6.3e+05 1.0e+00   0 0 0 0 0   0 0 33 1 3     0       8       0        2        4

--- Event Stage 2: KSPSetUp

BuildTwoSided         72 1.0  2.3448e+01 4.4  0.00e+00 0.0  5.8e+01 4.0e+00 7.2e+01   12 0 6 0 14  13 0 21 0 23  0       0       0        0        0
BuildTwoSidedF        46 1.0  2.4908e+01 4.6  0.00e+00 0.0  3.0e+01 7.0e+07 4.6e+01   13 0 3 41 9  14 0 11 54 15 0       6389    0        6063     6378
MatMult               20 1.0  6.8312e-01 1.2  1.43e+09 1.6  4.0e+01 1.3e+06 0.0e+00   1 4 4 1 0    1 13 14 1 0   3409    1       0        0        0
MatScale               6 1.0  1.9525e-02 1.1  3.08e+07 1.3  4.0e+00 4.3e+05 0.0e+00   0 0 0 0 0    0 0 1 0 0     2799    0       0        0        0
MatAssemblyBegin      47 1.0  2.5276e+01 1.1  0.00e+00 0.0  3.0e+01 7.0e+07 2.2e+01   20 0 3 41 4  23 0 11 54 7  0       12778   7616     19950    12815
MatAssemblyEnd        47 1.0  2.1347e+01 1.1  2.86e+06 6.9  6.6e+01 6.4e+04 1.0e+02   17 0 6 0 20  19 0 24 0 32  0       -23509  22900    0        -23472
MatSetValues     1761290 1.2  8.0371e+00 1.7  9.02e+06 1.5  0.0e+00 0.0e+00 0.0e+00   5 0 0 0 0    6 0 0 0 0     2       10844   0        10565    10812
MatGetRow        3440063 1.0  7.8771e+00 1.2  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   6 0 0 0 0    7 0 0 0 0     0       0       0        0        1
MatCreateSubMat        2 1.0  3.7055e-03 1.0  0.00e+00 0.0  7.0e+00 7.3e+03 3.0e+01   0 0 1 0 6    0 0 3 0 10    0       6       0        0        0
MatCoarsen             2 1.0  2.3660e-01 1.2  0.00e+00 0.0  3.7e+01 1.3e+05 3.6e+01   0 0 4 0 7    0 0 13 0 12   0       25      16       0        0
MatZeroEntries         2 1.0  3.7308e-03 2.3  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
MatAXPY                4 1.0  1.0333e+01 1.0  9.02e+06 1.5  8.0e+00 1.1e+05 1.4e+01   9 0 1 0 3    10 0 3 0 5    1       0       163      0        20
MatTranspose           9 1.0  1.1636e+00 1.0  0.00e+00 0.0  3.5e+01 2.1e+06 2.7e+01   1 0 3 1 5    1 0 13 2 9    0       444     218      0        330
MatMatMultSym          6 1.0  2.0108e+00 1.0  0.00e+00 0.0  2.0e+01 9.8e+05 2.0e+01   2 0 2 0 4    2 0 7 1 6     0       429     1956     0        246
MatMatMultNum          6 1.0  1.5116e+00 1.0  4.09e+09 1.1  4.0e+00 2.8e+06 2.0e+00   1 14 0 0 0   1 46 1 0 1    5236    122     0        0        121
MatPtAPSymbolic        2 1.0  5.0937e+00 1.0  0.00e+00 0.0  2.4e+01 6.2e+06 1.4e+01   4 0 2 3 3    5 0 9 4 5     0       3615    1847     0        3292
MatPtAPNumeric         2 1.0  3.6189e+00 1.0  7.42e+09 1.1  1.6e+01 8.8e+06 1.2e+01   3 25 2 3 2   3 81 6 4 4    3906    123     614      0        185
MatTrnMatMultSym       1 1.0  6.2833e+01 1.0  0.00e+00 0.0  1.4e+01 2.4e+08 1.3e+01   53 0 1 66 3  58 0 5 88 4   0       11842   32051    42856    11783
MatGetLocalMat         7 1.0  7.0262e-02 1.1  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       333     1        0        216
MatGetBrAoCol          6 1.0  1.3353e-01 1.2  0.00e+00 0.0  2.8e+01 5.1e+06 0.0e+00   0 0 3 3 0    0 0 10 4 0    0       237     36       0        2
VecMDot               20 1.0  1.4055e-01 11.7  5.68e+07 1.0  0.0e+00 0.0e+00 2.0e+01   0 0 0 0 4    0 1 0 0 6     804     0       0        0        0
VecNorm               22 1.0  7.8146e-03 2.3  1.14e+07 1.0  0.0e+00 0.0e+00 2.2e+01   0 0 0 0 4    0 0 0 0 7     2893    0       0        0        0
VecScale              22 1.0  2.8333e-03 1.9  5.68e+06 1.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     3990    0       0        0        0
VecCopy                2 1.0  4.7751e-04 2.2  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
VecSet                 2 1.0  3.0599e-04 1.5  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
VecAXPY                2 1.0  3.6930e-04 1.3  1.03e+06 1.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     5566    0       0        0        0
VecMAXPY              22 1.0  1.4176e-02 1.0  6.71e+07 1.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 1 0 0 0     9426    0       0        0        0
VecAssemblyBegin      25 1.0  9.0629e-02 24.5  0.00e+00 0.0  0.0e+00 0.0e+00 2.4e+01   0 0 0 0 5    0 0 0 0 8     0       1       0        0        0
VecAssemblyEnd        25 1.0  7.7121e-05 1.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       -1      0        0        0
VecPointwiseMult      22 1.0  5.2233e-03 1.1  5.68e+06 1.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     2164    0       0        0        0
VecSetValues     2397903 1.0  4.7190e+00 1.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   4 0 0 0 0    4 0 0 0 0     0       0       0        0        0
VecScatterBegin       48 1.0  1.0439e-02 1.4  0.00e+00 0.0  8.5e+01 9.4e+05 0.0e+00   0 0 8 2 0    0 0 30 2 0    0       4       0        0        14
VecScatterEnd         48 1.0  1.6486e-01 48.7  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        5
VecNormalize          22 1.0  1.1056e-02 2.1  1.70e+07 1.0  0.0e+00 0.0e+00 2.2e+01   0 0 0 0 4    0 0 0 0 7     3068    0       0        0        0
SFSetGraph            26 1.0  5.2237e-03 2.9  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       -4      1        0        0
SFSetUp               26 1.0  5.5844e-02 3.7  0.00e+00 0.0  8.6e+01 7.0e+04 2.6e+01   0 0 8 0 5    0 0 31 0 8    0       18      0        0        6
SFBcastBegin           8 1.0  5.7436e-04 1.3  0.00e+00 0.0  1.6e+01 2.3e+05 0.0e+00   0 0 2 0 0    0 0 6 0 0     0       1       0        0        0
SFBcastEnd             8 1.0  8.3393e-02 351.7  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
SFReduceBegin          3 1.0  2.5286e-04 2.0  0.00e+00 0.0  5.0e+00 1.7e+05 0.0e+00   0 0 0 0 0    0 0 2 0 0     0       1       0        0        1
SFReduceEnd            3 1.0  1.6892e-04 2.8  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
SFPack                59 1.0  3.1681e-03 1.1  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
SFUnpack              59 1.0  3.5122e-04 1.6  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
KSPSetUp               6 1.0  7.3763e-03 1.2  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       90      8        0        29
KSPGMRESOrthog        20 1.0  1.5290e-01 6.3  1.14e+08 1.0  0.0e+00 0.0e+00 2.0e+01   0 0 0 0 4    0 1 0 0 6     1479    0       0        0        0
PCSetUp_GAMG+          1 1.0  1.0793e+02 1.0  9.36e+09 1.2  2.8e+02 1.4e+07 3.1e+02   90 31 27 76 61  100 100 100 100 99  161  3977  40102  42856  4071
PCGAMGCreateG          2 1.0  1.6696e+01 1.0  2.39e+07 1.6  5.6e+01 1.4e+06 6.4e+01   14 0 5 2 13   15 0 20 2 21  2       175     381      0        288
GAMG Coarsen           4 1.0  6.6443e+01 1.0  0.00e+00 0.0  6.1e+01 5.6e+07 5.4e+01   56 0 6 67 11  62 0 22 88 17 0       8578    35334    42856    8789
GAMG MIS/Agg           2 1.0  6.3080e+01 1.0  0.00e+00 0.0  5.1e+01 6.7e+07 5.0e+01   53 0 5 66 10  58 0 18 88 16 0       8731    35178    42856    8851
PCGAMGProl             2 1.0  6.9764e+00 1.0  0.00e+00 0.0  4.2e+01 5.8e+05 4.0e+01   6 0 4 0 8    6 0 15 1 13   0       96      89       0        -7
GAMG Prol-col          2 1.0  6.7786e+00 1.0  0.00e+00 0.0  3.0e+01 6.1e+05 2.0e+01   6 0 3 0 4    6 0 11 0 6    0       43      8        0        14
GAMG Prol-lift         2 1.0  1.8799e-01 1.0  0.00e+00 0.0  1.2e+01 5.0e+05 1.2e+01   0 0 1 0 2    0 0 4 0 4     0       -11     46       0        -21
PCGAMGOptProl          2 1.0  8.9854e+00 1.0  1.92e+09 1.5  6.4e+01 1.3e+06 6.6e+01   8 6 6 2 13   8 18 23 2 21  357     98      918      0        190
GAMG smooth            2 1.0  8.2316e+00 1.0  3.42e+08 1.4  2.4e+01 1.3e+06 2.4e+01   7 1 2 1 5    8 3 9 1 8     71      97      918      0        187
PCGAMGCreateL          2 1.0  8.7151e+00 1.0  7.42e+09 1.1  5.6e+01 5.1e+06 8.1e+01   7 25 5 6 16  8 81 20 7 26  1622    3725    1724     0        3477
GAMG PtAP              2 1.0  8.7103e+00 1.0  7.42e+09 1.1  4.0e+01 7.2e+06 2.6e+01   7 25 4 6 5   8 81 14 7 8   1623    3738    1724     0        3477
GAMG Reduce            1 1.0  4.8170e-03 1.0  0.00e+00 0.0  1.6e+01 3.5e+03 5.5e+01   0 0 2 0 11   0 0 6 0 18    0       -12     13       0        0
PCGAMG Squ l00         1 1.0  6.2833e+01 1.0  0.00e+00 0.0  1.4e+01 2.4e+08 1.3e+01   53 0 1 66 3  58 0 5 88 4   0       11842   32051    42856    11783
PCGAMG Gal l00         1 1.0  8.4045e+00 1.0  7.27e+09 1.1  2.0e+01 1.4e+07 1.3e+01   7 25 2 6 3   8 79 7 7 4    1626    3722    1724     0        3476
PCGAMG Opt l00         1 1.0  8.1283e-01 1.0  3.05e+08 1.7  1.2e+01 2.5e+06 1.1e+01   1 1 1 1 2    1 3 4 1 4     593     362     651      0        186
PCGAMG Gal l01         1 1.0  3.0583e-01 1.0  3.25e+08 2.2  2.0e+01 1.3e+05 1.3e+01   0 1 2 0 3    0 3 7 0 4     1543    16      97       0        1
PCGAMG Opt l01         1 1.0  7.8367e-02 1.0  5.01e+07 2.3  1.2e+01 6.0e+04 1.1e+01   0 0 1 0 2    0 0 4 0 4     914     6       93       0        0
PCSetUp                1 1.0  1.0793e+02 1.0  9.36e+09 1.2  2.8e+02 1.4e+07 3.1e+02   90 31 27 76 61  100 100 100 100 99  161  3977  40102  42856  4071

--- Event Stage 3: KSPSolve

MatMult              278 1.0  9.1579e+00 1.1  2.18e+10 1.6  5.6e+02 1.4e+06 0.0e+00   7 63 54 15 0  91 93 75 99 0  3838   0       0        0        0
MatMultAdd            62 1.0  2.9618e-01 1.8  4.60e+08 1.1  9.3e+01 2.3e+04 0.0e+00   0 2 9 0 0    2 2 12 0 0    3000    0       0        0        0
MatMultTranspose      62 1.0  4.5168e-01 2.3  4.60e+08 1.1  9.3e+01 2.3e+04 0.0e+00   0 2 9 0 0    3 2 12 0 0    1968    0       0        0        0
MatSolve              31 0.0  2.5349e-04 0.0  2.92e+05 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     1151    0       0        0        0
MatLUFactorSym         1 1.0  5.7352e-05 6.5  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
MatLUFactorNum         1 1.0  4.9729e-05 16.9  1.82e+05 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     3667    0       0        0        0
MatResidual           62 1.0  1.8908e+00 1.0  4.44e+09 1.6  1.2e+02 1.3e+06 0.0e+00   2 13 12 3 0  19 19 17 20 0 3835    0       0        0        0
MatGetRowIJ            1 0.0  8.3814e-06 0.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
MatGetOrdering         1 0.0  4.8158e-05 0.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
MatView                6 1.5  9.2030e-05 1.1  0.00e+00 0.0  0.0e+00 0.0e+00 4.0e+00   0 0 0 0 1    0 0 0 0 3     0       0       0        0        0
VecTDot               62 1.0  7.1619e-02 3.6  6.30e+07 1.0  0.0e+00 0.0e+00 6.2e+01   0 0 0 0 12   0 0 0 0 47    1760    0       0        0        0
VecNorm               31 1.0  5.6153e-02 7.9  3.15e+07 1.0  0.0e+00 0.0e+00 3.1e+01   0 0 0 0 6    0 0 0 0 24    1122    0       0        0        0
VecCopy              188 1.0  1.7059e-02 1.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
VecSet               215 1.0  6.4117e-03 1.3  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
VecAXPY               60 1.0  2.3550e-02 1.0  6.10e+07 1.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     5180    0       0        0        0
VecAYPX              401 1.0  7.4757e-02 1.0  1.57e+08 1.0  0.0e+00 0.0e+00 0.0e+00   0 1 0 0 0    1 1 0 0 0     4198    0       0        0        0
VecAXPBYCZ           124 1.0  2.6510e-02 1.0  1.60e+08 1.0  0.0e+00 0.0e+00 0.0e+00   0 1 0 0 0    0 1 0 0 0     12019   0       0        0        0
VecPointwiseMult     248 1.0  5.6928e-02 1.0  6.40e+07 1.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    1 0 0 0 0     2239    0       0        0        0
VecScatterBegin      402 1.0  3.3588e-02 1.2  0.00e+00 0.0  7.4e+02 1.1e+06 0.0e+00   0 0 71 15 0  0 0 99 100 0  0       0       0        0        0
VecScatterEnd        402 1.0  3.9628e+00 6.1  2.40e+05 9.6  0.0e+00 0.0e+00 0.0e+00   2 0 0 0 0    24 0 0 0 0    0       0       0        0        0
SFPack               402 1.0  1.6715e-02 18.9  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
SFUnpack             402 1.0  3.1857e-03 1.4  2.40e+05 9.6  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     83      0       0        0        0
KSPSetUp               1 1.0  2.6737e-06 1.2  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
KSPSolve               1 1.0  9.7745e+00 1.0  2.32e+10 1.6  7.4e+02 1.1e+06 1.2e+02   8 68 71 15 25  100 100 99 100 95  3887  9  0  0  1
PCSetUp                1 1.0  2.0539e-04 2.1  1.82e+05 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     888     0       0        0        0
PCSetUpOnBlocks       31 1.0  3.2662e-04 1.7  1.82e+05 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     558     0       0        0        0
PCApply               31 1.0  8.0865e+00 1.0  1.90e+10 1.5  6.8e+02 9.4e+05 3.1e+01   7 56 66 12 6  82 83 91 81 24  3877  8  0  0  1

--- Event Stage 4: Cleanup

MatMult                1 1.0  5.4320e-02 1.0  1.36e+08 1.9  2.0e+00 2.5e+06 0.0e+00   0 0 0 0 0    38 98 100 100 0  3848  0  0  0  0
VecNorm                1 1.0  1.8212e-03 14.1  1.02e+06 1.0  0.0e+00 0.0e+00 1.0e+00   0 0 0 0 0    1 1 0 0 100   1116    0       0        0        0
VecAXPY                1 1.0  3.2528e-04 1.1  1.02e+06 1.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 1 0 0 0     6250    0       0        0        0
VecScatterBegin        1 1.0  3.0507e-04 2.3  0.00e+00 0.0  2.0e+00 2.5e+06 0.0e+00   0 0 0 0 0    0 0 100 100 0 0       0       0        0        0
VecScatterEnd          1 1.0  2.7569e-02 140.7  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    10 0 0 0 0    0       0       0        0        0
SFPack                 1 1.0  1.5096e-04 65.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
SFUnpack               1 1.0  1.2529e-05 1.4  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0 0 0 0 0    0 0 0 0 0     0       0       0        0        0
-----------------------------------------------------------------------------------------------------------------------------------------------------

Object Type          Creations   Destructions. Reports information only for process 0.

--- Event Stage 0: Main Stage

              Viewer     0              1

--- Event Stage 1: Load system

              Viewer     2              2
              Matrix     3              0
              Vector     4              1
           Index Set     2              2
   Star Forest Graph     1              0

--- Event Stage 2: KSPSetUp

           Container    15             11
              Viewer     1              0
              Matrix    87             66
      Matrix Coarsen     2              2
              Vector   131            110
           Index Set    55             51
   Star Forest Graph    30             26
       Krylov Solver     7              2
      Preconditioner     7              2
         PetscRandom     2              2
    Distributed Mesh     2              2
     Discrete System     2              2
           Weak Form     2              2

--- Event Stage 3: KSPSolve

              Viewer     1              1
              Matrix     2              1
              Vector     2              0
           Index Set     5              2
   Star Forest Graph     4              0
    Distributed Mesh     2              0
     Discrete System     2              0
           Weak Form     2              0

--- Event Stage 4: Cleanup

           Container     0              4
              Matrix     0             25
              Vector     1             27
           Index Set     0              7
   Star Forest Graph     0              9
       Krylov Solver     0              5
      Preconditioner     0              5
    Distributed Mesh     0              2
     Discrete System     0              2
           Weak Form     0              2
========================================================================================================================
Average time to get PetscTime(): 1.22003e-08
Average time for MPI_Barrier(): 2.59141e-07
Average time for zero size MPI_Send(): 1.85153e-06
#PETSc Option Table entries:
-A_name matrix.dat # (source: command line)
-b_name vector.dat # (source: command line)
-info :pc # (source: command line)
-ksp_type cg # (source: command line)
-ksp_view # (source: command line)
-log_view # (source: command line)
-log_view_memory # (source: command line)
-malloc_view # (source: command line)
-matload_block_size 3 # (source: file)
-pc_gamg_aggressive_square_graph true # (source: command line)
-pc_gamg_coarse_eq_limit 1000 # (source: command line)
-pc_gamg_threshold 0.0 # (source: command line)
-pc_type gamg # (source: command line)
-vecload_block_size 3 # (source: file)
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: COPTFLAGS=-O3 CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=real-double-int32
--with-fc=mpiifort --with-cc=mpicc --with-cxx=mpicxx --with-blaslapack-dir=/opt/intel/oneapi/mkl/2022.1.0 --with-debugging=no --with-scalar-type=real --with-precision=double --with-64-bit-indices=no --with-avx512-kernels --with-cxx-dialect=C++11 --download-eigen --download-hdf5 --download-hypre --download-metis --download-mumps --download-parmetis --download-scalapack --download-slepc
-----------------------------------------
Libraries compiled on 2024-04-15 17:25:13 on buildkitsandbox
Machine characteristics: Linux-6.5.0-27-generic-x86_64-with-glibc2.29
Using PETSc directory: /opt/onscale/petsc-3.22.0.0415
Using PETSc arch: real-double-int32
-----------------------------------------
Using C compiler: mpicc -fPIC -Wall -Wwrite-strings -Wno-unknown-pragmas -Wno-lto-type-mismatch -fstack-protector -fvisibility=hidden -O3
Using Fortran compiler: mpiifort -fPIC -O3
-----------------------------------------
Using include paths: -I/opt/onscale/petsc-3.22.0.0415/include -I/opt/onscale/petsc-3.22.0.0415/real-double-int32/include -I/opt/onscale/petsc-3.22.0.0415/real-double-int32/include/eigen3
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpiifort
Using libraries: -Wl,-rpath,/opt/onscale/petsc-3.22.0.0415/real-double-int32/lib -L/opt/onscale/petsc-3.22.0.0415/real-double-int32/lib -lpetsc -Wl,-rpath,/opt/onscale/petsc-3.22.0.0415/real-double-int32/lib -L/opt/onscale/petsc-3.22.0.0415/real-double-int32/lib -Wl,-rpath,/opt/intel/oneapi/mkl/2022.1.0/lib/intel64 -L/opt/intel/oneapi/mkl/2022.1.0/lib/intel64 -Wl,-rpath,/opt/intel/oneapi/mpi/2021.6.0/lib/release -L/opt/intel/oneapi/mpi/2021.6.0/lib/release -Wl,-rpath,/opt/intel/oneapi/mpi/2021.6.0/lib -L/opt/intel/oneapi/mpi/2021.6.0/lib -Wl,-rpath,/opt/intel/oneapi/mpi/2021.6.0/libfabric/lib -L/opt/intel/oneapi/mpi/2021.6.0/libfabric/lib -Wl,-rpath,/opt/intel/oneapi/compiler/2022.1.0/linux/compiler/lib/intel64_lin -L/opt/intel/oneapi/compiler/2022.1.0/linux/compiler/lib/intel64_lin -Wl,-rpath,/opt/intel/oneapi/compiler/2022.1.0/linux/lib -L/opt/intel/oneapi/compiler/2022.1.0/linux/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/9 -L/usr/lib/gcc/x86_64-linux-gnu/9 -lHYPRE -ldmumps -lmumps_common -lpord -lpthread -lscalapack -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lparmetis -lmetis -lhdf5_hl -lhdf5 -lm -lX11 -lmpifort -lmpi -ldl -lrt -lpthread -lifport -lifcoremt_pic -limf -lsvml -lm -lipgo -lirc -lgcc_s -lirc_s -lstdc++
-----------------------------------------
[0] Maximum memory PetscMalloc()ed 32387548912 maximum size of entire process 8270635008
[0] Memory usage sorted by function
[0] 11 8624 ISCreate()
[0] 6 96 ISCreate_General()
[0] 5 80 ISCreate_Stride()
[0] 5 4583296 ISGeneralSetIndices_General()
[0] 5 4583296 ISGetIndices_Stride()
[0] 1 32 KSPConvergedDefaultCreate()
[0] 1 1520 KSPCreate()
[0] 1 80 KSPCreate_CG()
[0] 2 1355360 MatAXPY_MPIAIJ()
[0] 5 2116480 MatCheckCompressedRow()
[0] 1 560 MatCoarsenCreate()
[0] 23 66608 MatCreate()
[0] 2 1389040 MatCreateGraph_Simple_AIJ()
[0] 6 3745152 MatCreateSeqAIJWithArrays()
[0] 6 8352 MatCreate_MPIAIJ()
[0] 17 28016 MatCreate_SeqAIJ()
[0] 2 85600 MatGetRow_MPIAIJ()
[0] 2 819245280 MatLoad_MPIAIJ_Binary()
[0] 3 55148368 MatMPIAIJGetLocalMat()
[0] 2 6099120 MatMPIAIJSetPreallocationCSR_MPIAIJ()
[0] 15 12554992 MatMarkDiagonal_SeqAIJ()
[0] 4 1872608 MatMatMultSymbolic_SeqAIJ_SeqAIJ_Sorted()
[0] 1 112 MatProductCreate_Private()
[0] 10 320 MatRegisterRootName()
[0] 8 6616352 MatSeqAIJCheckInode()
[0] 48 5319301648 MatSeqAIJSetPreallocation_SeqAIJ()
[0] 4 2389792 MatSetSeqAIJWithArrays_private()
[0] 5 4583296 MatSetUpMultiply_MPIAIJ()
[0] 26 1536 MatSolverTypeRegister()
[0] 12 192 MatStashCreate_Private()
[0] 2 96 MatStashScatterBegin_BTS()
[0] 2 7694605360 MatStashSortCompress_Private()
[0] 6 2390160 MatTransposeMatMultSymbolic_MPIAIJ_MPIAIJ_nonscalable()
[0] 4 128 MatTransposeSetPrecursor()
[0] 2 11491312 MatTranspose_MPIAIJ()
[0] 9 57538176 MatTranspose_SeqAIJ()
[0] 1 752 PCCreate()
[0] 1 32 PCCreateGAMG_AGG()
[0] 2 368 PCCreate_GAMG()
[0] 1 288 PCCreate_MG()
[0] 2 2033040 PCGAMGCoarsen_AGG()
[0] 2 1389040 PCGAMGCreateGraph_AGG()
[0] 1 12198176 PCSetCoordinates_AGG()
[0] 4 43312 PetscBTCreate()
[0] 5 80 PetscCommBuildTwoSidedFReq_Reference()
[0] 22 688 PetscCommBuildTwoSided_Allreduce()
[0] 4 96 PetscCommDuplicate()
[0] 4 1920 PetscContainerCreate()
[0] 2 1194944 PetscFreeSpaceGet()
[0] 61 976 PetscFunctionListCreate_Private()
[0] 4 144 PetscGatherMessageLengths2()
[0] 1 32 PetscGatherNumberOfMessages()
[0] 2 528 PetscIntStackCreate()
[0] 1 2389808 PetscLLCondensedCreate()
[0] 79 6320 PetscLayoutCreate()
[0] 66 1056 PetscLayoutSetUp()
[0] 2 4112 PetscLogActionArrayCreate()
[0] 2 2064 PetscLogClassArrayCreate()
[0] 6 9264 PetscLogClassPerfArrayCreate()
[0] 2 2064 PetscLogEventArrayCreate()
[0] 2 12288 PetscLogEventArrayRecapacity()
[0] 6 101424 PetscLogEventPerfArrayCreate()
[0] 3 270336 PetscLogEventPerfArrayRecapacity()
[0] 1 64 PetscLogHandlerContextCreate_Default()
[0] 1 544 PetscLogHandlerCreate()
[0] 2 9744 PetscLogObjectArrayCreate()
[0] 1 32 PetscLogRegistryCreate()
[0] 2 80 PetscLogStageArrayCreate()
[0] 2 2320 PetscLogStageInfoArrayCreate()
[0] 1 48 PetscLogStateCreate()
[0] 48 10526401536 PetscMatStashSpaceGet()
[0] 9 1440 PetscObjectComposedDataIncrease_()
[0] 4 1152 PetscObjectListAdd()
[0] 10 192 PetscOptionsGetEList()
[0] 1 16 PetscOptionsHelpPrintedCreate()
[0] 2 64 PetscOptionsInsertFilePetsc()
[0] 6 12160240 PetscPostIrecvInt()
[0] 1 32 PetscPushErrorHandler()
[0] 1 32 PetscPushSignalHandler()
[0] 1 544 PetscRandomCreate()
[0] 1 16 PetscRandomCreate_Rander48()
[0] 6 5856 PetscSFCreate()
[0] 12 768 PetscSFCreatePackOpt()
[0] 6 864 PetscSFCreate_Basic()
[0] 6 482848 PetscSFLinkCreate_MPI()
[0] 1 1355360 PetscSFSetGraphLayout()
[0] 24 10522528 PetscSFSetUpRanks()
[0] 24 1219072 PetscSFSetUp_Basic()
[0] 7388 7707249168 PetscSegBufferAlloc_Private()
[0] 46 141387072 PetscSegBufferCreate()
[0] 2 2895988912 PetscSegBufferExtractAlloc()
[0] 1 16 PetscStrNArrayallocpy()
[0] 1700 44304 PetscStrallocpy()
[0] 42 91536 PetscStrreplace()
[0] 6 192 PetscTokenCreate()
[0] 1 16 PetscViewerASCIIOpen()
[0] 4 448967584 PetscViewerBinaryWriteReadAll()
[0] 3 1920 PetscViewerCreate()
[0] 1 96 PetscViewerCreate_ASCII()
[0] 2 192 PetscViewerCreate_Binary()
[0] 5 7760 VecCreate()
[0] 12 18624 VecCreateWithLayout_Private()
[0] 16 13556816 VecCreate_MPI_Private()
[0] 5 9166576 VecCreate_Seq()
[0] 5 320 VecCreate_Seq_Private()
[0] 2 12198944 VecDuplicateVecs_MPI_GEMV()
[0] 10 13749872 VecScatterCreate()
[0] 24 384 VecStashCreate_Private()
[1] Maximum memory PetscMalloc()ed 13062517392 maximum size of entire process 6911524864
[1] Memory usage sorted by function
[1] 14 10976 ISCreate()
[1] 7 112 ISCreate_General()
[1] 6 96 ISCreate_Stride()
[1] 6 1450576 ISGeneralSetIndices_General()
[1] 6 1450576 ISGetIndices_Stride()
[1] 1 32 KSPConvergedDefaultCreate()
[1] 1 1520 KSPCreate()
[1] 1 80 KSPCreate_CG()
[1] 2 1355360 MatAXPY_MPIAIJ()
[1] 1 1034448 MatCheckCompressedRow()
[1] 7 4173072 MatCoarsenApply_MIS_private()
[1] 1 560 MatCoarsenCreate()
[1] 23 66608 MatCreate()
[1] 2 1355728 MatCreateGraph_Simple_AIJ()
[1] 6 2902976 MatCreateSeqAIJWithArrays()
[1] 6 8352 MatCreate_MPIAIJ()
[1] 17 28016 MatCreate_SeqAIJ()
[1] 2 1248 MatGetRow_MPIAIJ()
[1] 2 444901520 MatLoad_MPIAIJ_Binary()
[1] 3 33436096 MatMPIAIJGetLocalMat()
[1] 2 6099120 MatMPIAIJSetPreallocationCSR_MPIAIJ()
[1] 17 13068176 MatMarkDiagonal_SeqAIJ()
[1] 4 1451504 MatMatMultSymbolic_SeqAIJ_SeqAIJ_Sorted()
[1] 1 112 MatProductCreate_Private()
[1] 10 320 MatRegisterRootName()
[1] 9 6872928 MatSeqAIJCheckInode()
[1] 48 5276006416 MatSeqAIJSetPreallocation_SeqAIJ()
[1] 4 1547616 MatSetSeqAIJWithArrays_private()
[1] 6 1450576 MatSetUpMultiply_MPIAIJ()
[1] 26 1536 MatSolverTypeRegister()
[1] 12 192 MatStashCreate_Private()
[1] 4 192 MatStashScatterBegin_BTS()
[1] 2 100354160 MatStashSortCompress_Private()
[1] 6 1547984 MatTransposeMatMultSymbolic_MPIAIJ_MPIAIJ_nonscalable()
[1] 4 128 MatTransposeSetPrecursor()
[1] 2 9939328 MatTranspose_MPIAIJ()
[1] 9 34983712 MatTranspose_SeqAIJ()
[1] 1 752 PCCreate()
[1] 1 32 PCCreateGAMG_AGG()
[1] 2 368 PCCreate_GAMG()
[1] 1 288 PCCreate_MG()
[1] 2 2033040 PCGAMGCoarsen_AGG()
[1] 2 1355728 PCGAMGCreateGraph_AGG()
[1] 1 12198176 PCSetCoordinates_AGG()
[1] 4 43312 PetscBTCreate()
[1] 2 1355424 PetscCDCreate()
[1] 7048 1240432 PetscCDGetNewNode()
[1] 8 128 PetscCommBuildTwoSidedFReq_Reference()
[1] 31 976 PetscCommBuildTwoSided_Allreduce()
[1] 4 96 PetscCommDuplicate()
[1] 4 1920 PetscContainerCreate()
[1] 2 1070064 PetscFreeSpaceGet()
[1] 63 1008 PetscFunctionListCreate_Private()
[1] 4 144 PetscGatherMessageLengths2()
[1] 1 32 PetscGatherNumberOfMessages()
[1] 2 528 PetscIntStackCreate()
[1] 1 2140048 PetscLLCondensedCreate()
[1] 86 6880 PetscLayoutCreate()
[1] 70 1120 PetscLayoutSetUp()
[1] 2 4112 PetscLogActionArrayCreate()
[1] 2 2064 PetscLogClassArrayCreate()
[1] 6 9264 PetscLogClassPerfArrayCreate()
[1] 2 2064 PetscLogEventArrayCreate()
[1] 2 12288 PetscLogEventArrayRecapacity()
[1] 6 101424 PetscLogEventPerfArrayCreate()
[1] 3 270336 PetscLogEventPerfArrayRecapacity()
[1] 1 64 PetscLogHandlerContextCreate_Default()
[1] 1 544 PetscLogHandlerCreate()
[1] 2 9744 PetscLogObjectArrayCreate()
[1] 1 32 PetscLogRegistryCreate()
[1] 2 80 PetscLogStageArrayCreate()
[1] 2 2320 PetscLogStageInfoArrayCreate()
[1] 1 48 PetscLogStateCreate()
[1] 34 122561088 PetscMatStashSpaceGet()
[1] 10 1600 PetscObjectComposedDataIncrease_()
[1] 4 1152 PetscObjectListAdd()
[1] 10 192 PetscOptionsGetEList()
[1] 1 16 PetscOptionsHelpPrintedCreate()
[1] 2 64 PetscOptionsInsertFilePetsc()
[1] 6 1531852672 PetscPostIrecvInt()
[1] 1 32 PetscPushErrorHandler()
[1] 1 32 PetscPushSignalHandler()
[1] 1 544 PetscRandomCreate()
[1] 1 16 PetscRandomCreate_Rander48()
[1] 8 7808 PetscSFCreate()
[1] 6 384 PetscSFCreatePackOpt()
[1] 8 1152 PetscSFCreate_Basic()
[1] 8 1197584 PetscSFLinkCreate_MPI()
[1] 2 1105376 PetscSFSetGraphLayout()
[1] 32 4007296 PetscSFSetUpRanks()
[1] 32 6296048 PetscSFSetUp_Basic()
[1] 301 6237869408 PetscSegBufferAlloc_Private()
[1] 40 145822048 PetscSegBufferCreate()
[1] 2 40845504 PetscSegBufferExtractAlloc()
[1] 1 16 PetscStrNArrayallocpy()
[1] 1707 44256 PetscStrallocpy()
[1] 30 65392 PetscStrreplace()
[1] 4 128 PetscTokenCreate()
[1] 1 16 PetscViewerASCIIOpen()
[1] 3 1920 PetscViewerCreate()
[1] 1 96 PetscViewerCreate_ASCII()
[1] 2 192 PetscViewerCreate_Binary()
[1] 6 9312 VecCreate()
[1] 13 20176 VecCreateWithLayout_Private()
[1] 17 13557088 VecCreate_MPI_Private()
[1] 6 2901136 VecCreate_Seq()
[1] 6 384 VecCreate_Seq_Private()
[1] 2 12198944 VecDuplicateVecs_MPI_GEMV()
[1] 12 4351712 VecScatterCreate()
[1] 26 416 VecStashCreate_Private()