  0 KSP unpreconditioned resid norm 1.065032289254e+02 true resid norm 1.065032289254e+02 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP unpreconditioned resid norm 2.902583840777e+01 true resid norm 2.902583840777e+01 ||r(i)||/||b|| 2.725348207809e-01
  2 KSP unpreconditioned resid norm 1.273394484880e+01 true resid norm 1.273394484880e+01 ||r(i)||/||b|| 1.195639322608e-01
  3 KSP unpreconditioned resid norm 3.055264998277e+00 true resid norm 3.055264998277e+00 ||r(i)||/||b|| 2.868706450597e-02
  4 KSP unpreconditioned resid norm 1.016957407167e+00 true resid norm 1.016957407167e+00 ||r(i)||/||b|| 9.548606342062e-03
  5 KSP unpreconditioned resid norm 3.581995383508e-01 true resid norm 3.581995383508e-01 ||r(i)||/||b|| 3.363273977371e-03
  6 KSP unpreconditioned resid norm 1.288317266099e-01 true resid norm 1.288317266099e-01 ||r(i)||/||b|| 1.209650898943e-03
  7 KSP unpreconditioned resid norm 5.214480730500e-02 true resid norm 5.214480730502e-02 ||r(i)||/||b|| 4.896077596065e-04
  8 KSP unpreconditioned resid norm 2.047871991236e-02 true resid norm 2.047871991238e-02 ||r(i)||/||b|| 1.922826201516e-04
  9 KSP unpreconditioned resid norm 8.528114448127e-03 true resid norm 8.528114448106e-03 ||r(i)||/||b|| 8.007376428070e-05
 10 KSP unpreconditioned resid norm 3.021467555387e-03 true resid norm 3.021467555382e-03 ||r(i)||/||b|| 2.836972724553e-05
 11 KSP unpreconditioned resid norm 9.556797101864e-04 true resid norm 9.556797101815e-04 ||r(i)||/||b|| 8.973246349657e-06
Linear solve converged due to CONVERGED_RTOL iterations 11
KSP Object: 2 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=100, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 2 MPI processes
  type: asm
    Additive Schwarz: total subdomain blocks = 2, amount of overlap = 1
    Additive Schwarz: restriction/interpolation type - RESTRICT
    Local solve is same for all blocks, in the following KSP and PC objects:
    KSP Object:    (sub_)     1 MPI processes
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (sub_)     1 MPI processes
      type: ilu
        ILU: out-of-place factorization
          0 levels of fill
          tolerance for zero pivot 2.22045e-14
          using diagonal shift to prevent zero pivot
          matrix ordering: natural
          factor fill ratio given 1.9, needed 1
            Factored matrix follows:
              Matrix Object:               1 MPI processes
                type: seqaij
                rows=43875, cols=43875
                package used to perform factorization: petsc
                total: nonzeros=36905625, allocated nonzeros=36905625
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 8775 nodes, limit used is 5
      linear system matrix = precond matrix:
      Matrix Object:       1 MPI processes
        type: seqaij
        rows=43875, cols=43875
        total: nonzeros=36905625, allocated nonzeros=36905625
        total number of mallocs used during MatSetValues calls =0
          using I-node routines: found 8775 nodes, limit used is 5
  linear system matrix = precond matrix:
  Matrix Object:   2 MPI processes
    type: mpiaij
    rows=64800, cols=64800
    total: nonzeros=57736800, allocated nonzeros=57736800
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 6480 nodes, limit used is 5
PetscSolve converged by 2 its=11 error = 9.556797e-04
Cpu of petsc solve=8.941113948822021e+00
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/users/stoneszone/hpMusic_opt on a linux-gnu-c-opt named n361 with 2 processors, by stoneszone Thu Jun 25 15:11:51 2015
Using Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012

                         Max       Max/Min        Avg      Total
Time (sec):           2.614e+01      1.00004   2.613e+01
Objects:              7.000e+01      1.00000   7.000e+01
Flops:                1.562e+10      1.01715   1.549e+10  3.099e+10
Flops/sec:            5.978e+08      1.01711   5.928e+08  1.186e+09
MPI Messages:         6.100e+01      1.00000   6.100e+01  1.220e+02
MPI Message Lengths:  1.236e+08      1.00000   2.027e+06  2.473e+08
MPI Reductions:       1.010e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 2.6135e+01 100.0%  3.0985e+10 100.0%  1.220e+02 100.0%  2.027e+06      100.0%  1.000e+02  99.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult               24 1.0 7.7511e-01 1.0 1.38e+09 1.0 4.8e+01 8.9e+04 0.0e+00  3  9 39  2  0   3  9 39  2  0  3573
MatSolve              23 1.0 9.9079e-01 1.0 1.70e+09 1.0 0.0e+00 0.0e+00 0.0e+00  4 11  0  0  0   4 11  0  0  0  3414
MatLUFactorNum         1 1.0 5.4019e+00 1.0 1.25e+10 1.0 0.0e+00 0.0e+00 0.0e+00 21 80  0  0  0  21 80  0  0  0  4591
MatILUFactorSym        1 1.0 1.5135e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  1  0  0  0  1   1  0  0  0  1     0
MatAssemblyBegin       3 1.0 7.8905e-02704.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  4   0  0  0  0  4     0
MatAssemblyEnd         3 1.0 1.1572e-01 1.0 0.00e+00 0.0 4.0e+00 2.2e+04 8.0e+00  0  0  3  0  8   0  0  3  0  8     0
MatGetRowIJ            1 1.0 2.1458e-06 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 1.5956e+00 1.1 0.00e+00 0.0 1.0e+01 2.4e+07 7.0e+00  6  0  8 96  7   6  0  8 96  7     0
MatGetOrdering         1 1.0 7.7486e-04 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  4   0  0  0  0  4     0
MatIncreaseOvrlp       1 1.0 9.1987e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0  0  0  0  2   0  0  0  0  2     0
MatView                3 3.0 8.8930e-05 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  1   0  0  0  0  1     0
VecMDot               11 1.0 1.9135e-03 1.1 4.28e+06 1.0 0.0e+00 0.0e+00 1.1e+01  0  0  0  0 11   0  0  0  0 11  4470
VecNorm               26 1.0 5.7420e-0240.7 1.68e+06 1.0 0.0e+00 0.0e+00 2.6e+01  0  0  0  0 26   0  0  0  0 26    59
VecScale              12 1.0 2.4080e-04 1.0 3.89e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3229
VecCopy               26 1.0 9.4151e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                62 1.0 1.9221e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               13 1.0 3.9601e-04 1.0 8.42e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  4254
VecAYPX               12 1.0 4.9305e-04 1.0 3.89e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1577
VecMAXPY              23 1.0 3.1154e-03 1.0 9.27e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  5949
VecScatterBegin       70 1.0 3.2520e-03 1.0 0.00e+00 0.0 9.4e+01 8.9e+04 0.0e+00  0  0 77  3  0   0  0 77  3  0     0
VecScatterEnd         70 1.0 1.7963e-02 3.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          12 1.0 5.6956e-0274.6 1.17e+06 1.0 0.0e+00 0.0e+00 1.2e+01  0  0  0  0 12   0  0  0  0 12    41
KSPGMRESOrthog        11 1.0 3.3350e-03 1.1 8.55e+06 1.0 0.0e+00 0.0e+00 1.1e+01  0  0  0  0 11   0  0  0  0 11  5130
KSPSetUp               2 1.0 5.7626e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 8.8991e+00 1.0 1.56e+10 1.0 1.1e+02 2.3e+06 6.1e+01 34100 87100 60  34100 87100 61  3469
PCSetUp                2 1.0 7.1575e+00 1.0 1.25e+10 1.0 1.4e+01 1.7e+07 2.5e+01 27 80 11 96 25  27 80 11 96 25  3465
PCSetUpOnBlocks        1 1.0 5.5537e+00 1.0 1.25e+10 1.0 0.0e+00 0.0e+00 5.0e+00 21 80  0  0  5  21 80  0  0  5  4465
PCApply               23 1.0 9.9717e-01 1.0 1.70e+09 1.0 4.6e+01 8.9e+04 0.0e+00  4 11 38  2  0   4 11 38  2  0  3392
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage

              Matrix     5              5   1234655228     0
              Vector    47             47     12009520     0
      Vector Scatter     2              2         2072     0
           Index Set    10             10       614916     0
       Krylov Solver     2              2        19600     0
      Preconditioner     2              2         1824     0
              Viewer     2              1          712     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 5.72205e-07
Average time for zero size MPI_Send(): 4.05312e-06
#PETSc Option Table entries:
-ksp_atol 1e-50
-ksp_converged_reason
-ksp_gmres_restart 30
-ksp_lgmres_augment 10
-ksp_max_it 100
-ksp_monitor_true_residual
-ksp_pc_side right
-ksp_rtol 1e-5
-ksp_type gmres
-ksp_view
-log_summary
-pc_type asm
-sub_pc_factor_fill 1.9
-sub_pc_factor_levels 0
-sub_pc_type ilu
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Fri Sep 21 15:34:01 2012
Configure options: --download-f2cblaslapack=1 --download-mpicc=1 --download-mpich=1 --with-debugging=0 --with-cc=gcc --with-cxx=g++ --with-fc=0 --with-x=0
-----------------------------------------
Libraries compiled on Fri Sep 21 15:34:01 2012 on 3165CLinux1
Machine characteristics: Linux-3.2.0-3-amd64-x86_64-with-debian-wheezy-sid
Using PETSc directory: /home/czhou/usr/petsc-3.3-p3-opt
Using PETSc arch: linux-gnu-c-opt
-----------------------------------------

Using C compiler: /home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/bin/mpicc  -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O  ${COPTFLAGS} ${CFLAGS}
-----------------------------------------

Using include paths: -I/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/include -I/home/czhou/usr/petsc-3.3-p3-opt/include -I/home/czhou/usr/petsc-3.3-p3-opt/include -I/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/include
-----------------------------------------

Using C linker: /home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/bin/mpicc
Using libraries: -Wl,-rpath,/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/lib -L/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/lib -lpetsc -lpthread -Wl,-rpath,/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/lib -L/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/lib -lf2clapack -lf2cblas -lm -lm -ldl
-----------------------------------------
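For reference, the option table above corresponds to a launch roughly like the following sketch. The options are copied verbatim from the option table; the mpiexec spelling and argument order are assumptions inferred from the summary header (2 processors, executable /users/stoneszone/hpMusic_opt), not the exact command that was run:

```shell
# Reconstructed launch (a sketch, not the recorded command line).
# Process count and executable path are taken from the performance-summary header.
mpiexec -n 2 /users/stoneszone/hpMusic_opt \
    -ksp_type gmres -ksp_pc_side right \
    -ksp_rtol 1e-5 -ksp_atol 1e-50 -ksp_max_it 100 \
    -ksp_gmres_restart 30 -ksp_lgmres_augment 10 \
    -pc_type asm \
    -sub_pc_type ilu -sub_pc_factor_levels 0 -sub_pc_factor_fill 1.9 \
    -ksp_monitor_true_residual -ksp_converged_reason -ksp_view -log_summary
```

Note that -ksp_lgmres_augment only takes effect with -ksp_type lgmres; with plain gmres, as used here, it is ignored.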