  0 KSP Residual norm 1.065032289254e+02
  1 KSP Residual norm 2.714215388284e+01
  2 KSP Residual norm 8.282151226835e+00
  3 KSP Residual norm 2.796699106076e+00
  4 KSP Residual norm 7.669078312660e-01
  5 KSP Residual norm 2.081350508288e-01
  6 KSP Residual norm 6.992182589517e-02
  7 KSP Residual norm 2.098136986269e-02
  8 KSP Residual norm 5.983310598632e-03
  9 KSP Residual norm 1.688539604566e-03
 10 KSP Residual norm 4.541704854547e-04
KSP Object: 1 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=100, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: asm
    Additive Schwarz: total subdomain blocks = 1, amount of overlap = 1
    Additive Schwarz: restriction/interpolation type - RESTRICT
    Local solve is same for all blocks, in the following KSP and PC objects:
  KSP Object:  (sub_)  1 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
    left preconditioning
    using NONE norm type for convergence test
  PC Object:  (sub_)  1 MPI processes
    type: ilu
      ILU: out-of-place factorization
      0 levels of fill
      tolerance for zero pivot 2.22045e-14
      using diagonal shift to prevent zero pivot
      matrix ordering: natural
      factor fill ratio given 1.9, needed 1
        Factored matrix follows:
          Matrix Object:   1 MPI processes
            type: seqaij
            rows=64800, cols=64800
            package used to perform factorization: petsc
            total: nonzeros=57736800, allocated nonzeros=57736800
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 12960 nodes, limit used is 5
    linear system matrix = precond matrix:
    Matrix Object:   1 MPI processes
      type: seqaij
      rows=64800, cols=64800
      total: nonzeros=57736800, allocated nonzeros=57736800
      total number of mallocs used during MatSetValues calls =0
        using I-node routines: found 12960 nodes, limit used is 5
  linear system matrix = precond matrix:
  Matrix Object:   1 MPI processes
    type: seqaij
    rows=64800, cols=64800
    total: nonzeros=57736800, allocated nonzeros=57736800
    total number of mallocs used during MatSetValues calls =0
      using I-node routines: found 12960 nodes, limit used is 5
PetscSolve converged by 2 its=10 error = 4.541705e-04
Cpu of petsc solve=1.063185191154480e+01
solver[0] iter=1 t=1.0000e+00 dt=1.0000e+00 cpu=2.6416e+01 res=[1.180880e-01 9.404501e-02 5.191953e-02 5.199196e-02 3.005340e-01 ]
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/users/stoneszone/hpMusic_opt on a linux-gnu-c-opt named n361 with 1 processor, by stoneszone Thu Jun 25 05:32:05 2015
Using Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012

                         Max       Max/Min        Avg      Total
Time (sec):           4.248e+01      1.00000   4.248e+01
Objects:              3.800e+01      1.00000   3.800e+01
Flops:                2.193e+10      1.00000   2.193e+10  2.193e+10
Flops/sec:            5.163e+08      1.00000   5.163e+08  5.163e+08
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       4.000e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 4.2476e+01 100.0%  2.1929e+10 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  3.900e+01  97.5%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult               11 1.0 6.9140e-01 1.0 1.27e+09 1.0 0.0e+00 0.0e+00 0.0e+00  2  6  0  0  0   2  6  0  0  0  1836
MatSolve              11 1.0 7.3690e-01 1.0 1.27e+09 1.0 0.0e+00 0.0e+00 0.0e+00  2  6  0  0  0   2  6  0  0  0  1723
MatLUFactorNum         1 1.0 8.3215e+00 1.0 1.94e+10 1.0 0.0e+00 0.0e+00 0.0e+00 20 88  0  0  0  20 88  0  0  0  2328
MatILUFactorSym        1 1.0 2.5170e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  1  0  0  0  2   1  0  0  0  3     0
MatAssemblyBegin       3 1.0 1.0252e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         3 1.0 1.0735e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            1 1.0 9.5367e-07 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       1 1.0 4.3768e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  1  0  0  0 10   1  0  0  0 10     0
MatGetOrdering         1 1.0 1.0769e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0 10   0  0  0  0 10     0
MatIncreaseOvrlp       1 1.0 1.6262e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  2   0  0  0  0  3     0
MatView                3 1.0 7.8917e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecMDot               10 1.0 2.4071e-03 1.0 7.13e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2961
VecNorm               12 1.0 7.3576e-04 1.0 1.56e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2114
VecScale              11 1.0 4.4227e-04 1.0 7.13e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1612
VecCopy                3 1.0 2.0671e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                45 1.0 5.9671e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                2 1.0 1.5497e-04 1.0 2.59e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1673
VecMAXPY              11 1.0 2.8293e-03 1.0 8.42e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2977
VecScatterBegin       22 1.0 2.2347e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          11 1.0 1.1444e-03 1.0 2.14e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1869
KSPGMRESOrthog        10 1.0 4.8258e-03 1.0 1.43e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2954
KSPSetUp               2 1.0 1.1759e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00  0  0  0  0 12   0  0  0  0 13     0
KSPSolve               1 1.0 1.0554e+01 1.0 2.18e+10 1.0 0.0e+00 0.0e+00 3.2e+01 25 99  0  0 80  25 99  0  0 82  2067
PCSetUp                2 1.0 9.1756e+00 1.0 1.94e+10 1.0 0.0e+00 0.0e+00 1.7e+01 22 88  0  0 42  22 88  0  0 44  2111
PCSetUpOnBlocks        1 1.0 8.5743e+00 1.0 1.94e+10 1.0 0.0e+00 0.0e+00 5.0e+00 20 88  0  0 12  20 88  0  0 13  2259
PCApply               11 1.0 7.4065e-01 1.0 1.27e+09 1.0 0.0e+00 0.0e+00 0.0e+00  2  6  0  0  0   2  6  0  0  0  1714
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix     3              0            0     0
              Vector    21              2      1039664     0
      Vector Scatter     1              0            0     0
       Krylov Solver     2              0            0     0
      Preconditioner     2              0            0     0
              Viewer     1              0            0     0
           Index Set     8              4        54800     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
#PETSc Option Table entries:
-ksp_atol 1e-50
-ksp_gmres_restart 30
-ksp_lgmres_augment 10
-ksp_max_it 100
-ksp_monitor
-ksp_pc_side right
-ksp_rtol 1e-5
-ksp_type gmres
-ksp_view
-log_summary
-pc_type asm
-sub_pc_factor_fill 1.9
-sub_pc_factor_levels 0
-sub_pc_type ilu
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Fri Sep 21 15:34:01 2012
Configure options: --download-f2cblaslapack=1 --download-mpicc=1 --download-mpich=1 --with-debugging=0 --with-cc=gcc --with-cxx=g++ --with-fc=0 --with-x=0
-----------------------------------------
Libraries compiled on Fri Sep 21 15:34:01 2012 on 3165CLinux1
Machine characteristics: Linux-3.2.0-3-amd64-x86_64-with-debian-wheezy-sid
Using PETSc directory: /home/czhou/usr/petsc-3.3-p3-opt
Using PETSc arch: linux-gnu-c-opt
-----------------------------------------
Using C compiler: /home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS}
-----------------------------------------
Using include paths: -I/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/include -I/home/czhou/usr/petsc-3.3-p3-opt/include -I/home/czhou/usr/petsc-3.3-p3-opt/include -I/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/include
-----------------------------------------
Using C linker: /home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/bin/mpicc
Using libraries: -Wl,-rpath,/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/lib -L/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/lib -lpetsc -lpthread -Wl,-rpath,/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/lib -L/home/czhou/usr/petsc-3.3-p3-opt/linux-gnu-c-opt/lib -lf2clapack -lf2cblas -lm -lm -ldl
-----------------------------------------
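As a quick sanity check on the figures reported above, the monitor history and a few event-table entries can be replayed with plain arithmetic. This is a sketch: every number below is copied verbatim from the log, and only the derived quantities (relative residual, per-iteration reduction factor, flop rates) are computed here.

```python
# Sanity checks on values reported in the PETSc log above.
# All inputs are copied verbatim from the log; only the arithmetic is new.

# KSP monitor history (-ksp_monitor), iterations 0..10
residuals = [
    1.065032289254e+02, 2.714215388284e+01, 8.282151226835e+00,
    2.796699106076e+00, 7.669078312660e-01, 2.081350508288e-01,
    6.992182589517e-02, 2.098136986269e-02, 5.983310598632e-03,
    1.688539604566e-03, 4.541704854547e-04,
]

# GMRES stopped after 10 iterations because the relative residual fell
# below -ksp_rtol 1e-5 (converged reason 2, i.e. KSP_CONVERGED_RTOL).
rel_residual = residuals[-1] / residuals[0]
print(f"relative residual: {rel_residual:.3e}")  # ~4.26e-06 < 1e-5

# Geometric-mean reduction per iteration: about 0.29, i.e. the residual
# shrinks by roughly 3.4x per GMRES iteration.
factor = rel_residual ** (1.0 / (len(residuals) - 1))
print(f"mean reduction factor: {factor:.3f}")

# The Mflop/s column is simply flops/time: e.g. MatMult performed
# 1.27e+09 flops in 6.9140e-01 s, ~1836 Mflop/s as reported.
matmult_mflops = 1.27e+09 / 6.9140e-01 / 1e6
print(f"MatMult rate: {matmult_mflops:.0f} Mflop/s")

# ILU(0) (-sub_pc_factor_levels 0) preserves the sparsity pattern: the
# factored matrix has exactly the 57736800 nonzeros of the original, so
# the needed fill ratio is 1 and the -sub_pc_factor_fill 1.9 allowance
# went unused.
fill_needed = 57736800 / 57736800
print(f"fill ratio needed: {fill_needed:.1f}")
```

Note also that MatLUFactorNum alone accounts for 88% of the flops and ~8.3 s of the 10.55 s KSPSolve, so on this run the cost is dominated by the one-time ILU factorization rather than the GMRES iterations.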