nproc: 1
Number of buses: 1081
Number of branches: 1689
Number of swing buses: 1
Number of PQ buses: 793
Number of PV buses: 287
Number of generators: 288
Number of switches: 4
Initialization time: 0.00420284
Alloc main data time: 0.00474405
Read input data time: 0.0186689
Ext2int_gen time: 0.000257969
SUPERLU_DIST LU: 0-the LU numfactorization 1-the LU numfactorization 1.2.------solvingAXB for prefy11 time: 0.221233
SUPERLU_DIST LU: 0-the LU numfactorization 1-the LU numfactorization 1.2.------solvingAXB for fy11 time: 0.235818
SUPERLU_DIST LU: 0-the LU numfactorization 1-the LU numfactorization 1.2.------solvingAXB for posfy11 time: 0.235606
Build admittance matrix time: 0.752876
Scattering prefy11 time: 0.220714
timestepping solver context time: 0.000854015
Set initial conditions time: 0.000779867
TSSolve time: 7.72182
20 steps, ftime 0.100000
Solve nonlinear system time: 7.72193
Total prefy11 time: 7.94576
Scattering fy11 time: 0.225642
9 steps, ftime 0.050000
Total fy11 time: 3.96512
Scattering posfy11 time: 0.227567
TSSolve posfy11 time: 300.278
571 steps, ftime 2.855000
Total posfy11 time: 300.511
Total simu time: 312.455
Run simulation time: 312.455
Total time: 313.238

************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document             ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./dynSim on a arch-opt named olympus.local with 1 processor, by lixi729 Mon Jun 29 09:52:32 2015
Using Petsc Release Version 3.5.3, unknown

                         Max       Max/Min        Avg      Total
Time (sec):           3.132e+02      1.00000   3.132e+02
Objects:              3.944e+03      1.00000   3.944e+03
Flops:                2.049e+10      1.00000   2.049e+10  2.049e+10
Flops/sec:            6.540e+07      1.00000   6.540e+07  6.540e+07
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:          Main Stage: 3.9622e+01  12.6%  2.0487e+10 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
 1:           IFunction: 1.4287e+02  45.6%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
 2:           IJacobian: 1.2995e+02  41.5%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%
 3:  IJacobian assemble: 7.8466e-01   0.3%  0.0000e+00   0.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
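The IFunction, IJacobian, and IJacobian assemble stages in the summary above are user-defined logging stages of exactly the kind the PetscLogStagePush()/PetscLogStagePop() note describes. As a minimal sketch of how such stages are typically registered, assuming standard PETSc usage (the actual dynSim source is not shown here, and the identifier names below are illustrative):

    /* Sketch only: stage bookkeeping per the PETSc 'Profiling' chapter.
       Error checking omitted; names are illustrative, not from dynSim. */
    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscLogStage ifunc_stage, ijac_stage, ijac_asm_stage;

      PetscInitialize(&argc, &argv, NULL, NULL);   /* run with -log_summary */

      /* Stage 0 ("Main Stage") exists by default; extra stages are registered once. */
      PetscLogStageRegister("IFunction", &ifunc_stage);
      PetscLogStageRegister("IJacobian", &ijac_stage);
      PetscLogStageRegister("IJacobian assemble", &ijac_asm_stage);

      /* Everything logged between Push and Pop is charged to that stage,
         which is how the per-stage rows in the summary above are produced. */
      PetscLogStagePush(ifunc_stage);
      /* ... evaluate the DAE residual here ... */
      PetscLogStagePop();

      PetscFinalize();   /* with -log_summary, prints the report at exit */
      return 0;
    }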
##########################################################
#                                                        #
#                       WARNING!!!                       #
#                                                        #
#   The code for various complex numbers numerical       #
#   kernels uses C++, which generally is not well        #
#   optimized.  For performance that is about 4-5 times  #
#   faster, specify --with-fortran-kernels=1             #
#   when running ./configure.py.                         #
#                                                        #
##########################################################

Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecView                1 1.0 1.9431e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecDot              1737 1.0 1.9783e-02 1.0 1.60e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   809
VecMDot             4442 1.0 7.6915e-02 1.0 7.48e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   972
VecNorm            10253 1.0 1.0031e-01 1.0 9.45e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   942
VecScale            6779 1.0 3.2381e-02 1.0 3.12e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   965
VecCopy             7011 1.0 4.5350e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet             12110 1.0 6.2799e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY             4074 1.0 3.5270e-02 1.0 3.75e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1065
VecAXPBYCZ          2337 1.0 4.2562e-02 1.0 3.23e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   759
VecWAXPY            1737 1.0 1.1221e-02 1.0 8.00e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   713
VecMAXPY            6179 1.0 1.1616e-01 1.0 1.16e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   996
VecAssemblyBegin      40 1.0 3.3140e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd        40 1.0 1.8120e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin     7048 1.0 5.9822e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecReduceArith      3474 1.0 3.4505e-02 1.0 3.20e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   927
VecReduceComm       1737 1.0 2.1770e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize        6179 1.0 9.9756e-02 1.0 8.54e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   856
MatMult             6179 1.0 9.7171e+00 1.0 8.29e+09 1.0 0.0e+00 0.0e+00 0.0e+00  3 40  0  0  0  25 40  0  0  0   853
MatSolve            6179 1.0 1.0770e+01 1.0 8.29e+09 1.0 0.0e+00 0.0e+00 0.0e+00  3 40  0  0  0  27 40  0  0  0   769
MatLUFactorSym         3 1.0 2.2101e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum      1743 1.0 1.6980e+01 1.0 3.47e+09 1.0 0.0e+00 0.0e+00 0.0e+00  5 17  0  0  0  43 17  0  0  0   204
MatILUFactorSym        3 1.0 1.2303e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCopy                3 1.0 2.9027e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert            14 1.0 1.2286e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               1 1.0 1.6928e-05 1.0 1.15e+03 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    68
MatAssemblyBegin      63 1.0 2.8348e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd        63 1.0 9.8379e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRow         276433 1.0 4.9249e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatGetRowIJ            6 1.0 1.8907e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         6 1.0 2.5551e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                3 1.0 5.9760e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatTranspose           2 1.0 8.3423e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatMult             9 1.0 2.6370e-02 1.0 2.05e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    78
MatMatSolve            3 1.0 2.0022e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
MatMatMultSym          9 1.0 2.2122e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatMultNum          9 1.0 4.1728e-03 1.0 2.05e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   491
SFSetGraph             2 1.0 1.0967e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFReduceBegin          2 1.0 3.2687e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFReduceEnd            2 1.0 2.0266e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
Warning -- total time of even greater than time of entire stage -- something is wrong with the timer
TSStep               600 1.0 3.1174e+02 1.0 2.05e+10 1.0 0.0e+00 0.0e+00 0.0e+00 100 100 0  0  0 787 100 0  0  0    66
Warning -- total time of even greater than time of entire stage -- something is wrong with the timer
TSFunctionEval      2937 1.0 1.4288e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 46  0  0  0  0  361  0  0  0  0     0
Warning -- total time of even greater than time of entire stage -- something is wrong with the timer
TSJacobianEval      1737 1.0 1.3074e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 42  0  0  0  0  330  0  0  0  0     0
Warning -- total time of even greater than time of entire stage -- something is wrong with the timer
SNESSolve            600 1.0 2.8290e+02 1.0 2.05e+10 1.0 0.0e+00 0.0e+00 0.0e+00 90 100 0  0  0  714 100 0  0  0    72
Warning -- total time of even greater than time of entire stage -- something is wrong with the timer
SNESFunctionEval    2337 1.0 1.1414e+02 1.0 3.23e+07 1.0 0.0e+00 0.0e+00 0.0e+00 36  0  0  0  0  288  0  0  0  0     0
Warning -- total time of even greater than time of entire stage -- something is wrong with the timer
SNESJacobianEval    1737 1.0 1.3075e+02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 42  0  0  0  0  330  0  0  0  0     0
Warning -- total time of even greater than time of entire stage -- something is wrong with the timer
SNESLineSearch      1737 1.0 8.8083e+01 1.0 2.46e+09 1.0 0.0e+00 0.0e+00 0.0e+00 28 12  0  0  0  222 12  0  0  0    28
KSPGMRESOrthog      4442 1.0 1.6581e-01 1.0 1.50e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0   902
KSPSetUp            3474 1.0 5.3911e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve            1737 1.0 3.5144e+01 1.0 1.80e+10 1.0 0.0e+00 0.0e+00 0.0e+00 11 88  0  0  0  89 88  0  0  0   512
PCSetUp             3474 1.0 1.6662e+01 1.0 3.47e+09 1.0 0.0e+00 0.0e+00 0.0e+00  5 17  0  0  0  42 17  0  0  0   208
PCSetUpOnBlocks     1737 1.0 1.6657e+01 1.0 3.47e+09 1.0 0.0e+00 0.0e+00 0.0e+00  5 17  0  0  0  42 17  0  0  0   208
PCApply             6179 1.0 1.0938e+01 1.0 8.29e+09 1.0 0.0e+00 0.0e+00 0.0e+00  3 40  0  0  0  28 40  0  0  0   758

--- Event Stage 1: IFunction

VecScatterBegin     5874 1.0 3.0368e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 2: IJacobian

VecScatterBegin     1737 1.0 6.1793e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 3: IJacobian assemble

VecSet                 1 1.0 9.5367e-07 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyBegin    1737 1.0 4.8026e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   6  0  0  0  0     0
MatAssemblyEnd      1737 1.0 7.2429e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0  92  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------
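The dominant cost in the table above is the user callbacks, not the linear algebra: the IFunction and IJacobian stages account for 45.6% and 41.5% of the runtime (TSFunctionEval 1.4288e+02 s and TSJacobianEval 1.3074e+02 s), versus 3.5144e+01 s in KSPSolve. A minimal sketch of the callback wiring that generates these TS/SNES events, assuming the PETSc 3.5 TS interface and a theta method (consistent with the -ts_theta_endpoint entry in the option table below); FormIFunction, FormIJacobian, and SolveStage are hypothetical names, not dynSim's actual routines:

    #include <petscts.h>

    /* DAE residual F(t,X,Xdot) = 0; each call is logged as TSFunctionEval. */
    static PetscErrorCode FormIFunction(TS ts, PetscReal t, Vec X, Vec Xdot,
                                        Vec F, void *ctx)
    {
      /* ... machine and network equations evaluated into F ... */
      return 0;
    }

    /* Jacobian shift*dF/dXdot + dF/dX; each call is logged as TSJacobianEval.
       PETSc 3.5 signature: Mat arguments, no MatStructure flag. */
    static PetscErrorCode FormIJacobian(TS ts, PetscReal t, Vec X, Vec Xdot,
                                        PetscReal shift, Mat A, Mat B, void *ctx)
    {
      /* ... fill entries, then MatAssemblyBegin/End (stage 3 above) ... */
      return 0;
    }

    /* X: solution vector; J: preallocated Jacobian; user: application context. */
    static PetscErrorCode SolveStage(Vec X, Mat J, void *user)
    {
      TS ts;
      TSCreate(PETSC_COMM_WORLD, &ts);
      TSSetProblemType(ts, TS_NONLINEAR);
      TSSetType(ts, TSTHETA);                        /* endpoint variant chosen
                                                        by -ts_theta_endpoint */
      TSSetIFunction(ts, NULL, FormIFunction, user);
      TSSetIJacobian(ts, J, J, FormIJacobian, user);
      TSSolve(ts, X);                                /* drives the TSStep and
                                                        SNESSolve counts above */
      TSDestroy(&ts);
      return 0;
    }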
Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

                       Vector      1896            129      1257520     0
               Vector Scatter       901            897       577668     0
                       Matrix       133            115     47914600     0
                    Index Set       944            938       785528     0
            IS L to G Mapping         1              0            0     0
  Star Forest Bipartite Graph        16             14        11312     0
             Distributed Mesh         7              6        26640     0
              Discrete System         7              6         4752     0
                      TSAdapt         3              1         1208     0
                           TS         3              1         1256     0
                         DMTS         3              2         1424     0
                         SNES         3              1         1340     0
               SNESLineSearch         3              1          872     0
                       DMSNES         4              3         2016     0
                Krylov Solver         6              2        35912     0
              DMKSP interface         1              0            0     0
               Preconditioner         6              2         1912     0
                       Viewer         2              1          744     0

--- Event Stage 1: IFunction

--- Event Stage 2: IJacobian

--- Event Stage 3: IJacobian assemble

                       Vector         2              1         1568     0
               Vector Scatter         1              0            0     0
                    Index Set         2              2         1568     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
#PETSc Option Table entries:
-i data/d288gen.txt
-log_summary
-ts_theta_endpoint
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 16 sizeof(PetscInt) 4
Configure options: --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-scalar-type=complex --with-clanguage=C++ --with-cxx-dialect=C++11 --download-mpich --download-superlu_dist --download-mumps --download-scalapack --download-parmetis --download-metis --download-elemental --with-debugging=no
-----------------------------------------
Libraries compiled on Fri Jun 26 15:41:43 2015 on olympus.local
Machine characteristics: Linux-2.6.32-131.17.1.el6.x86_64-x86_64-with-redhat-5.7-Tikanga
Using PETSc directory: /people/lixi729/petsc
Using PETSc arch: arch-opt
-----------------------------------------
Using C compiler: /people/lixi729/petsc/arch-opt/bin/mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -fPIC ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /people/lixi729/petsc/arch-opt/bin/mpif90 -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/people/lixi729/petsc/arch-opt/include -I/people/lixi729/petsc/include -I/people/lixi729/petsc/include -I/people/lixi729/petsc/arch-opt/include
-----------------------------------------
Using C linker: /people/lixi729/petsc/arch-opt/bin/mpicxx
Using Fortran linker: /people/lixi729/petsc/arch-opt/bin/mpif90
Using libraries: -Wl,-rpath,/people/lixi729/petsc/arch-opt/lib -L/people/lixi729/petsc/arch-opt/lib -lpetsc -Wl,-rpath,/people/lixi729/petsc/arch-opt/lib -L/people/lixi729/petsc/arch-opt/lib -lelemental -lpmrrr -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist_3.3 -llapack -lblas -lparmetis -lmetis -lX11 -lpthread -lssl -lcrypto -lm -Wl,-rpath,/pic/apps/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/pic/apps/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/pic/apps/gcc/4.8.2/lib/gcc -L/pic/apps/gcc/4.8.2/lib/gcc -Wl,-rpath,/pic/apps/gcc/4.8.2/lib64 -L/pic/apps/gcc/4.8.2/lib64 -Wl,-rpath,/pic/apps/gcc/4.8.2/lib -L/pic/apps/gcc/4.8.2/lib -lmpichf90 -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpichcxx -lstdc++ -Wl,-rpath,/people/lixi729/petsc/arch-opt/lib -L/people/lixi729/petsc/arch-opt/lib -Wl,-rpath,/pic/apps/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -L/pic/apps/gcc/4.8.2/lib/gcc/x86_64-unknown-linux-gnu/4.8.2 -Wl,-rpath,/pic/apps/gcc/4.8.2/lib/gcc -L/pic/apps/gcc/4.8.2/lib/gcc -Wl,-rpath,/pic/apps/gcc/4.8.2/lib64 -L/pic/apps/gcc/4.8.2/lib64 -Wl,-rpath,/pic/apps/gcc/4.8.2/lib -L/pic/apps/gcc/4.8.2/lib -ldl -Wl,-rpath,/people/lixi729/petsc/arch-opt/lib -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl
-----------------------------------------
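The application-level timings at the top of this log (Initialization time, Build admittance matrix time, Total time, and so on) are separate from PETSc's event log. One common way to produce such lines, offered here only as an illustrative sketch rather than a description of dynSim's actual code, is to difference PetscTime() readings around each phase:

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscLogDouble t0, t1;

      PetscInitialize(&argc, &argv, NULL, NULL);
      PetscTime(&t0);
      /* ... phase being measured, e.g. building the admittance matrix ... */
      PetscTime(&t1);
      PetscPrintf(PETSC_COMM_WORLD, "Build admittance matrix time: %g\n",
                  (double)(t1 - t0));
      PetscFinalize();
      return 0;
    }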