Passing options to PETSc: -ksp_monitor -ksp_view -log_summary
Solving linear system of size 112724 x 112724 (PETSc Krylov solver).
  0 KSP Residual norm 5.017503570069e+02
  1 KSP Residual norm 3.737157859305e+01
  2 KSP Residual norm 2.979324808435e+01
  3 KSP Residual norm 7.076390589707e+00
  4 KSP Residual norm 6.651019208389e+00
  5 KSP Residual norm 3.040640197522e+00
  6 KSP Residual norm 2.936453097103e+00
  7 KSP Residual norm 1.443287107783e+00
  8 KSP Residual norm 1.417561465252e+00
  9 KSP Residual norm 7.533915944325e-01
 10 KSP Residual norm 6.623728338048e-01
 11 KSP Residual norm 4.561495922812e-01
 12 KSP Residual norm 3.555846865824e-01
 13 KSP Residual norm 2.964732303563e-01
 14 KSP Residual norm 1.930070052823e-01
 15 KSP Residual norm 1.792585450624e-01
 16 KSP Residual norm 1.143903220357e-01
 17 KSP Residual norm 1.048967460021e-01
 18 KSP Residual norm 6.301418914100e-02
 19 KSP Residual norm 6.250520956617e-02
 20 KSP Residual norm 4.106922874156e-02
 21 KSP Residual norm 4.104646869635e-02
 22 KSP Residual norm 2.664758594872e-02
 23 KSP Residual norm 2.597907401397e-02
 24 KSP Residual norm 1.655514236429e-02
 25 KSP Residual norm 1.579419759534e-02
 26 KSP Residual norm 1.043065949981e-02
 27 KSP Residual norm 9.768856836029e-03
 28 KSP Residual norm 7.006258580611e-03
 29 KSP Residual norm 6.103060333635e-03
 30 KSP Residual norm 4.620147218783e-03
 31 KSP Residual norm 3.513987400913e-03
 32 KSP Residual norm 2.772766351999e-03
 33 KSP Residual norm 1.891637515779e-03
 34 KSP Residual norm 1.565364643042e-03
 35 KSP Residual norm 9.159139436359e-04
 36 KSP Residual norm 8.825693602071e-04
 37 KSP Residual norm 4.792409993813e-04
KSP Object: 1 MPI processes
  type: minres
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-06, absolute=1e-15, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
  type: hypre
    HYPRE BoomerAMG preconditioning
    HYPRE BoomerAMG: Cycle type V
    HYPRE BoomerAMG: Maximum number of levels 25
    HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
    HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
    HYPRE BoomerAMG: Threshold for strong coupling 0.25
    HYPRE BoomerAMG: Interpolation truncation factor 0
    HYPRE BoomerAMG: Interpolation: max elements per row 0
    HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
    HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
    HYPRE BoomerAMG: Maximum row sums 0.9
    HYPRE BoomerAMG: Sweeps down          1
    HYPRE BoomerAMG: Sweeps up            1
    HYPRE BoomerAMG: Sweeps on coarse     1
    HYPRE BoomerAMG: Relax down           symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax up             symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax on coarse      Gaussian-elimination
    HYPRE BoomerAMG: Relax weight  (all)       1
    HYPRE BoomerAMG: Outer relax weight (all)  1
    HYPRE BoomerAMG: Using CF-relaxation
    HYPRE BoomerAMG: Measure type         local
    HYPRE BoomerAMG: Coarsen type         Falgout
    HYPRE BoomerAMG: Interpolation type   classical
  linear system matrix followed by preconditioner matrix:
  Matrix Object:   1 MPI processes
    type: seqaij
    rows=112724, cols=112724
    total: nonzeros=10553536, allocated nonzeros=10553536
    total number of mallocs used during MatSetValues calls =0
      using I-node routines: found 77769 nodes, limit used is 5
  Matrix Object:   1 MPI processes
    type: seqaij
    rows=112724, cols=112724
    total: nonzeros=10553536, allocated nonzeros=10553536
    total number of mallocs used during MatSetValues calls =0
      using I-node routines: found 77769 nodes, limit used is 5
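
For reference, below is a minimal sketch of how a solver matching the -ksp_view output above could be configured directly through the PETSc C API. It is not the code that produced this log (the run appears to come from a higher-level package driving PETSc); the function name solve_with_minres_boomeramg is illustrative, the Mat A and Vecs b, x are assumed to be assembled elsewhere, and KSPSetOperators is shown with its PETSc >= 3.5 signature even though the log was produced with 3.4.2.

/* Sketch only: MINRES preconditioned with Hypre BoomerAMG, using the
 * tolerances reported by -ksp_view above (rtol 1e-6, atol 1e-15,
 * dtol 1e4, at most 10000 iterations). A, b, x assumed assembled. */
#include <petscksp.h>

PetscErrorCode solve_with_minres_boomeramg(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);   /* 3.4.x also takes a MatStructure flag */
  ierr = KSPSetType(ksp, KSPMINRES);CHKERRQ(ierr);
  ierr = KSPSetTolerances(ksp, 1e-6, 1e-15, 1e4, 10000);CHKERRQ(ierr);

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCHYPRE);CHKERRQ(ierr);
  ierr = PCHYPRESetType(pc, "boomeramg");CHKERRQ(ierr);

  /* Pick up -ksp_monitor, -ksp_view, -log_summary etc. from the command line */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);

  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The same setup can usually be selected from the command line alone, e.g. -ksp_type minres -pc_type hypre -pc_hypre_type boomeramg -ksp_rtol 1e-6 -ksp_atol 1e-15 -ksp_max_it 10000, together with the monitoring options already shown.
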
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

Unknown Name on a linux-gnu-c-opt named pacotaco-xps with 1 processor, by justin Tue Jun  2 14:17:00 2015
Using Petsc Release Version 3.4.2, Jul, 02, 2013

                         Max       Max/Min        Avg      Total
Time (sec):           1.743e+01      1.00000   1.743e+01
Objects:              5.000e+01      1.00000   5.000e+01
Flops:                8.646e+08      1.00000   8.646e+08  8.646e+08
Flops/sec:            4.960e+07      1.00000   4.960e+07  4.960e+07
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       1.360e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 1.7433e+01 100.0%  8.6460e+08 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  1.350e+02  99.3%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Viewer     1              0            0     0
           Index Set     6              6         4584     0
   IS L to G Mapping    10             10      3162152     0
              Vector    26             26     12664544     0
      Vector Scatter     3              3         1932     0
              Matrix     2              2    256897368     0
      Preconditioner     1              1         1072     0
       Krylov Solver     1              1         1160     0
========================================================================================================================
Average time to get PetscTime(): 0
#PETSc Option Table entries:
-ksp_monitor
-ksp_view
-log_summary
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Tue Dec 17 23:10:14 2013
Configure options: --with-shared-libraries --with-debugging=0 --useThreads 0 --with-clanguage=C++ --with-c-support --with-fortran-interfaces=1 --with-mpi-dir=/usr/lib/openmpi --with-mpi-shared=1 --with-blas-lib=-lblas --with-lapack-lib=-llapack --with-blacs=1 --with-blacs-include=/usr/include --with-blacs-lib="[/usr/lib/libblacsCinit-openmpi.so,/usr/lib/libblacs-openmpi.so]" --with-scalapack=1 --with-scalapack-include=/usr/include --with-scalapack-lib=/usr/lib/libscalapack-openmpi.so --with-mumps=1 --with-mumps-include=/usr/include --with-mumps-lib="[/usr/lib/libdmumps.so,/usr/lib/libzmumps.so,/usr/lib/libsmumps.so,/usr/lib/libcmumps.so,/usr/lib/libmumps_common.so,/usr/lib/libpord.so]" --with-umfpack=1 --with-umfpack-include=/usr/include/suitesparse --with-umfpack-lib="[/usr/lib/libumfpack.so,/usr/lib/libamd.so]" --with-cholmod=1 --with-cholmod-include=/usr/include/suitesparse --with-cholmod-lib=/usr/lib/libcholmod.so --with-spooles=1 --with-spooles-include=/usr/include/spooles --with-spooles-lib=/usr/lib/libspooles.so --with-hypre=1 --with-hypre-dir=/usr --with-ptscotch=1 --with-ptscotch-include=/usr/include/scotch --with-ptscotch-lib="[/usr/lib/libptesmumps.so,/usr/lib/libptscotch.so,/usr/lib/libptscotcherr.so]" --with-fftw=1 --with-fftw-include=/usr/include --with-fftw-lib="[/usr/lib/x86_64-linux-gnu/libfftw3.so,/usr/lib/x86_64-linux-gnu/libfftw3_mpi.so]" --CXX_LINKER_FLAGS=-Wl,--no-as-needed
-----------------------------------------
Libraries compiled on Tue Dec 17 23:10:14 2013 on lamiak
Machine characteristics: Linux-3.2.0-37-generic-x86_64-with-Ubuntu-14.04-trusty
Using PETSc directory: /build/buildd/petsc-3.4.2.dfsg1
Using PETSc arch: linux-gnu-c-opt
-----------------------------------------
Using C compiler: mpicxx -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O -fPIC ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -fPIC -Wall -Wno-unused-variable -Wno-unused-dummy-argument -O ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/build/buildd/petsc-3.4.2.dfsg1/linux-gnu-c-opt/include -I/build/buildd/petsc-3.4.2.dfsg1/include -I/build/buildd/petsc-3.4.2.dfsg1/include -I/build/buildd/petsc-3.4.2.dfsg1/linux-gnu-c-opt/include -I/usr/include -I/usr/include/suitesparse -I/usr/include/scotch -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi
-----------------------------------------
Using C linker: mpicxx
Using Fortran linker: mpif90
Using libraries: -L/build/buildd/petsc-3.4.2.dfsg1/linux-gnu-c-opt/lib -L/build/buildd/petsc-3.4.2.dfsg1/linux-gnu-c-opt/lib -lpetsc -L/usr/lib -ldmumps -lzmumps -lsmumps -lcmumps -lmumps_common -lpord -lscalapack-openmpi -lHYPRE_utilities -lHYPRE_struct_mv -lHYPRE_struct_ls -lHYPRE_sstruct_mv -lHYPRE_sstruct_ls -lHYPRE_IJ_mv -lHYPRE_parcsr_ls -lcholmod -lumfpack -lamd -llapack -lblas -lX11
-lpthread -lptesmumps -lptscotch -lptscotcherr -L/usr/lib/x86_64-linux-gnu -lfftw3 -lfftw3_mpi -lm -L/usr/lib/openmpi/lib -L/usr/lib/gcc/x86_64-linux-gnu/4.8 -L/lib/x86_64-linux-gnu -lmpi_f90 -lmpi_f77 -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -ldl -lmpi -lhwloc -lgcc_s -lpthread -ldl
-----------------------------------------
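
The phase summary above notes that the event table can be split into user-defined stages with PetscLogStagePush() and PetscLogStagePop(); in this run everything is lumped into the single default "Main Stage". As a rough sketch only (the stage names and the assembly/solve split are assumptions, not taken from the run above), registering stages would look like this:

/* Sketch: user-defined logging stages so that -log_summary reports
 * assembly and solve separately instead of one "Main Stage".
 * Stage names and placement are illustrative. */
#include <petscsys.h>

PetscErrorCode profile_in_stages(void)
{
  PetscLogStage  assembly, solve;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscLogStageRegister("Assembly", &assembly);CHKERRQ(ierr);
  ierr = PetscLogStageRegister("Solve", &solve);CHKERRQ(ierr);

  ierr = PetscLogStagePush(assembly);CHKERRQ(ierr);
  /* ... assemble the matrix and right-hand side here ... */
  ierr = PetscLogStagePop();CHKERRQ(ierr);

  ierr = PetscLogStagePush(solve);CHKERRQ(ierr);
  /* ... call KSPSolve() here ... */
  ierr = PetscLogStagePop();CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

With stages registered, the "Summary of Stages" table reports time, flops, and reductions per stage, which makes it easier to see how much of the 1.743e+01 s total wall time is spent in the solve itself.
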