[petsc-users] log_view for the master branch

Kong, Fande fande.kong at inl.gov
Wed May 3 13:24:29 CDT 2017


Hi,

I am using the current master branch. The -log_view output gives me the
summary below, and the "WARNING" box is repeated three times. Is this
intentional?

Thanks,

Fande,


************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------



      ##########################################################
      #                                                        #
      #                          WARNING!!!                    #
      #                                                        #
      #   This code was compiled with a debugging option,      #
      #   To get timing results run ./configure                #
      #   using --with-debugging=no, the performance will      #
      #   be generally two or three times faster.              #
      #                                                        #
      ##########################################################
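
(A quick note on the warning itself: the box is PETSc's reminder that this build was configured with --with-debugging=yes, as the configure options further down confirm. If timing is the goal, the usual fix is to reconfigure with --with-debugging=no under a separate PETSC_ARCH, e.g. something like PETSC_ARCH=arch-darwin-c-opt-master, so the debug and optimized builds coexist; that arch name is only an illustration.)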


./ex29 on a arch-darwin-c-debug-master named FN604208 with 1 processor, by kongf Wed May  3 12:28:23 2017
Using Petsc Development GIT revision: v3.7.6-3529-g76c7fe0  GIT Date: 2017-05-03 08:46:23 -0500

                         Max       Max/Min        Avg      Total
Time (sec):           1.350e-02      1.00000   1.350e-02
Objects:              4.100e+01      1.00000   4.100e+01
Flop:                 3.040e+02      1.00000   3.040e+02  3.040e+02
Flop/sec:            2.251e+04      1.00000   2.251e+04  2.251e+04
Memory:               1.576e+05      1.00000              1.576e+05
MPI Messages:         0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Message Lengths:  0.000e+00      0.00000   0.000e+00  0.000e+00
MPI Reductions:       0.000e+00      0.00000
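
(As a consistency check on the numbers above: Flop/sec is simply Flop divided by Time, 3.040e+02 / 1.350e-02 ≈ 2.251e+04, which matches the reported value; with one processor, all Max/Min ratios are 1.)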

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop
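
(Aside on the flop-counting convention above: user code can contribute to these counters too. A minimal sketch, assuming the standard PetscLogEventRegister()/PetscLogFlops() API; the event and class names are hypothetical and error checking is omitted for brevity:

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscClassId  classid;
      PetscLogEvent USER_AXPY;                   /* hypothetical user event */
      PetscInt      i, n = 100;
      PetscScalar   a = 2.0, x[100], y[100];

      PetscInitialize(&argc, &argv, NULL, NULL);
      for (i = 0; i < n; i++) { x[i] = 1.0; y[i] = 0.0; }

      PetscClassIdRegister("UserClass", &classid);
      PetscLogEventRegister("UserAXPY", classid, &USER_AXPY);

      PetscLogEventBegin(USER_AXPY, 0, 0, 0, 0);
      for (i = 0; i < n; i++) y[i] += a * x[i];  /* one multiply + one add per entry */
      PetscLogFlops(2.0 * n);                    /* 2N, matching the real VecAXPY convention */
      PetscLogEventEnd(USER_AXPY, 0, 0, 0, 0);

      PetscFinalize();
      return 0;
    }

Run with -log_view, this would appear as a "UserAXPY" row in the event table below.)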

Summary of Stages:   ----- Time ------  ----- Flop -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 1.3483e-02  99.8%  3.0400e+02 100.0%  0.000e+00   0.0%  0.000e+00        0.0%  0.000e+00   0.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
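
(On the PetscLogStagePush()/PetscLogStagePop() mentioned in the stage note above: stages let -log_view attribute events to phases of the run instead of lumping everything into "Main Stage". A minimal sketch, with a hypothetical stage name and error checking omitted:

    #include <petscsys.h>

    int main(int argc, char **argv)
    {
      PetscLogStage stage;

      PetscInitialize(&argc, &argv, NULL, NULL);
      PetscLogStageRegister("Assembly", &stage);  /* hypothetical stage name */

      PetscLogStagePush(stage);
      /* events logged here are reported under "Assembly" rather than "Main Stage" */
      PetscLogStagePop();

      PetscFinalize();
      return 0;
    }

)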


      ##########################################################
      #                                                        #
      #                          WARNING!!!                    #
      #                                                        #
      #   This code was compiled with a debugging option,      #
      #   To get timing results run ./configure                #
      #   using --with-debugging=no, the performance will      #
      #   be generally two or three times faster.              #
      #                                                        #
      ##########################################################


Event                Count      Time (sec)     Flop                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

KSPGMRESOrthog         1 1.0 1.3617e-04 1.0 3.50e+01 1.0 0.0e+00 0.0e+00 0.0e+00  1 12  0  0  0   1 12  0  0  0     0
KSPSetUp               1 1.0 4.1097e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   3  0  0  0  0     0
KSPSolve               1 1.0 1.4596e-03 1.0 2.85e+02 1.0 0.0e+00 0.0e+00 0.0e+00 11 94  0  0  0  11 94  0  0  0     0
VecMDot                1 1.0 1.7958e-05 1.0 1.70e+01 1.0 0.0e+00 0.0e+00 0.0e+00  0  6  0  0  0   0  6  0  0  0     1
VecNorm                2 1.0 1.9152e-05 1.0 3.40e+01 1.0 0.0e+00 0.0e+00 0.0e+00  0 11  0  0  0   0 11  0  0  0     2
VecScale               1 1.0 4.4771e-05 1.0 9.00e+00 1.0 0.0e+00 0.0e+00 0.0e+00  0  3  0  0  0   0  3  0  0  0     0
VecCopy                1 1.0 1.2218e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                10 1.0 7.3789e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecAXPY                1 1.0 6.3397e-05 1.0 1.80e+01 1.0 0.0e+00 0.0e+00 0.0e+00  0  6  0  0  0   0  6  0  0  0     0
VecMAXPY               2 1.0 4.8989e-05 1.0 3.60e+01 1.0 0.0e+00 0.0e+00 0.0e+00  0 12  0  0  0   0 12  0  0  0     1
VecAssemblyBegin       2 1.0 7.5148e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd         2 1.0 7.5093e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize           2 1.0 9.5865e-05 1.0 4.30e+01 1.0 0.0e+00 0.0e+00 0.0e+00  1 14  0  0  0   1 14  0  0  0     0
MatMult                1 1.0 1.3781e-05 1.0 5.70e+01 1.0 0.0e+00 0.0e+00 0.0e+00  0 19  0  0  0   0 19  0  0  0     4
MatSolve               2 1.0 7.4019e-04 1.0 1.14e+02 1.0 0.0e+00 0.0e+00 0.0e+00  5 38  0  0  0   5 38  0  0  0     0
MatLUFactorNum         1 1.0 2.8001e-05 1.0 1.90e+01 1.0 0.0e+00 0.0e+00 0.0e+00  0  6  0  0  0   0  6  0  0  0     1
MatILUFactorSym        1 1.0 9.1556e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatAssemblyBegin       2 1.0 7.7938e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         2 1.0 4.5131e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            1 1.0 4.0429e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 1.7907e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
PCSetUp                1 1.0 5.8597e-04 1.0 1.90e+01 1.0 0.0e+00 0.0e+00 0.0e+00  4  6  0  0  0   4  6  0  0  0     0
PCApply                2 1.0 7.8497e-04 1.0 1.14e+02 1.0 0.0e+00 0.0e+00 0.0e+00  6 38  0  0  0   6 38  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

       Krylov Solver     1              1        18408     0.
     DMKSP interface     1              1          648     0.
              Vector    12             12        19224     0.
      Vector Scatter     2              2         1312     0.
              Matrix     2              2         7380     0.
    Distributed Mesh     3              3        14960     0.
           Index Set     7              7         5632     0.
   IS L to G Mapping     2              2         1368     0.
Star Forest Bipartite Graph     6              6         4864     0.
     Discrete System     3              3         2596     0.
      Preconditioner     1              1         1000     0.
              Viewer     1              0            0     0.
========================================================================================================================
Average time to get PetscTime(): 4.50294e-08
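
(For reference, the "Average time to get PetscTime()" line above reports the overhead of the timer itself. Timing a region by hand uses the same clock; a minimal sketch, assuming the PetscTime() API from petsctime.h:

    #include <petscsys.h>
    #include <petsctime.h>

    int main(int argc, char **argv)
    {
      PetscLogDouble t0, t1;

      PetscInitialize(&argc, &argv, NULL, NULL);
      PetscTime(&t0);
      /* ... region to time ... */
      PetscTime(&t1);
      PetscPrintf(PETSC_COMM_WORLD, "elapsed: %g s\n", (double)(t1 - t0));
      PetscFinalize();
      return 0;
    }

)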
#PETSc Option Table entries:
-log_view
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-hypre=1 --with-ssl=0 --with-debugging=yes --with-pic=1 --with-shared-libraries=1 --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 --download-fblaslapack=1 --download-metis=1 --download-parmetis=1 --download-superlu_dist=1 --download-scalapack=1 --download-mumps=1 CC=mpicc CXX=mpicxx FC=mpif90 F77=mpif77 F90=mpif90 CFLAGS="-fPIC -fopenmp" CXXFLAGS="-fPIC -fopenmp" FFLAGS="-fPIC -fopenmp" FCFLAGS="-fPIC -fopenmp" F90FLAGS="-fPIC -fopenmp" F77FLAGS="-fPIC -fopenmp" PETSC_ARCH=arch-darwin-c-debug-master
-----------------------------------------
Libraries compiled on Wed May  3 11:04:44 2017 on FN604208
Machine characteristics: Darwin-15.5.0-x86_64-i386-64bit
Using PETSc directory: /Users/kongf/projects/petsc
Using PETSc arch: arch-darwin-c-debug-master
-----------------------------------------

Using C compiler: mpicc -fPIC -fopenmp   -g3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -fPIC -fopenmp  -g   ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------

Using include paths: -I/Users/kongf/projects/petsc/arch-darwin-c-debug-master/include -I/Users/kongf/projects/petsc/include -I/Users/kongf/projects/petsc/include -I/Users/kongf/projects/petsc/arch-darwin-c-debug-master/include -I/opt/X11/include
-----------------------------------------

Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/Users/kongf/projects/petsc/arch-darwin-c-debug-master/lib -L/Users/kongf/projects/petsc/arch-darwin-c-debug-master/lib -lpetsc -Wl,-rpath,/Users/kongf/projects/petsc/arch-darwin-c-debug-master/lib -L/Users/kongf/projects/petsc/arch-darwin-c-debug-master/lib -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -Wl,-rpath,/opt/moose/mpich/mpich-3.2/clang-opt/lib -L/opt/moose/mpich/mpich-3.2/clang-opt/lib -Wl,-rpath,/opt/moose/llvm-3.9.0/lib -L/opt/moose/llvm-3.9.0/lib -Wl,-rpath,/opt/moose/llvm-3.9.0/lib/clang/3.9.0/lib/darwin -L/opt/moose/llvm-3.9.0/lib/clang/3.9.0/lib/darwin -Wl,-rpath,/opt/moose/gcc-6.2.0/lib/gcc/x86_64-apple-darwin15.6.0/6.2.0 -L/opt/moose/gcc-6.2.0/lib/gcc/x86_64-apple-darwin15.6.0/6.2.0 -Wl,-rpath,/opt/moose/gcc-6.2.0/lib -L/opt/moose/gcc-6.2.0/lib -Wl,-rpath,/opt/moose/llvm-3.9.0/bin/../lib/clang/3.9.0/lib/darwin -L/opt/moose/llvm-3.9.0/bin/../lib/clang/3.9.0/lib/darwin -lsuperlu_dist -lHYPRE -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lflapack -lfblas -lparmetis -lmetis -lX11 -lclang_rt.osx -lmpifort -lgfortran -lgomp -lgcc_ext.10.5 -lquadmath -lm -lclang_rt.osx -lmpicxx -lc++ -lclang_rt.osx -ldl -lmpi -lpmpi -lomp -lSystem -lclang_rt.osx -ldl
-----------------------------------------



      ##########################################################
      #                                                        #
      #                          WARNING!!!                    #
      #                                                        #
      #   This code was compiled with a debugging option,      #
      #   To get timing results run ./configure                #
      #   using --with-debugging=no, the performance will      #
      #   be generally two or three times faster.              #
      #                                                        #
      ##########################################################