Initialized at Mar 14 2016 19:20:19 (-0500)
Compiled by GCC version 4.9.3 20150626 (Cray Inc.)
Compiled with -cpp -I /u/sciteam/mrosso/libs/petsc/gnu/4.9/opt/include -I /u/sciteam/mrosso/libs/pfft/gnu/4.9/include -I /opt/cray/netcdf-hdf5parallel/4.3.3.1/GNU/4.9/include -I /opt/acml/5.3.1/gfortran64_fma4/include -I /opt/cray/fftw/3.3.4.6/interlagos/include -I /opt/cray/libsci/13.3.0/GNU/4.9/x86_64/include -I /opt/cray/mpt/7.3.0/gni/mpich-gnu/4.9/include -I /opt/cray/hdf5-parallel/1.8.14/GNU/4.9/include -I /opt/cray/rca/1.0.0-2.0502.60530.1.63.gem/include -I /opt/cray/alps/5.2.4-2.0502.9774.31.12.gem/include -I /opt/cray/xpmem/0.1-2.0502.64982.5.3.gem/include -I /opt/cray/gni-headers/4.0-1.0502.10859.7.8.gem/include -I /opt/cray/pmi/5.0.10-1.0000.11050.179.3.gem/include -I /opt/cray/ugni/6.0-1.0502.10863.8.28.gem/include -I /opt/cray/udreg/2.3.2-1.0502.10518.2.17.gem/include -I /opt/cray/wlm_detect/1.0-1.0502.64649.2.2.gem/include -I /opt/cray/krca/1.0.0-2.0502.63139.4.30.gem/include -I /opt/cray-hss-devel/7.2.0/include -D __CRAYXE -D __CRAY_INTERLAGOS -D __CRAYXT_COMPUTE_LINUX_TARGET -D __TARGET_LINUX__ -D GNU -D NDEBUG -march=bdver1 -auxbase-strip CMakeFiles/base.dir/sys.f90.o -O3 -J ../../include

************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/u/sciteam/mrosso/main-repo/bin/homo_iso_turb.exe on a gnu-opt-32idx with 4096 processors, by mrosso Mon Mar 14 19:32:38 2016
Using Petsc Development GIT revision: pre-tsfc-467-g2061b0d  GIT Date: 2016-03-02 13:39:03 -0600

                         Max       Max/Min        Avg      Total
Time (sec):           7.394e+02      1.00000   7.394e+02
Objects:              7.334e+04      1.00000   7.334e+04
Flops:                5.096e+10      1.00000   5.096e+10  2.087e+14
Flops/sec:            6.892e+07      1.00000   6.892e+07  2.823e+11
Memory:               4.855e+08      1.00049              1.988e+12
MPI Messages:         1.319e+06      1.00311   1.315e+06  5.385e+09
MPI Message Lengths:  2.841e+09      1.00001   2.161e+03  1.164e+13
MPI Reductions:       6.900e+05      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flops
                          and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 1.2686e+02  17.2%  4.2162e+13  20.2%  2.253e+08   4.2%  3.414e+02       15.8%  9.660e+04  14.0%
 1:        MG Apply: 6.1247e+02  82.8%  1.6657e+14  79.8%  5.160e+09  95.8%  1.819e+03       84.2%  5.934e+05  86.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase          %F - percent flops in this phase
      %M - percent messages in this phase      %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------

      ##########################################################
      #                                                        #
      #                       WARNING!!!                       #
      #                                                        #
      #  This code was compiled with a debugging option.       #
      #  To get timing results run ./configure                 #
      #  using --with-debugging=no; the performance will       #
      #  be generally two or three times faster.               #
      #                                                        #
      ##########################################################

Event                  Count      Time (sec)     Flops                              --- Global ---   --- Stage ---   Total
                     Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R   %T %F %M %L %R  Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSidedF      2050 1.0 1.3880e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0       0
VecMDot             8104 1.0 1.6651e+01 1.3 2.39e+09 1.0 0.0e+00 0.0e+00 1.6e+04  2  5  0  0  2  11 23  0  0 17  587113
VecNorm            10153 1.0 3.1413e+00 1.1 6.65e+08 1.0 0.0e+00 0.0e+00 2.0e+04  0  1  0  0  3   2  6  0  0 21  867616
VecScale            9129 1.0 5.6840e+00 1.1 2.99e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   4  3  0  0  0  215564
VecCopy             3075 1.0 7.7475e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0       0
VecSet              1052 1.0 1.8707e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0       0
VecAXPY             2049 1.0 1.4300e+00 1.1 1.34e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  1  0  0  0  384644
VecMAXPY            9129 1.0 5.0541e+00 1.1 2.92e+09 1.0 0.0e+00 0.0e+00 0.0e+00  1  6  0  0  0   4 28  0  0  0 2364690
VecAssemblyBegin    2050 1.0 1.1886e+01 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 4.1e+03  1  0  0  0  1   6  0  0  0  4       0
VecAssemblyEnd      2050 1.0 5.8412e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0       0
VecScatterBegin     9133 1.0 1.5021e+00 1.3 0.00e+00 0.0 2.2e+08 8.2e+03 0.0e+00  0  0  4 16  0   1  0 100 100  0     0
VecScatterEnd       9133 1.0 7.7536e+00 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   5  0  0  0  0       0
VecNormalize        9129 1.0 8.4006e+00 1.0 8.97e+08 1.0 0.0e+00 0.0e+00 1.8e+04  1  2  0  0  3   7  9  0  0 19  437564
MatMult             9128 1.0 2.9188e+01 1.2 3.89e+09 1.0 2.2e+08 8.2e+03 0.0e+00  4  8  4 16  0  22 38 100 100  0 545655
MatMultTranspose       5 1.0 1.7991e-03 1.5 7.49e+04 1.0 0.0e+00 0.0e+00 5.0e+00  0  0  0  0  0   0  0  0  0  0  170515
MatAssemblyBegin      57 1.0 1.2566e-01 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 4.8e+01  0  0  0  0  0   0  0  0  0  0       0
MatAssemblyEnd        57 1.0 1.4516e-01 1.0 0.00e+00 0.0 2.9e+05 4.6e+02 2.5e+02  0  0  0  0  0   0  0  0  0  0       0
MatZeroEntries         5 1.0 2.6226e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0       0
MatView                9 1.3 7.6559e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.4e+01  0  0  0  0  0   0  0  0  0  0       0
MatPtAP                5 1.0 1.2340e+00 1.0 1.16e+06 1.0 6.1e+05 9.3e+02 2.1e+02  0  0  0  0  0   1  0  0  0  0    3838
MatPtAPSymbolic        5 1.0 5.8230e-01 1.0 0.00e+00 0.0 3.7e+05 1.5e+03 7.5e+01  0  0  0  0  0   0  0  0  0  0       0
MatPtAPNumeric         5 1.0 6.6996e-01 1.0 1.16e+06 1.0 2.5e+05 1.4e+02 1.4e+02  0  0  0  0  0   1  0  0  0  0    7069
MatGetLocalMat         5 1.0 7.8831e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0       0
MatGetBrAoCol          5 1.0 2.8542e-02 1.3 0.00e+00 0.0 3.7e+05 1.5e+03 0.0e+00  0  0  0  0  0   0  0  0  0  0       0
DMCoarsen              5 1.0 1.0339e-01 1.0 0.00e+00 0.0 2.5e+05 1.4e+02 2.6e+02  0  0  0  0  0   0  0  0  0  0       0
DMCreateInterpolation  5 1.0 1.4532e-01 1.0 5.62e+05 1.0 0.0e+00 0.0e+00 2.4e+02  0  0  0  0  0   0  0  0  0  0   15832
KSPGMRESOrthog      8104 1.0 2.3121e+01 1.2 4.77e+09 1.0 0.0e+00 0.0e+00 5.3e+04  3  9  0  0  8  16 46  0  0 54  845630
KSPSetUp            1031 1.0 6.0117e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.5e+01  0  0  0  0  0   0  0  0  0  0       0
Warning -- total time of event greater than time of entire stage -- something is wrong with the timer
KSPSolve            1025 1.0 6.7814e+02 1.0 5.10e+10 1.0 5.4e+09 2.2e+03 6.8e+05 92 100 100 100 99  535 495 2390 633 707 307803
PCSetUp                1 1.0 1.5329e+00 1.0 1.72e+06 1.0 8.6e+05 7.0e+02 8.3e+02  0  0  0  0  0   1  0  0  0  1    4590
Warning -- total time of event greater than time of entire stage -- something is wrong with the timer
PCApply             9129 1.0 6.1273e+02 1.0 4.07e+10 1.0 5.2e+09 1.9e+03 5.9e+05 83 80 96 84 86  483 395 2290 533 614 271849
MGSetup Level 0        1 1.0 3.3870e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 1.1e+01  0  0  0  0  0   0  0  0  0  0       0
MGSetup Level 1        1 1.0 3.8071e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.1e+01  0  0  0  0  0   0  0  0  0  0       0
MGSetup Level 2        1 1.0 6.4771e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.1e+01  0  0  0  0  0   0  0  0  0  0       0
MGSetup Level 3        1 1.0 1.8530e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.1e+01  0  0  0  0  0   0  0  0  0  0       0
MGSetup Level 4        1 1.0 2.0361e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.1e+01  0  0  0  0  0   0  0  0  0  0       0
MGSetup Level 5        1 1.0 3.2160e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.1e+01  0  0  0  0  0   0  0  0  0  0       0

--- Event Stage 1: MG Apply

VecTDot            54774 1.0 2.5064e+01 1.0 5.48e+04 1.0 0.0e+00 0.0e+00 1.1e+05  3  0  0  0 16   4  0  0  0 18       9
VecScale          109548 1.0 3.6226e+01 1.1 5.59e+07 1.0 0.0e+00 0.0e+00 0.0e+00  5  0  0  0  0   6  0  0  0  0    6317
VecCopy            27387 1.0 2.1344e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0       0
VecSet            182580 1.0 3.2494e+00 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0       0
VecAXPY            91290 1.0 3.9962e+01 1.1 2.39e+09 1.0 0.0e+00 0.0e+00 0.0e+00  5  5  0  0  0   6  6  0  0  0  245298
VecAYPX            91290 1.0 8.4470e+00 1.3 1.24e+09 1.0 0.0e+00 0.0e+00 0.0e+00  1  2  0  0  0   1  3  0  0  0  600951
VecScatterBegin   301257 1.0 1.1558e+01 1.2 0.00e+00 0.0 5.2e+09 1.9e+03 0.0e+00  1  0 96 84  0   2  0 100 100  0     0
VecScatterEnd     301257 1.0 4.0061e+01 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  5  0  0  0  0   5  0  0  0  0       0
MatMult           100419 1.0 1.2176e+02 1.2 1.61e+10 1.0 2.5e+09 3.2e+03 0.0e+00 15 32 46 68  0  19 40 48 81  0  541988
MatMultAdd         45645 1.0 7.4836e+00 1.1 6.84e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  2  0  0  0  374224
MatMultTranspose   45645 1.0 1.4949e+01 1.1 6.84e+08 1.0 0.0e+00 0.0e+00 4.6e+04  2  1  0  0  7   2  2  0  0  8  187337
MatSolve           27387 1.0 3.1890e-01 1.1 2.74e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     352
MatSOR            109548 1.0 2.5379e+02 1.1 1.96e+10 1.0 2.7e+09 6.8e+02 0.0e+00 33 38 50 16  0  40 48 52 19  0  315612
MatCholFctrNum         1 1.0 5.5790e-05 2.7 1.00e+00 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0      73
MatICCFactorSym        1 1.0 1.2994e-04 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0       0
MatResidual        45645 1.0 5.2969e+01 1.0 4.79e+09 1.0 1.1e+09 2.2e+03 4.6e+04  7  9 21 21  7   8 12 22 25  8  370097
MatGetRowIJ            1 1.0 3.6955e-05 7.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0       0
MatGetOrdering         1 1.0 2.1100e-04 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0       0
KSPSetUp               1 1.0 2.4080e-05 6.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0       0
KSPSolve          100419 1.0 5.0819e+02 1.0 3.45e+10 1.0 4.0e+09 1.8e+03 4.5e+05 69 68 75 63 65  83 85 78 75 75  278176
PCSetUp                1 1.0 6.5303e-04 1.3 1.00e+00 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0       6
PCSetUpOnBlocks     9129 1.0 8.5886e-02 1.6 1.00e+00 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0       0
PCApply            63903 1.0 1.6109e+02 1.1 1.63e+10 1.0 0.0e+00 0.0e+00 0.0e+00 21 32  0  0  0  25 40  0  0  0  414535
MGSmooth Level 0    9129 1.0 6.9213e+01 1.0 5.84e+05 1.0 6.7e+08 8.0e+00 2.7e+05  9  0 12  0 40  11  0 13  0 46      35
MGSmooth Level 1   18258 1.0 2.2853e+01 1.0 5.19e+06 1.0 6.7e+08 3.2e+01 0.0e+00  3  0 12  0  0   4  0 13  0  0     929
MGResid Level 1     9129 1.0 3.4600e+00 1.1 1.02e+06 1.0 2.2e+08 3.2e+01 9.1e+03  0  0  4  0  1   1  0  4  0  2    1210
MGInterp Level 1   18258 1.0 2.7285e+00 1.0 2.92e+05 1.0 0.0e+00 0.0e+00 9.1e+03  0  0  0  0  1   0  0  0  0  2     439
MGSmooth Level 2   18258 1.0 2.5405e+01 1.0 4.32e+07 1.0 6.7e+08 1.3e+02 0.0e+00  3  0 12  1  0   4  0 13  1  0    6971
MGResid Level 2     9129 1.0 3.1630e+00 1.1 8.18e+06 1.0 2.2e+08 1.3e+02 9.1e+03  0  0  4  0  1   1  0  4  0  2   10592
MGInterp Level 2   18258 1.0 2.8550e+00 1.0 2.34e+06 1.0 0.0e+00 0.0e+00 9.1e+03  0  0  0  0  1   0  0  0  0  2    3353
MGSmooth Level 3   18258 1.0 2.7593e+01 1.0 3.53e+08 1.0 6.7e+08 5.1e+02 0.0e+00  4  1 12  3  0   4  1 13  4  0   52384
MGResid Level 3     9129 1.0 4.0408e+00 1.1 6.54e+07 1.0 2.2e+08 5.1e+02 9.1e+03  1  0  4  1  1   1  0  4  1  2   66331
MGInterp Level 3   18258 1.0 2.9908e+00 1.0 1.87e+07 1.0 0.0e+00 0.0e+00 9.1e+03  0  0  0  0  1   0  0  0  0  2   25605
MGSmooth Level 4   18258 1.0 4.7024e+01 1.1 2.85e+09 1.0 6.7e+08 2.0e+03 0.0e+00  6  6 12 12  0   7  7 13 14  0  248350
MGResid Level 4     9129 1.0 7.0269e+00 1.2 5.23e+08 1.0 2.2e+08 2.0e+03 9.1e+03  1  1  4  4  1   1  1  4  5  2  305147
MGInterp Level 4   18258 1.0 4.2085e+00 1.1 1.50e+08 1.0 0.0e+00 0.0e+00 9.1e+03  1  0  0  0  1   1  0  0  0  2  145571
MGSmooth Level 5   18258 1.0 3.3786e+02 1.0 3.13e+10 1.0 6.7e+08 8.2e+03 1.7e+05 46 61 12 47 25  55 77 13 56 29  378974
MGResid Level 5     9129 1.0 3.6265e+01 1.0 4.19e+09 1.0 2.2e+08 8.2e+03 9.1e+03  5  8  4 16  1   6 10  4 19  2  473016
MGInterp Level 5   18258 1.0 1.1675e+01 1.1 1.20e+09 1.0 0.0e+00 0.0e+00 9.1e+03  1  2  0  0  1   2  3  0  0  2  419779
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type                   Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

                     Viewer       3              2         1664     0.
                     Vector     122             40       963968     0.
             Vector Scatter      17              0            0     0.
                     Matrix      68              0            0     0.
          Matrix Null Space       2              0            0     0.
           Distributed Mesh       6              0            0     0.
Star Forest Bipartite Graph      12              0            0     0.
            Discrete System       6              0            0     0.
                  Index Set      34             34       241700     0.
          IS L to G Mapping       6              0            0     0.
              Krylov Solver       9              1         1160     0.
            DMKSP interface       5              0            0     0.
             Preconditioner       9              1         1000     0.

--- Event Stage 1: MG Apply

                     Vector   73032          73032    803936256     0.
                     Matrix       1              0            0     0.
                  Index Set       3              1          776     0.
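The headline figures in the performance summary can be cross-checked against each other. A minimal sketch (the constants below are transcribed from the summary table above, not re-measured; small rounding differences are expected):

```python
# Sanity-check the aggregate numbers reported in the PETSc -log_view summary.
nprocs = 4096
max_time = 7.394e+02        # Time (sec), Max column
flops_per_rank = 5.096e+10  # Flops, Max column (ratio 1.00000, so ranks are balanced)
mg_apply_time = 6.1247e+02  # MG Apply stage time

total_flops = flops_per_rank * nprocs          # log reports 2.087e+14
aggregate_rate = total_flops / max_time        # log reports 2.823e+11 flop/s
mg_fraction = mg_apply_time / max_time         # log reports 82.8%

print(f"total flops:    {total_flops:.3e}")
print(f"aggregate rate: {aggregate_rate:.3e} flop/s")
print(f"MG Apply share: {100 * mg_fraction:.1f}% of wall time")
```

The check confirms the run spends the large majority of its time inside the multigrid preconditioner application, which is where the per-level MGSmooth/MGResid/MGInterp breakdown above becomes relevant.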
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 5.18322e-05
Average time for zero size MPI_Send(): 4.31738e-06
#PETSc Option Table entries:
-enable_postproc
-fp_nout 8
-ksp_converged_reason
-ksp_initial_guess_nonzero yes
-ksp_norm_type unpreconditioned
-ksp_type gmres
-log_view
-lx 1.0
-ly 1.0
-lz 1.0
-mg_coarse_ksp_constant_null_space
-mg_coarse_ksp_max_it 3
-mg_coarse_ksp_type cg
-mg_coarse_pc_type bjacobi
-mg_coarse_sub_pc_type icc
-mg_levels_ksp_type richardson
-mg_levels_pc_type sor
-nout 16
-ntot_pnt 128000
-nx 512
-ny 512
-nz 512
-options_left
-pc_mg_cycle_type v
-pc_mg_galerkin
-pc_mg_levels 6
-pc_mg_log
-pc_type mg
-ppe_max_iter 40
-px 16
-py 16
-pz 16
-Re_lambda 75
-tend 1.0001
-write_data 1
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --known-level1-dcache-size=16384 --known-level1-dcache-linesize=64 --known-level1-dcache-assoc=4 --known-sizeof-char=1 --known-sizeof-void-p=8 --known-sizeof-short=2 --known-sizeof-int=4 --known-sizeof-long=8 --known-sizeof-long-long=8 --known-sizeof-float=4 --known-sizeof-double=8 --known-sizeof-size_t=8 --known-bits-per-byte=8 --known-memcmp-ok=1 --known-sizeof-MPI_Comm=4 --known-sizeof-MPI_Fint=4 --known-mpi-long-double=1 --known-mpi-int64_t=1 --known-mpi-c-double-complex=1 --known-sdot-returns-double=0 --known-snrm2-returns-double=0 --known-has-attribute-aligned=1 --prefix=/u/sciteam/mrosso/libs/petsc/gnu/4.9/opt --with-batch="1 " --known-mpi-shared="0 " --known-mpi-shared-libraries=0 --known-memcmp-ok --with-blas-lapack-lib=/opt/acml/5.3.1/gfortran64/lib/libacml.a --COPTFLAGS="-march=bdver1 -O3 -ffast-math -fPIC" --FOPTFLAGS="-march=bdver1 -O3 -ffast-math -fPIC" --CXXOPTFLAGS="-march=bdver1 -O3 -ffast-math -fPIC" --with-x="0 " --with-debugging=O --with-clib-autodetect="0 " --with-cxxlib-autodetect="0 " --with-fortranlib-autodetect="0 " --with-shared-libraries="0 " --with-mpi-compilers="1 " --with-cc="cc " --with-cxx="CC " --with-fc="ftn " --download-hypre=1 --download-blacs="1 " --download-scalapack="1 " --download-superlu_dist="1 " --download-metis="1 " --download-parmetis="1 " PETSC_ARCH=gnu-opt-32idx
-----------------------------------------
Libraries compiled on Fri Mar 4 14:44:30 2016 on h2ologin1
Machine characteristics: Linux-3.0.101-0.46-default-x86_64-with-SuSE-11-x86_64
Using PETSc directory: /mnt/a/u/sciteam/mrosso/libs/petsc/repo
Using PETSc arch: gnu-opt-32idx
-----------------------------------------
Using C compiler: cc -march=bdver1 -O3 -ffast-math -fPIC ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: ftn -march=bdver1 -O3 -ffast-math -fPIC ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/mnt/a/u/sciteam/mrosso/libs/petsc/repo/gnu-opt-32idx/include -I/mnt/a/u/sciteam/mrosso/libs/petsc/repo/include -I/mnt/a/u/sciteam/mrosso/libs/petsc/repo/include -I/mnt/a/u/sciteam/mrosso/libs/petsc/repo/gnu-opt-32idx/include -I/u/sciteam/mrosso/libs/petsc/gnu/4.9/opt/include
-----------------------------------------
Using C linker: cc
Using Fortran linker: ftn
Using libraries: -Wl,-rpath,/mnt/a/u/sciteam/mrosso/libs/petsc/repo/gnu-opt-32idx/lib -L/mnt/a/u/sciteam/mrosso/libs/petsc/repo/gnu-opt-32idx/lib -lpetsc -Wl,-rpath,/u/sciteam/mrosso/libs/petsc/gnu/4.9/opt/lib -L/u/sciteam/mrosso/libs/petsc/gnu/4.9/opt/lib -lsuperlu_dist -lparmetis -lmetis -lHYPRE -lscalapack -Wl,-rpath,/opt/acml/5.3.1/gfortran64/lib -L/opt/acml/5.3.1/gfortran64/lib -lacml -lssl -lcrypto -ldl
-----------------------------------------
There is one unused database option. It is:
Option left: name:-enable_postproc (no value)
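For reference, the option table above can be turned back into a launch command. This is a hypothetical sketch: the `aprun -n 4096` launcher line is an assumption inferred from the Cray XE build flags and the 4096-processor count (16 x 16 x 16 from `-px -py -pz`); the executable path and options are copied verbatim from the log.

```shell
# Hypothetical relaunch of the profiled run on a Cray XE system (assumed launcher: aprun).
aprun -n 4096 /u/sciteam/mrosso/main-repo/bin/homo_iso_turb.exe \
    -nx 512 -ny 512 -nz 512 -px 16 -py 16 -pz 16 \
    -lx 1.0 -ly 1.0 -lz 1.0 -Re_lambda 75 -tend 1.0001 \
    -ksp_type gmres -ksp_norm_type unpreconditioned \
    -ksp_initial_guess_nonzero yes -ksp_converged_reason \
    -pc_type mg -pc_mg_levels 6 -pc_mg_cycle_type v -pc_mg_galerkin -pc_mg_log \
    -mg_levels_ksp_type richardson -mg_levels_pc_type sor \
    -mg_coarse_ksp_type cg -mg_coarse_ksp_max_it 3 \
    -mg_coarse_ksp_constant_null_space \
    -mg_coarse_pc_type bjacobi -mg_coarse_sub_pc_type icc \
    -ppe_max_iter 40 -nout 16 -fp_nout 8 -ntot_pnt 128000 \
    -enable_postproc -write_data 1 -options_left -log_view
```

Note that `-options_left` is what produced the trailing "unused database option" report: `-enable_postproc` was never queried by the code during this run.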