    F I N I T E   E L E M E N T   A N A L Y S I S   P R O G R A M

           FEAP (C) Regents of the University of California
                         All Rights Reserved.
                      VERSION: Release 8.4.1d
                         DATE: 01 January 2014

         Files are set as:   Status  Filename

           Input   (read ) : Exists  Icube05_0001
           Output  (write) : Exists  Ocube05_0001
           Restart (read ) : New     Rcube05_0001
           Restart (write) : New     Rcube05_0001
           Plots   (write) : New     Pcube05_0001

         Caution, existing write files will be overwritten.

         Are filenames correct?( y or n; r = redefine all, s = stop) :

          R U N N I N G   F E A P   P R O B L E M   N O W

           --> Please report errors by e-mail to: feap@ce.berkeley.edu

  0 KSP Residual norm 9.439652885243e+00
  1 KSP Residual norm 3.403063701552e+00
  2 KSP Residual norm 1.525697956373e+00
  3 KSP Residual norm 8.144627985555e-01
  4 KSP Residual norm 4.758378188059e-01
  5 KSP Residual norm 3.236515587469e-01
  6 KSP Residual norm 2.418029669033e-01
  7 KSP Residual norm 1.827549847375e-01
  8 KSP Residual norm 1.398550438409e-01
  9 KSP Residual norm 1.030953264508e-01
 10 KSP Residual norm 7.958349903481e-02
 11 KSP Residual norm 6.681382994268e-02
 12 KSP Residual norm 5.378646980721e-02
 13 KSP Residual norm 4.220237149543e-02
 14 KSP Residual norm 3.303837304177e-02
 15 KSP Residual norm 2.557132558799e-02
 16 KSP Residual norm 2.093525127922e-02
 17 KSP Residual norm 1.861266291328e-02
 18 KSP Residual norm 1.634213327176e-02
 19 KSP Residual norm 1.437070315206e-02
 20 KSP Residual norm 1.178709971799e-02
 21 KSP Residual norm 9.474395684521e-03
 22 KSP Residual norm 8.369069535618e-03
 23 KSP Residual norm 7.620223784944e-03
 24 KSP Residual norm 6.389080851037e-03
 25 KSP Residual norm 5.276196499513e-03
 26 KSP Residual norm 4.335046465707e-03
 27 KSP Residual norm 3.750611304400e-03
 28 KSP Residual norm 3.259185417950e-03
 29 KSP Residual norm 2.808249246864e-03
 30 KSP Residual norm 2.457833154269e-03
 31 KSP Residual norm 2.186664726288e-03
 32 KSP Residual norm 1.937136024635e-03
 33 KSP Residual norm 1.721976232711e-03
 34 KSP Residual norm 1.464158213424e-03
 35 KSP Residual norm 1.298585967033e-03
 36 KSP Residual norm 1.169575240801e-03
 37 KSP Residual norm 1.026696505198e-03
 38 KSP Residual norm 9.079709939875e-04
 39 KSP Residual norm 8.166319180322e-04
 40 KSP Residual norm 7.470600885896e-04
 41 KSP Residual norm 6.892567054179e-04
 42 KSP Residual norm 6.484617749679e-04
 43 KSP Residual norm 6.230476455450e-04
 44 KSP Residual norm 5.738402788145e-04
 45 KSP Residual norm 5.310135793286e-04
 46 KSP Residual norm 5.154535519258e-04
 47 KSP Residual norm 4.968032977484e-04
 48 KSP Residual norm 4.815325364403e-04
 49 KSP Residual norm 4.941996597984e-04
 50 KSP Residual norm 5.108562130931e-04
 51 KSP Residual norm 4.967589544601e-04
 52 KSP Residual norm 4.404771417974e-04
 53 KSP Residual norm 3.687217055358e-04
 54 KSP Residual norm 3.291983649119e-04
 55 KSP Residual norm 3.160232303069e-04
 56 KSP Residual norm 3.180990021471e-04
 57 KSP Residual norm 3.123014542585e-04
 58 KSP Residual norm 2.762730799751e-04
 59 KSP Residual norm 2.533996612888e-04
 60 KSP Residual norm 2.552796306358e-04
 61 KSP Residual norm 2.446220670313e-04
 62 KSP Residual norm 2.374604473303e-04
 63 KSP Residual norm 2.487770499691e-04
 64 KSP Residual norm 2.640767533535e-04
 65 KSP Residual norm 2.685518541083e-04
 66 KSP Residual norm 2.532234554451e-04
 67 KSP Residual norm 2.402945288708e-04
 68 KSP Residual norm 2.496944822345e-04
 69 KSP Residual norm 2.457193897427e-04
 70 KSP Residual norm 2.396016294170e-04
 71 KSP Residual norm 2.329005574671e-04
 72 KSP Residual norm 2.198894936707e-04
 73 KSP Residual norm 2.076044633981e-04
 74 KSP Residual norm 1.894470548366e-04
 75 KSP Residual norm 1.718685881501e-04
 76 KSP Residual norm 1.649582839248e-04
 77 KSP Residual norm 1.548846805516e-04
 78 KSP Residual norm 1.491415006284e-04
 79 KSP Residual norm 1.432842248388e-04
 80 KSP Residual norm 1.278723801757e-04
 81 KSP Residual norm 1.210019730358e-04
 82 KSP Residual norm 1.193546400751e-04
 83 KSP Residual norm 1.070800581329e-04
 84 KSP Residual norm 9.891584858107e-05
 85 KSP Residual norm 9.164811979338e-05
 86 KSP Residual norm 8.219453597620e-05
 87 KSP Residual norm 8.790336149611e-05
 88 KSP Residual norm 8.972092711178e-05
 89 KSP Residual norm 7.956662183674e-05
 90 KSP Residual norm 7.031519599408e-05
 91 KSP Residual norm 6.867457523419e-05
 92 KSP Residual norm 7.057082052376e-05
 93 KSP Residual norm 7.656308003079e-05
 94 KSP Residual norm 8.412920366041e-05
 95 KSP Residual norm 8.728112735952e-05
 96 KSP Residual norm 8.679380006468e-05
 97 KSP Residual norm 8.686623853160e-05
 98 KSP Residual norm 8.564689141295e-05
 99 KSP Residual norm 8.488699583538e-05
100 KSP Residual norm 8.784253263422e-05
101 KSP Residual norm 9.318290997923e-05
102 KSP Residual norm 9.859438663623e-05
103 KSP Residual norm 9.557682227753e-05
104 KSP Residual norm 8.740767899822e-05
105 KSP Residual norm 8.753169141707e-05
106 KSP Residual norm 9.012906867091e-05
107 KSP Residual norm 8.778052938333e-05
108 KSP Residual norm 8.048897278494e-05
109 KSP Residual norm 7.423277897729e-05
110 KSP Residual norm 6.949183991542e-05
111 KSP Residual norm 6.400520795492e-05
112 KSP Residual norm 5.910963869778e-05
113 KSP Residual norm 5.454040394158e-05
114 KSP Residual norm 5.307561515375e-05
115 KSP Residual norm 5.108361024731e-05
116 KSP Residual norm 4.335409816047e-05
117 KSP Residual norm 3.532285015095e-05
118 KSP Residual norm 2.907740987193e-05
119 KSP Residual norm 2.525730414691e-05
120 KSP Residual norm 2.282597799944e-05
121 KSP Residual norm 2.017666182896e-05
122 KSP Residual norm 1.832657506324e-05
123 KSP Residual norm 1.590360811289e-05
124 KSP Residual norm 1.367463252073e-05
125 KSP Residual norm 1.228239405851e-05
126 KSP Residual norm 1.103928815708e-05
127 KSP Residual norm 1.043097194230e-05
128 KSP Residual norm 1.063297402543e-05
129 KSP Residual norm 9.548264233107e-06
130 KSP Residual norm 7.530007938858e-06
131 KSP Residual norm 6.635748529850e-06
132 KSP Residual norm 6.042896503832e-06
133 KSP Residual norm 5.317042895626e-06
134 KSP Residual norm 4.576206689725e-06
135 KSP Residual norm 3.835388568043e-06
136 KSP Residual norm 3.476327976895e-06
137 KSP Residual norm 3.089941536749e-06
138 KSP Residual norm 2.821925579519e-06
139 KSP Residual norm 2.401777674792e-06
140 KSP Residual norm 2.000950008887e-06
141 KSP Residual norm 1.899805926138e-06
142 KSP Residual norm 1.729710561334e-06
143 KSP Residual norm 1.563152467684e-06
144 KSP Residual norm 1.400868931093e-06
145 KSP Residual norm 1.182247052515e-06
146 KSP Residual norm 9.562555781434e-07
147 KSP Residual norm 8.387168604331e-07
148 KSP Residual norm 7.574911286983e-07
149 KSP Residual norm 6.644732390374e-07
150 KSP Residual norm 6.048672599528e-07
151 KSP Residual norm 5.742627661824e-07
152 KSP Residual norm 5.618826890585e-07
153 KSP Residual norm 4.825028996454e-07
154 KSP Residual norm 3.890683833800e-07
155 KSP Residual norm 3.356201907013e-07
156 KSP Residual norm 3.041841083158e-07
157 KSP Residual norm 2.829106510821e-07
158 KSP Residual norm 2.561543398841e-07
159 KSP Residual norm 2.141807062981e-07
160 KSP Residual norm 1.803934120720e-07
161 KSP Residual norm 1.518454814447e-07
162 KSP Residual norm 1.325124208721e-07
163 KSP Residual norm 1.289716447714e-07
164 KSP Residual norm 1.125336132154e-07
165 KSP Residual norm 9.405816292251e-08

KSP Object: 24 MPI processes
  type: cg
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-08, absolute=1e-16, divergence=1e+16
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 24 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=4 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object: (mg_coarse_) 24 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object: (mg_coarse_) 24 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 24
        Local solve is same for all blocks, in the following KSP and PC objects:
      KSP Object: (mg_coarse_sub_) 1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_sub_) 1 MPI processes
        type: lu
          LU: out-of-place factorization
            tolerance for zero pivot 2.22045e-14
            using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
            matrix ordering: nd
            factor fill ratio given 5, needed 2.04016
              Factored matrix follows:
                Matrix Object: 1 MPI processes
                  type: seqaij
                  rows=450, cols=450, bs=6
                  package used to perform factorization: petsc
                  total: nonzeros=111564, allocated nonzeros=111564
                  total number of mallocs used during MatSetValues calls =0
                    using I-node routines: found 140 nodes, limit used is 5
        linear system matrix = precond matrix:
        Matrix Object: 1 MPI processes
          type: seqaij
          rows=450, cols=450, bs=6
          total: nonzeros=54684, allocated nonzeros=54684
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 150 nodes, limit used is 5
      linear system matrix = precond matrix:
      Matrix Object: 24 MPI processes
        type: mpiaij
        rows=450, cols=450, bs=6
        total: nonzeros=54684, allocated nonzeros=54684
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 150 nodes, limit used is 5
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object: (mg_levels_1_) 24 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_1_) 24 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object: 24 MPI processes
        type: mpiaij
        rows=4602, cols=4602, bs=6
        total: nonzeros=291924, allocated nonzeros=291924
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 86 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object: (mg_levels_2_) 24 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_2_) 24 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object: 24 MPI processes
        type: mpiaij
        rows=145944, cols=145944, bs=6
        total: nonzeros=19588968, allocated nonzeros=19588968
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 2215 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object: (mg_levels_3_) 24 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object: (mg_levels_3_) 24 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object: 24 MPI processes
        type: mpiaij
        rows=2080983, cols=2080983, bs=3
        total: nonzeros=132617061, allocated nonzeros=132617061
        total number of mallocs used during MatSetValues calls =0
          using I-node (on process 0) routines: found 28852 nodes, limit used is 5
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object: 24 MPI processes
    type: mpiaij
    rows=2080983, cols=2080983, bs=3
    total: nonzeros=132617061, allocated nonzeros=132617061
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 28852 nodes, limit used is 5

************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/usr2/tgross/parFEAP/compile_test/parFEAP84_mod/FEAP84/ver84/parfeap/feap on a linux-gnu-c named sandy.ilsb.tuwien.ac.at with 24 processors, by tgross Tue Jan 28 20:43:30 2014
Using Petsc Release Version 3.4.3, Oct, 15, 2013

                         Max       Max/Min        Avg      Total
Time (sec):           7.253e+01      1.00004   7.253e+01
Objects:              1.446e+03      1.00139   1.444e+03
Flops:                2.199e+10      1.13802   2.079e+10  4.989e+11
Flops/sec:            3.032e+08      1.13799   2.866e+08  6.879e+09
MPI Messages:         3.182e+04      4.96327   1.739e+04  4.174e+05
MPI Message Lengths:  3.596e+07      4.36215   1.309e+03  5.463e+08
MPI Reductions:       1.699e+03      1.00118

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 7.2532e+01 100.0%  4.9894e+11 100.0%  4.174e+05 100.0%  1.309e+03      100.0%  1.696e+03  99.8%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %f - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %f %M %L %R  %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult              693 1.0 1.2462e+01 1.5 4.28e+09 1.1 9.2e+04 1.8e+03 0.0e+00 14 19 22 30  0  14 19 22 30  0  7760
MatMultAdd           498 1.0 1.6722e+00 1.6 6.61e+08 1.1 4.7e+04 3.8e+02 0.0e+00  2  3 11  3  0   2  3 11  3  0  9145
MatMultTranspose     498 1.0 6.3413e+00 4.3 6.61e+08 1.1 4.7e+04 3.8e+02 0.0e+00  4  3 11  3  0   4  3 11  3  0  2412
MatSolve             166 0.0 2.7465e-02 0.0 3.70e+07 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1346
MatSOR               996 1.0 4.5279e+01 1.2 1.55e+10 1.1 2.0e+05 1.4e+03 0.0e+00 57 71 48 52  0  57 71 48 52  0  7786
MatLUFactorSym         1 1.0 3.1369e-03453.7 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         1 1.0 9.8681e-033183.8 1.77e+07 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1792
MatScale               9 1.0 4.1182e-02 1.4 5.34e+06 1.1 4.0e+02 4.0e+02 0.0e+00  0  0  0  0  0   0  0  0  0  0  2979
MatAssemblyBegin      50 1.0 9.3928e-01 9.8 0.00e+00 0.0 7.4e+02 2.7e+03 5.4e+01  1  0  0  0  3   1  0  0  0  3     0
MatAssemblyEnd        50 1.0 1.6156e-01 1.1 0.00e+00 0.0 5.3e+03 1.4e+02 1.7e+02  0  0  1  0 10   0  0  1  0 10     0
MatGetRow         344796 1.0 6.6728e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            1 0.0 5.5075e-05 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 0.0 1.6499e-04 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.7e-01  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             3 1.0 8.7690e-03 1.2 0.00e+00 0.0 2.5e+03 6.0e+02 6.3e+01  0  0  1  0  4   0  0  1  0  4     0
MatZeroEntries         1 1.0 6.0659e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView                7 1.4 4.6086e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY                3 1.0 2.0056e-02 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatMult             3 1.0 6.1421e-01 1.0 7.97e+07 1.1 2.4e+03 3.7e+03 7.2e+01  1  0  1  2  4   1  0  1  2  4  2957
MatMatMultSym          3 1.0 4.0360e-01 1.0 0.00e+00 0.0 2.0e+03 2.7e+03 6.6e+01  1  0  0  1  4   1  0  0  1  4     0
MatMatMultNum          3 1.0 2.1095e-01 1.0 7.97e+07 1.1 4.0e+02 8.5e+03 6.0e+00  0  0  0  1  0   0  0  0  1  0  8611
MatPtAP                3 1.0 2.5774e+00 1.0 5.06e+08 1.2 4.0e+03 7.8e+03 7.5e+01  4  2  1  6  4   4  2  1  6  4  4372
MatPtAPSymbolic        3 1.0 1.6182e+00 1.0 0.00e+00 0.0 2.4e+03 9.5e+03 4.5e+01  2  0  1  4  3   2  0  1  4  3     0
MatPtAPNumeric         3 1.0 9.5925e-01 1.0 5.06e+08 1.2 1.6e+03 5.1e+03 3.0e+01  1  2  0  1  2   1  2  0  1  2 11746
MatTrnMatMult          3 1.0 5.4231e-01 1.0 3.21e+07 1.2 2.5e+03 4.1e+03 8.7e+01  1  0  1  2  5   1  0  1  2  5  1304
MatGetLocalMat        15 1.0 9.1004e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+01  0  0  0  0  1   0  0  0  0  1     0
MatGetBrAoCol          9 1.0 2.5301e-02 4.5 0.00e+00 0.0 2.8e+03 9.9e+03 1.2e+01  0  0  1  5  1   0  0  1  5  1     0
MatGetSymTrans         6 1.0 2.3214e-02 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecDot                 1 1.0 5.1212e-04 1.0 1.76e+05 1.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  0   0  0  0  0  0  8127
VecMDot               30 1.0 1.2961e-01 8.9 1.04e+07 1.0 0.0e+00 0.0e+00 3.0e+01  0  0  0  0  2   0  0  0  0  2  1894
VecTDot              330 1.0 1.6917e+00 3.2 5.80e+07 1.0 0.0e+00 0.0e+00 3.3e+02  1  0  0  0 19   1  0  0  0 19   812
VecNorm              199 1.0 5.4512e+00 4.2 3.13e+07 1.0 0.0e+00 0.0e+00 2.0e+02  4  0  0  0 12   4  0  0  0 12   136
VecScale            1527 1.0 7.9088e-03 2.3 3.39e+06 2.2 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  7568
VecCopy                5 1.0 2.5609e-03 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet              2045 1.0 3.0009e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY              333 1.0 1.9860e-01 1.5 5.82e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  6938
VecAYPX              662 1.0 1.0522e-01 1.7 4.46e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 10008
VecMAXPY              33 1.0 2.5293e-02 1.6 1.23e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 11469
VecAssemblyBegin     125 1.0 3.1981e-02 5.4 0.00e+00 0.0 0.0e+00 0.0e+00 3.7e+02  0  0  0  0 22   0  0  0  0 22     0
VecAssemblyEnd       125 1.0 7.4625e-05 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      33 1.0 5.3189e-03 2.6 1.04e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  4615
VecScatterBegin     3310 1.0 1.3461e-01 2.9 0.00e+00 0.0 4.1e+05 1.2e+03 0.0e+00  0  0 97 90  0   0  0 97 90  0     0
VecScatterEnd       3310 1.0 1.7887e+01 5.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 15  0  0  0  0  15  0  0  0  0     0
VecSetRandom           3 1.0 2.0881e-03 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          33 1.0 1.4960e-02 3.0 3.12e+06 1.0 0.0e+00 0.0e+00 3.3e+01  0  0  0  0  2   0  0  0  0  2  4923
KSPGMRESOrthog        30 1.0 1.4513e-01 4.3 2.08e+07 1.0 0.0e+00 0.0e+00 3.0e+01  0  0  0  0  2   0  0  0  0  2  3383
KSPSetUp              10 1.0 8.2066e-03 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 6.5226e+01 1.0 2.20e+10 1.1 4.2e+05 1.3e+03 1.7e+03 90100100100 98  90100100100 98  7649
PCSetUp                2 1.0 5.3663e+00 1.0 7.84e+08 1.2 3.6e+04 1.9e+03 1.2e+03  7  4  9 13 69   7  4  9 13 69  3279
PCSetUpOnBlocks      166 1.0 1.3266e-02113.3 1.77e+07 0.0 0.0e+00 0.0e+00 5.1e+00  0  0  0  0  0   0  0  0  0  0  1333
PCApply              166 1.0 5.2920e+01 1.1 1.91e+10 1.1 3.6e+05 1.1e+03 5.1e+00 70 87 86 75  0  70 87 86 75  0  8198
PCGAMGgraph_AGG        3 1.0 9.4316e-01 1.0 1.36e+06 1.1 2.0e+03 1.6e+02 1.1e+02  1  0  0  0  6   1  0  0  0  6    32
PCGAMGcoarse_AGG       3 1.0 5.6034e-01 1.0 3.21e+07 1.2 7.3e+03 1.9e+03 2.1e+02  1  0  2  2 12   1  0  2  2 12  1262
PCGAMGProl_AGG         3 1.0 1.0778e-01 1.0 0.00e+00 0.0 1.6e+04 6.3e+02 5.2e+02  0  0  4  2 31   0  0  4  2 31     0
PCGAMGPOpt_AGG         3 1.0 1.1700e+00 1.0 2.44e+08 1.1 6.4e+03 2.3e+03 1.7e+02  2  1  2  3 10   2  1  2  3 10  4764
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage

              Matrix    87             87    207310700     0
      Matrix Coarsen     3              3         1908     0
              Vector  1249           1249    277682920     0
      Vector Scatter    22             22        23672     0
           Index Set    60             60        70344     0
       Krylov Solver    10             10       116376     0
      Preconditioner    10             10         9916     0
              Viewer     2              1          736     0
         PetscRandom     3              3         1896     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 1.05858e-05
Average time for zero size MPI_Send(): 1.08282e-06
#PETSc Option Table entries:
-ksp_monitor
-ksp_type cg
-ksp_view
-log_summary
-mg_levels_ksp_type richardson
-mg_levels_pc_type sor
-options_left
-pc_gamg_agg_nsmooths 1
-pc_gamg_type agg
-pc_type gamg
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Tue Jan 28 19:33:43 2014
Configure options: --with-mpi-dir=/usr/local/openmpi/1.5.4/gcc/x86_64 --download-parmetis --download-superlu_dist --download-hypre --download-metis --download-cmake --download-spooles --download-f-blas-lapack=1 --with-debugging=0 --with-shared-libraries=0 COPTFLAGS=-O3 FOPTFLAGS=-O3
-----------------------------------------
Libraries compiled on Tue Jan 28 19:33:43 2014 on ilfb35.ilsb.tuwien.ac.at
Machine characteristics: Linux-2.6.32-358.2.1.el6.x86_64-x86_64-with-redhat-6.4-Carbon
Using PETSc directory: /usr2/tgross/parFEAP/compile_test/parFEAP84_mod/petsc-3.4.3
Using PETSc arch: linux-gnu-c
-----------------------------------------
Using C compiler: /usr/local/openmpi/1.5.4/gcc/x86_64/bin/mpicc -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O3 ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /usr/local/openmpi/1.5.4/gcc/x86_64/bin/mpif90 -Wall -Wno-unused-variable -O3 ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/usr2/tgross/parFEAP/compile_test/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/include -I/usr2/tgross/parFEAP/compile_test/parFEAP84_mod/petsc-3.4.3/include -I/usr2/tgross/parFEAP/compile_test/parFEAP84_mod/petsc-3.4.3/include -I/usr2/tgross/parFEAP/compile_test/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/include -I/usr/local/openmpi/1.5.4/gcc/x86_64/include -I/usr/local/include
-----------------------------------------
Using C linker: /usr/local/openmpi/1.5.4/gcc/x86_64/bin/mpicc
Using Fortran linker: /usr/local/openmpi/1.5.4/gcc/x86_64/bin/mpif90
Using libraries: -Wl,-rpath,/usr2/tgross/parFEAP/compile_test/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/lib -L/usr2/tgross/parFEAP/compile_test/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/lib -lpetsc -Wl,-rpath,/usr2/tgross/parFEAP/compile_test/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/lib -L/usr2/tgross/parFEAP/compile_test/parFEAP84_mod/petsc-3.4.3/linux-gnu-c/lib -lHYPRE -L/usr/local/lib64 -L/usr/local/lib64/openmpi -L/usr/local/openmpi/1.5.4/gcc/x86_64/lib64 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -lmpi_cxx -lstdc++ -lsuperlu_dist_3.3 -lflapack -lfblas -lX11 -lparmetis -lmetis -lpthread -lmpi_f90 -lmpi_f77 -lgfortran -lm -lm -lm -lm -lmpi_cxx -lstdc++ -lmpi_cxx -lstdc++ -ldl -lmpi -lnsl -lutil -lgcc_s -lpthread -ldl
-----------------------------------------
#PETSc Option Table entries:
-ksp_monitor
-ksp_type cg
-ksp_view
-log_summary
-mg_levels_ksp_type richardson
-mg_levels_pc_type sor
-options_left
-pc_gamg_agg_nsmooths 1
-pc_gamg_type agg
-pc_type gamg
#End of PETSc Option Table entries
There are no unused options.
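Aside: the option table above fully determines the solver used in this run (CG with a smoothed-aggregation GAMG preconditioner). For readers who want to reproduce the same setup outside of FEAP, the sketch below shows a programmatic equivalent against the PETSc 3.4 API logged above. This is an illustration only, not FEAP's code: the function name solve_with_cg_gamg and the Mat/Vec arguments are hypothetical (parFEAP assembles the system and drives the solve internally), and the -mg_levels_* smoother choices are still taken from the command line via KSPSetFromOptions().

    #include <petscksp.h>

    /* Hedged sketch: programmatic equivalent of -ksp_type cg -pc_type gamg
     * with the tolerances reported by -ksp_view above. Assumes PETSc 3.4,
     * where KSPSetOperators still takes a MatStructure flag, and an
     * already-assembled Mat A and Vecs b, x (hypothetical names). */
    PetscErrorCode solve_with_cg_gamg(Mat A, Vec b, Vec x)
    {
      KSP            ksp;
      PC             pc;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
      ierr = KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);CHKERRQ(ierr);

      /* -ksp_type cg; tolerances: relative=1e-08, absolute=1e-16,
         divergence=1e+16, maximum iterations=10000 */
      ierr = KSPSetType(ksp, KSPCG);CHKERRQ(ierr);
      ierr = KSPSetTolerances(ksp, 1.0e-8, 1.0e-16, 1.0e+16, 10000);CHKERRQ(ierr);

      /* -pc_type gamg; aggregation variant and smoothing are selected at
         run time by -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 */
      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);

      /* Pick up -ksp_monitor, -ksp_view and the -mg_levels_* smoother
         options (Richardson + SOR) from the command line. */
      ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);

      ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
      ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

Run with the same option table as above (mpirun -np 24 ... -ksp_monitor -ksp_view -log_summary ...) this configuration should report the same solver hierarchy shown in the KSP view.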