Starting Xolotl Plasma-Surface Interactions Simulator
Mon Dec  1 16:56:24 2014
PETScSolver Message: Master loaded network of size 5448.
0 TS dt 1e-12 time 0
1 TS dt 1e-11 time 1e-12
2 TS dt 1e-10 time 1.1e-11
3 TS dt 1e-09 time 1.11e-10
4 TS dt 9.04382e-09 time 1.111e-09
5 TS dt 2.26095e-08 time 3.37195e-09
6 TS dt 1.4131e-08 time 4.78505e-09
7 TS dt 3.53274e-08 time 8.31779e-09
8 TS dt 2.20796e-08 time 1.05258e-08
9 TS dt 5.51991e-08 time 1.60457e-08
10 TS dt 3.44994e-08 time 1.94956e-08
11 TS dt 2.15621e-08 time 2.16518e-08
12 TS dt 5.39054e-08 time 2.70424e-08
13 TS dt 3.36908e-08 time 3.04114e-08
14 TS dt 2.10568e-08 time 3.25171e-08
15 TS dt 5.26419e-08 time 3.77813e-08
16 TS dt 3.29012e-08 time 4.10714e-08
17 TS dt 2.05633e-08 time 4.31278e-08
18 TS dt 5.14082e-08 time 4.82686e-08
19 TS dt 3.21301e-08 time 5.14816e-08
20 TS dt 2.00813e-08 time 5.34897e-08
21 TS dt 5.02033e-08 time 5.851e-08
22 TS dt 3.1377e-08 time 6.16477e-08
23 TS dt 1.96107e-08 time 6.36088e-08
24 TS dt 4.90266e-08 time 6.85115e-08
25 TS dt 3.06416e-08 time 7.15756e-08
26 TS dt 1.9151e-08 time 7.34907e-08
27 TS dt 4.78776e-08 time 7.82785e-08
28 TS dt 2.99235e-08 time 8.12709e-08
29 TS dt 1.87022e-08 time 8.31411e-08
30 TS dt 4.67554e-08 time 8.78166e-08
31 TS dt 2.92222e-08 time 9.07388e-08
32 TS dt 7.30554e-08 time 9.80444e-08
33 TS dt 4.56596e-08 time 1.0261e-07
34 TS dt 2.85373e-08 time 1.05464e-07
35 TS dt 7.13431e-08 time 1.12598e-07
36 TS dt 4.45895e-08 time 1.17057e-07
37 TS dt 2.78684e-08 time 1.19844e-07
38 TS dt 6.9671e-08 time 1.26811e-07
39 TS dt 4.35444e-08 time 1.31166e-07
40 TS dt 2.72152e-08 time 1.33887e-07
41 TS dt 6.80381e-08 time 1.40691e-07
42 TS dt 4.25238e-08 time 1.44943e-07
43 TS dt 2.65774e-08 time 1.47601e-07
44 TS dt 6.64435e-08 time 1.54246e-07
45 TS dt 4.15272e-08 time 1.58398e-07
46 TS dt 2.59545e-08 time 1.60994e-07
47 TS dt 6.48862e-08 time 1.67482e-07
48 TS dt 4.05539e-08 time 1.71538e-07
49 TS dt 2.53462e-08 time 1.74072e-07
50 TS dt 6.33654e-08 time 1.80409e-07
51 TS dt 3.96034e-08 time 1.84369e-07
52 TS dt 2.47521e-08 time 1.86844e-07
53 TS dt 6.18803e-08 time 1.93032e-07
54 TS dt 3.86752e-08 time 1.969e-07
55 TS dt 2.4172e-08 time 1.99317e-07
56 TS dt 6.043e-08 time 2.0536e-07
57 TS dt 3.77687e-08 time 2.09137e-07
58 TS dt 2.36055e-08 time 2.11498e-07
59 TS dt 5.90137e-08 time 2.17399e-07
60 TS dt 3.68835e-08 time 2.21087e-07
61 TS dt 2.30522e-08 time 2.23393e-07
62 TS dt 5.76305e-08 time 2.29156e-07
63 TS dt 3.60191e-08 time 2.32757e-07
64 TS dt 2.25119e-08 time 2.35009e-07
65 TS dt 5.62798e-08 time 2.40637e-07
66 TS dt 3.51749e-08 time 2.44154e-07
67 TS dt 2.19843e-08 time 2.46353e-07
68 TS dt 5.49608e-08 time 2.51849e-07
69 TS dt 3.43505e-08 time 2.55284e-07
70 TS dt 2.1469e-08 time 2.57431e-07
71 TS dt 5.36726e-08 time 2.62798e-07
72 TS dt 3.35454e-08 time 2.66152e-07
73 TS dt 2.09659e-08 time 2.68249e-07
74 TS dt 5.24147e-08 time 2.7349e-07
75 TS dt 3.27592e-08 time 2.76766e-07
76 TS dt 2.04745e-08 time 2.78814e-07
77 TS dt 5.11862e-08 time 2.83932e-07
78 TS dt 3.19914e-08 time 2.87132e-07
79 TS dt 1.99946e-08 time 2.89131e-07
80 TS dt 4.99865e-08 time 2.9413e-07
81 TS dt 3.12416e-08 time 2.97254e-07
82 TS dt 7.81039e-08 time 3.05064e-07
83 TS dt 4.8815e-08 time 3.09946e-07
84 TS dt 3.05093e-08 time 3.12997e-07
85 TS dt 7.62734e-08 time 3.20624e-07
86 TS dt 4.76709e-08 time 3.25391e-07
87 TS dt 2.97943e-08 time 3.28371e-07
88 TS dt 7.44857e-08 time 3.35819e-07
89 TS dt 4.65536e-08 time 3.40474e-07
90 TS dt 2.9096e-08 time 3.43384e-07
91 TS dt 7.274e-08 time 3.50658e-07
92 TS dt 4.54625e-08 time 3.55204e-07
93 TS dt 2.8414e-08 time 3.58046e-07
94 TS dt 7.10351e-08 time 3.65149e-07
95 TS dt 4.43969e-08 time 3.69589e-07
96 TS dt 2.77481e-08 time 3.72364e-07
97 TS dt 6.93702e-08 time 3.79301e-07
98 TS dt 4.33564e-08 time 3.83636e-07
99 TS dt 2.70977e-08 time 3.86346e-07
100 TS dt 6.77444e-08 time 3.93121e-07
TS Object: 4 MPI processes
  type: arkimex
  maximum steps=100
  maximum time=1000
  total number of nonlinear solver iterations=1522
  total number of nonlinear solve failures=158
  total number of linear solver iterations=4871
  total number of rejected steps=158
    ARK IMEX 3
    Stiff abscissa       ct = 0.000000 0.871733 0.600000 1.000000
    Stiffly accurate: yes
    Explicit first stage: yes
    FSAL property: yes
    Nonstiff abscissa     c = 0.000000 0.871733 0.600000 1.000000
  TSAdapt Object:   4 MPI processes
    type: basic
    number of candidates  1
    Basic: clip fastest decrease 0.1, fastest increase 10
    Basic: safety factor 0.9, extra factor after step rejection 0.5
  SNES Object:   4 MPI processes
    type: newtonls
    maximum iterations=50, maximum function evaluations=10000
    tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
    total number of linear solver iterations=18
    total number of function evaluations=6
    SNESLineSearch Object:     4 MPI processes
      type: bt
        interpolation: cubic
        alpha=1.000000e-04
      maxstep=1.000000e+08, minlambda=1.000000e-12
      tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
      maximum iterations=40
    KSP Object:     4 MPI processes
      type: fgmres
        GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
        GMRES: happy breakdown tolerance 1e-30
      maximum iterations=10000, initial guess is zero
      tolerances: relative=1e-05, absolute=1e-50, divergence=10000
      right preconditioning
      using UNPRECONDITIONED norm type for convergence test
    PC Object:     4 MPI processes
      type: fieldsplit
        FieldSplit with MULTIPLICATIVE composition: total splits = 2, blocksize = 5448
        Solver info for each split is in the following KSP objects:
        Split number 0 Defined by IS
        KSP Object:        (fieldsplit_0_)         4 MPI processes
          type: preonly
          maximum iterations=10000, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using NONE norm type for convergence test
        PC Object:        (fieldsplit_0_)         4 MPI processes
          type: redundant
            Redundant preconditioner: First (color=0) of 4 PCs follows
          KSP Object:          (fieldsplit_0_redundant_)           1 MPI processes
            type: preonly
            maximum iterations=10000, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000
            left preconditioning
            using NONE norm type for convergence test
          PC Object:          (fieldsplit_0_redundant_)           1 MPI processes
            type: lu
              LU: out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              matrix ordering: nd
              factor fill ratio given 5, needed 4.25244
                Factored matrix follows:
                  Mat Object:                   1 MPI processes
                    type: seqaij
                    rows=612, cols=612
                    package used to perform factorization: petsc
                    total: nonzeros=25706, allocated nonzeros=25706
                    total number of mallocs used during MatSetValues calls =0
                      not using I-node routines
            linear system matrix = precond matrix:
            Mat Object:             1 MPI processes
              type: seqaij
              rows=612, cols=612
              total: nonzeros=6045, allocated nonzeros=6045
              total number of mallocs used during MatSetValues calls =0
                not using I-node routines
          linear system matrix = precond matrix:
          Mat Object:          (fieldsplit_0_)           4 MPI processes
            type: mpiaij
            rows=612, cols=612
            total: nonzeros=6045, allocated nonzeros=6045
            total number of mallocs used during MatSetValues calls =0
              not using I-node (on process 0) routines
        Split number 1 Defined by IS
        KSP Object:        (fieldsplit_1_)         4 MPI processes
          type: chebyshev
            Chebyshev: eigenvalue estimates:  min = 0.159164, max = 1.75081
            Chebyshev: estimated using:  [0 0.1; 0 1.1]
            KSP Object:            (fieldsplit_1_est_)             4 MPI processes
              type: gmres
                GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
                GMRES: happy breakdown tolerance 1e-30
              maximum iterations=10, initial guess is zero
              tolerances: relative=1e-05, absolute=1e-50, divergence=10000
              left preconditioning
              using NONE norm type for convergence test
            PC Object:            (fieldsplit_1_)             4 MPI processes
              type: jacobi
              linear system matrix = precond matrix:
              Mat Object:              (fieldsplit_1_)               4 MPI processes
                type: mpiaij
                rows=277848, cols=277848, bs=5448
                total: nonzeros=1.26648e+07, allocated nonzeros=1.26648e+07
                total number of mallocs used during MatSetValues calls =0
          maximum iterations=4, initial guess is zero
          tolerances: relative=1e-05, absolute=1e-50, divergence=10000
          left preconditioning
          using PRECONDITIONED norm type for convergence test
        PC Object:        (fieldsplit_1_)         4 MPI processes
          type: jacobi
          linear system matrix = precond matrix:
          Mat Object:          (fieldsplit_1_)           4 MPI processes
            type: mpiaij
            rows=277848, cols=277848, bs=5448
            total: nonzeros=1.26648e+07, allocated nonzeros=1.26648e+07
            total number of mallocs used during MatSetValues calls =0
      linear system matrix = precond matrix:
      Mat Object:      (fieldsplit_1_)       4 MPI processes
        type: mpiaij
        rows=277848, cols=277848, bs=5448
        total: nonzeros=1.26648e+07, allocated nonzeros=1.26648e+07
        total number of mallocs used during MatSetValues calls =0
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

fakeXolotlApplicationNameForPETSc on a arch-linux2-c-opt named lap98867.ornl.gov with 4 processors, by bqo Wed Dec  3 01:18:52 2014
Using Petsc Development GIT revision: v3.4.4-4229-ge52d2c6  GIT Date: 2014-05-19 23:25:26 -0500

                         Max       Max/Min        Avg      Total
Time (sec):           1.165e+05      1.00000   1.165e+05
Objects:              2.080e+02      1.00000   2.080e+02
Flops:                7.633e+13      1.08324   7.486e+13  2.995e+14
Flops/sec:            6.549e+08      1.08324   6.424e+08  2.569e+09
MPI Messages:         2.709e+07      1.70142   2.151e+07  8.604e+07
MPI Message Lengths:  8.370e+09      1.20224   3.582e+02  3.082e+10
MPI Reductions:       1.121e+07      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 1.1655e+05 100.0%  2.9946e+14 100.0%  8.604e+07 100.0%  3.582e+02      100.0%  1.121e+07 100.0%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)      Flops                              --- Global ---   --- Stage ---    Total
                     Max Ratio  Max      Ratio  Max      Ratio  Mess    Avg len  Reduct   %T %F %M %L %R   %T %F %M %L %R  Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecDot                  1522 1.0  1.5504e+00 3.3  2.16e+08 1.1  0.0e+00 0.0e+00 1.5e+03   0  0  0  0  0    0  0  0  0  0    546
VecMDot              1601671 1.0  5.1964e+03 1.4  3.48e+12 1.1  0.0e+00 0.0e+00 1.6e+06   4  5  0  0 14    4  5  0  0 14   2628
VecNorm              9583823 1.0  1.0836e+04 3.6  1.36e+12 1.1  0.0e+00 0.0e+00 9.6e+06   5  2  0  0 86    5  2  0  0 86    491
VecScale             3243534 1.0  8.2290e+02 1.1  2.30e+11 1.1  0.0e+00 0.0e+00 0.0e+00   1  0  0  0  0    1  0  0  0  0   1095
VecCopy              3177522 1.0  9.6126e+02 1.4  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   1  0  0  0  0    1  0  0  0  0      0
VecSet               6406301 1.0  5.6181e+02 1.7  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0      0
VecAXPY                57955 1.0  2.2039e+01 1.0  8.21e+09 1.1  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0   1461
VecAYPX              9511766 1.0  2.5752e+03 1.3  7.86e+11 1.1  0.0e+00 0.0e+00 0.0e+00   2  1  0  0  0    2  1  0  0  0   1197
VecAXPBYCZ           6341465 1.0  3.0333e+03 1.0  2.25e+12 1.1  0.0e+00 0.0e+00 0.0e+00   3  3  0  0  0    3  3  0  0  0   2904
VecWAXPY               54294 1.0  1.6508e+01 1.3  3.85e+09 1.1  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0    914
VecMAXPY             1658763 1.0  5.3532e+03 1.1  3.71e+12 1.1  0.0e+00 0.0e+00 0.0e+00   5  5  0  0  0    5  5  0  0  0   2718
VecPointwiseMult     9527706 1.0  3.2155e+03 1.1  6.75e+11 1.1  0.0e+00 0.0e+00 0.0e+00   3  1  0  0  0    3  1  0  0  0    823
VecScatterBegin     20678637 1.0  1.8710e+03 1.5  0.00e+00 0.0  8.6e+07 3.6e+02 0.0e+00   1  0 100 100 0    1  0 100 100 0      0
VecScatterEnd       20678637 1.0  2.1821e+02 1.5  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0      0
VecReduceArith          3044 1.0  6.7048e-01 1.1  4.31e+08 1.1  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0   2523
VecReduceComm           1522 1.0  2.8435e-01 4.0  0.00e+00 0.0  0.0e+00 0.0e+00 1.5e+03   0  0  0  0  0    0  0  0  0  0      0
VecNormalize           18480 1.0  1.2432e+01 1.3  3.93e+09 1.1  0.0e+00 0.0e+00 1.8e+04   0  0  0  0  0    0  0  0  0  0   1239
MatMult             11165191 1.0  8.9953e+04 1.1  6.37e+13 1.1  6.7e+07 9.6e+01 0.0e+00  75 84 78 21  0   75 84 78 21  0   2780
MatSolve             1584871 1.0  2.4519e+02 1.0  8.05e+10 1.0  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0   1313
MatLUFactorSym             1 1.0  6.4945e-04 1.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0      0
MatLUFactorNum          1680 1.0  1.5105e+00 1.0  1.01e+09 1.0  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0   2672
MatCopy                 1679 1.0  3.8104e-02 1.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0      0
MatConvert                 1 1.0  4.9591e-05 1.1  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0      0
MatScale                1680 1.0  2.0515e+01 1.1  5.42e+09 1.1  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0   1037
MatAssemblyBegin       10083 1.0  1.2891e+02 34.6 0.00e+00 0.0  0.0e+00 0.0e+00 1.3e+04   0  0  0  0  0    0  0  0  0  0      0
MatAssemblyEnd         10083 1.0  2.1889e+01 1.1  0.00e+00 0.0  3.6e+01 2.6e+01 2.4e+01   0  0  0  0  0    0  0  0  0  0      0
MatGetRowIJ                1 1.0  3.6001e-05 1.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0      0
MatGetSubMatrice        1680 1.0  8.0923e-01 3.1  0.00e+00 0.0  0.0e+00 0.0e+00 3.4e+03   0  0  0  0  0    0  0  0  0  0      0
MatGetSubMatrix         3360 1.0  4.9679e+02 1.0  0.00e+00 0.0  2.4e+01 2.6e+01 1.0e+04   0  0  0  0  0    0  0  0  0  0      0
MatGetOrdering             1 1.0  2.6441e-04 1.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0      0
MatZeroEntries          5038 1.0  2.1659e+01 1.1  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0      0
MatView                    6 1.5  1.1222e-02 1.9  0.00e+00 0.0  0.0e+00 0.0e+00 4.0e+00   0  0  0  0  0    0  0  0  0  0      0
MatGetRedundant         1680 1.0  8.5461e-01 2.8  0.00e+00 0.0  0.0e+00 0.0e+00 3.4e+03   0  0  0  0  0    0  0  0  0  0      0
MatMPIConcateSeq        1680 1.0  4.0188e-02 1.0  0.00e+00 0.0  0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0      0
TSStep                   100 1.0  1.1654e+05 1.0  7.63e+13 1.1  8.6e+07 3.6e+02 1.1e+07 100 100 100 100 100 100 100 100 100 100 2570
TSFunctionEval          2540 1.0  2.0650e+02 1.1  1.80e+08 1.1  1.5e+04 4.4e+04 0.0e+00   0  0  0  2  0    0  0  0  2  0      3
TSJacobianEval          1680 1.0  8.4778e+02 1.0  5.42e+09 1.1  1.0e+04 4.4e+04 6.7e+03   1  0  0  1  0    1  0  0  1  0     25
SNESSolve                459 1.0  1.1650e+05 1.0  7.63e+13 1.1  8.6e+07 3.6e+02 1.1e+07 100 100 100 100 100 100 100 100 100 100 2570
SNESFunctionEval        1981 1.0  1.6164e+02 1.1  5.61e+08 1.1  1.2e+04 4.4e+04 0.0e+00   0  0  0  2  0    0  0  0  2  0     14
SNESJacobianEval        1680 1.0  8.4779e+02 1.0  5.42e+09 1.1  1.0e+04 4.4e+04 6.7e+03   1  0  0  1  0    1  0  0  1  0     25
SNESLineSearch          1522 1.0  1.4250e+02 1.0  1.16e+10 1.1  1.8e+04 2.2e+04 6.1e+03   0  0  0  1  0    0  0  0  1  0    318
KSPGMRESOrthog       1601671 1.0  9.8384e+03 1.1  6.96e+12 1.1  0.0e+00 0.0e+00 1.6e+06   8  9  0  0 14    8  9  0  0 14   2777
KSPSetUp                6721 1.0  2.0089e-02 1.6  0.00e+00 0.0  0.0e+00 0.0e+00 2.0e+01   0  0  0  0  0    0  0  0  0  0      0
KSPSolve                1680 1.0  1.1547e+05 1.0  7.63e+13 1.1  8.6e+07 3.5e+02 1.1e+07  99 100 100 96 100  99 100 100 96 100 2593
PCSetUp                 5040 1.0  5.0914e+02 1.0  1.01e+09 1.0  8.0e+01 2.8e+04 1.4e+04   0  0  0  0  0    0  0  0  0  0      8
PCApply              1584871 1.0  9.0297e+04 1.0  5.83e+13 1.1  7.6e+07 3.8e+02 8.0e+06  77 76 89 93 71   77 76 89 93 71   2535
------------------------------------------------------------------------------------------------------------------------
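The Total Mflop/s column follows the formula stated in the phase summary info, taking the printed "10e-6" factor as 1.0e-6, which is what the reported values match. A minimal cross-check in Python using only numbers reported above: the TSStep row covers 100% of the run, so its rate should equal the global flop total divided by its maximum time.

    # Cross-check of the TSStep row's Mflop/s entry, using values from the summary above.
    total_flops = 2.995e14   # global "Flops ... Total" from the performance summary header
    max_time    = 1.1654e5   # TSStep "Time (sec) Max"
    print(1e-6 * total_flops / max_time)   # ~2570, matching the TSStep (and SNESSolve) rows

The same two numbers also reproduce the 2.569e+09 total Flops/sec reported in the summary header.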
Memory usage is given in bytes:

Object Type                  Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

                     Vector   131            131     66031328     0
             Vector Scatter     9              9         9124     0
                     Matrix    14             14     91100348     0
           Distributed Mesh     2              2       336256     0
Star Forest Bipartite Graph     4              4         3200     0
                  Index Set    27             27       336288     0
          IS L to G Mapping     2              2         1176     0
                    TSAdapt     1              1         1208     0
                         TS     1              1         1296     0
                       DMTS     1              1          712     0
                       SNES     1              1         1340     0
             SNESLineSearch     1              1          872     0
                     DMSNES     1              1          672     0
              Krylov Solver     5              5        52688     0
            DMKSP interface     1              1          656     0
             Preconditioner     5              5         4608     0
                     Viewer     2              1          744     0
========================================================================================================================
Average time to get PetscTime(): 4.76837e-08
Average time for MPI_Barrier(): 1.4782e-06
Average time for zero size MPI_Send(): 2.20537e-06
#PETSc Option Table entries:
-da_grid_x 51
-fieldsplit_0_pc_type redundant
-fieldsplit_1_ksp_max_it 4
-fieldsplit_1_ksp_type chebyshev
-fieldsplit_1_pc_type jacobi
-ksp_type fgmres
-log_summary
-pc_fieldsplit_detect_coupling
-pc_type fieldsplit
-ts_adapt_dt_max 10
-ts_dt 1.0e-12
-ts_final_time 1000
-ts_max_snes_failures 200
-ts_max_steps 100
-ts_monitor
-ts_view
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=/home/bqo/Code/petsc_mpich-3.1 --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif77 --with-debugging=no --download-fblaslapack=1 --FOPTFLAGS= --with-shared-libraries=1 --download-hypre=yes --with-debugging=0 --download-superlu_dist --download-parmetis --download-metis --with-c2html=0
-----------------------------------------
Libraries compiled on Wed Jul  9 16:55:45 2014 on lap98867.ornl.gov
Machine characteristics: Linux-3.15.3-200.fc20.x86_64-x86_64-with-fedora-20-Heisenbug
Using PETSc directory: /home/bqo/Code/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------
Using C compiler: mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif77 -fPIC -Wall -Wno-unused-variable -Wno-unused-dummy-argument ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/bqo/Code/petsc/arch-linux2-c-opt/include -I/home/bqo/Code/petsc/include -I/home/bqo/Code/petsc/include -I/home/bqo/Code/petsc/arch-linux2-c-opt/include -I/home/bqo/Code/mpich-3.1/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif77
Using libraries: -Wl,-rpath,/home/bqo/Code/petsc/arch-linux2-c-opt/lib -L/home/bqo/Code/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/home/bqo/Code/petsc/arch-linux2-c-opt/lib -L/home/bqo/Code/petsc/arch-linux2-c-opt/lib -lsuperlu_dist_3.3 -lHYPRE -Wl,-rpath,/home/bqo/Code/mpich-3.1/lib -L/home/bqo/Code/mpich-3.1/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -lmpichcxx -lstdc++ -lflapack -lfblas -lpthread -lparmetis -lmetis -lm -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpichcxx -lstdc++ -ldl -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl
-----------------------------------------
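For reference, the runtime options recorded in the option table above correspond to a launch of roughly the following form. The mpiexec launcher, the -n 4 process count (taken from the "4 MPI processes" lines in the solver view), and the ./xolotl executable name are illustrative assumptions only; Xolotl may forward these options to PETSc through its own driver rather than taking them directly on the command line.

    mpiexec -n 4 ./xolotl \
        -ts_dt 1.0e-12 -ts_final_time 1000 -ts_max_steps 100 \
        -ts_adapt_dt_max 10 -ts_max_snes_failures 200 \
        -ts_monitor -ts_view -log_summary \
        -da_grid_x 51 \
        -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_detect_coupling \
        -fieldsplit_0_pc_type redundant \
        -fieldsplit_1_ksp_type chebyshev -fieldsplit_1_ksp_max_it 4 \
        -fieldsplit_1_pc_type jacobi

The -fieldsplit_0_* and -fieldsplit_1_* prefixes select the Split 0 (redundant/LU) and Split 1 (Chebyshev/Jacobi) solvers shown in the -ts_view output above; the ARKIMEX integrator type does not appear in the option table, so it is presumably set in the application code rather than via an option.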