  0 KSP preconditioned resid norm 9.690097288403e+05 true resid norm 2.242813827253e+12 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 3.937850586811e+04 true resid norm 4.832385607974e+09 ||r(i)||/||b|| 2.154608442865e-03
  2 KSP preconditioned resid norm 3.901847579016e+04 true resid norm 6.863018106871e+09 ||r(i)||/||b|| 3.060003475758e-03
  3 KSP preconditioned resid norm 1.370683743646e+04 true resid norm 1.908345320015e+08 ||r(i)||/||b|| 8.508710338890e-05
  4 KSP preconditioned resid norm 1.366730141191e+04 true resid norm 6.412191918269e+08 ||r(i)||/||b|| 2.858994286710e-04
  5 KSP preconditioned resid norm 7.533179207860e+03 true resid norm 3.158336345405e+08 ||r(i)||/||b|| 1.408202636807e-04
  6 KSP preconditioned resid norm 7.488551932023e+03 true resid norm 5.463869258852e+08 ||r(i)||/||b|| 2.436167100657e-04
  7 KSP preconditioned resid norm 3.096926517391e+03 true resid norm 9.200110163577e+07 ||r(i)||/||b|| 4.102039166954e-05
  8 KSP preconditioned resid norm 3.055499647576e+03 true resid norm 2.712032138741e+08 ||r(i)||/||b|| 1.209209656988e-04
  9 KSP preconditioned resid norm 1.351918393275e+03 true resid norm 1.795239708776e+07 ||r(i)||/||b|| 8.004408065269e-06
 10 KSP preconditioned resid norm 1.310924073222e+03 true resid norm 8.700123946760e+07 ||r(i)||/||b|| 3.879111070675e-05
 11 KSP preconditioned resid norm 6.033437590235e+02 true resid norm 1.530109834558e+07 ||r(i)||/||b|| 6.822277515706e-06
 12 KSP preconditioned resid norm 5.607858057036e+02 true resid norm 5.127728713683e+07 ||r(i)||/||b|| 2.286292625529e-05
 13 KSP preconditioned resid norm 1.933666011523e+02 true resid norm 2.117456895730e+06 ||r(i)||/||b|| 9.441072950416e-07
 14 KSP preconditioned resid norm 1.789733578933e+02 true resid norm 1.199428516241e+07 ||r(i)||/||b|| 5.347873736402e-06
 15 KSP preconditioned resid norm 7.139015462709e+01 true resid norm 1.875402850559e+06 ||r(i)||/||b|| 8.361830249887e-07
 16 KSP preconditioned resid norm 6.662260723020e+01 true resid norm 3.901346015230e+06 ||r(i)||/||b|| 1.739487231541e-06
 17 KSP preconditioned resid norm 2.661876341752e+01 true resid norm 1.852778690631e+06 ||r(i)||/||b|| 8.260956251107e-07
 18 KSP preconditioned resid norm 2.448370162276e+01 true resid norm 1.252176931413e+06 ||r(i)||/||b|| 5.583062295217e-07
 19 KSP preconditioned resid norm 1.181933951907e+01 true resid norm 9.408468088993e+05 ||r(i)||/||b|| 4.194939399191e-07
 20 KSP preconditioned resid norm 1.020832096343e+01 true resid norm 3.412366304600e+05 ||r(i)||/||b|| 1.521466589485e-07
 21 KSP preconditioned resid norm 4.295168815076e+00 true resid norm 4.444072009317e+05 ||r(i)||/||b|| 1.981471647498e-07
Linear solve converged due to CONVERGED_RTOL iterations 21
KSP Object: 4 MPI processes
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 4 MPI processes
  type: asm
    total subdomain blocks = 4, amount of overlap = 3
    restriction/interpolation type - RESTRICT
    Local solve is same for all blocks, in the following KSP and PC objects:
  KSP Object: (sub_) 1 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    using NONE norm type for convergence test
  PC Object: (sub_) 1 MPI processes
    type: lu
      out-of-place factorization
      tolerance for zero pivot 2.22045e-14
      matrix ordering: amd
      factor fill ratio given 5., needed 1.02029
        Factored matrix follows:
          Mat Object: 1 MPI processes
            type: seqaij
            rows=102658, cols=102658
            package used to perform factorization: petsc
            total: nonzeros=619946, allocated nonzeros=619946
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 57602 nodes, limit used is 5
    linear system matrix = precond matrix:
    Mat Object: 1 MPI processes
      type: seqaij
      rows=102658, cols=102658
      total: nonzeros=607620, allocated nonzeros=607620
      total number of mallocs used during MatSetValues calls =0
        using I-node routines: found 57959 nodes, limit used is 5
  linear system matrix = precond matrix:
  Mat Object: 4 MPI processes
    type: mpiaij
    rows=320745, cols=320745
    total: nonzeros=1928617, allocated nonzeros=1928617
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 39919 nodes, limit used is 5
Time: 8.461e-01 seconds
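For reference, the solver configuration reported by -ksp_view above (GMRES with the default restart of 30, left-preconditioned by additive Schwarz with 4 subdomain blocks and overlap 3, each block solved exactly by LU with AMD ordering) can also be set up programmatically. The following is only a minimal sketch of such a driver: the binary MatLoad/VecLoad usage and the file names are assumptions taken from the -A/-b options, error checking is omitted, and the actual ./main program may be organized differently.

/* Hypothetical driver sketch: load A and b from PETSc binary files and solve
 * with GMRES + ASM(overlap 3) + per-block LU, mirroring the options in this log. */
#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat         A;
  Vec         b, x;
  KSP         ksp;
  PC          pc;
  PetscViewer viewer;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Load the system matrix (assumed PETSc binary format, as given by -A). */
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "DS3_urbansuburban.matrix", FILE_MODE_READ, &viewer);
  MatCreate(PETSC_COMM_WORLD, &A);
  MatLoad(A, viewer);
  PetscViewerDestroy(&viewer);

  /* Load the right-hand side (as given by -b). */
  PetscViewerBinaryOpen(PETSC_COMM_WORLD, "DS3_urbansuburban.vector", FILE_MODE_READ, &viewer);
  VecCreate(PETSC_COMM_WORLD, &b);
  VecLoad(b, viewer);
  PetscViewerDestroy(&viewer);
  VecDuplicate(b, &x);

  /* GMRES (restart 30 is the default) with additive Schwarz, overlap 3. */
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPSetType(ksp, KSPGMRES);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCASM);
  PCASMSetOverlap(pc, 3);
  /* The subdomain solver settings (-sub_pc_type lu,
     -sub_pc_factor_mat_ordering_type amd) are picked up from the
     options database by KSPSetFromOptions(). */
  KSPSetFromOptions(ksp);

  KSPSolve(ksp, b, x);

  KSPDestroy(&ksp);
  MatDestroy(&A);
  VecDestroy(&b);
  VecDestroy(&x);
  PetscFinalize();
  return 0;
}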
************************************************************************************************************************
***            WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/lustre/eaglefs/projects/dss/./main on a arch-intel-complex-opt named r7i7n29 with 4 processors, by jchang Wed Feb 6 13:45:42 2019
Using Petsc Development GIT revision: v3.10.3-1312-g058c394  GIT Date: 2019-01-23 16:37:18 -0600

                          Max       Max/Min     Avg       Total
Time (sec):            9.145e-01     1.000   9.145e-01
Objects:               1.110e+02     1.000   1.110e+02
Flop:                  8.983e+08     1.167   8.114e+08  3.246e+09
Flop/sec:              9.823e+08     1.167   8.873e+08  3.549e+09
MPI Messages:          2.640e+02     1.048   2.550e+02  1.020e+03
MPI Message Lengths:   7.177e+07     2.688   1.685e+05  1.719e+08
MPI Reductions:        1.200e+02     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop
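As a quick check of the 2N/8N figures in the convention above, count the real operations in one entry of VecAXPY, $y_i \leftarrow a x_i + y_i$. With real scalars this is one multiply and one add, i.e. 2 flops per entry. With complex scalars the product alone is

$(a_r + i a_i)(x_r + i x_i) = (a_r x_r - a_i x_i) + i (a_r x_i + a_i x_r)$,

i.e. 4 multiplies and 2 adds, and the remaining complex addition of $y_i$ costs 2 more adds, giving 8 flops per entry.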
Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 9.1448e-01 100.0%  3.2456e+09 100.0%  1.020e+03 100.0%  1.685e+05      100.0%  1.090e+02  90.8%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flop in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSidedF         2 1.0 1.4912e-02363.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatMult               43 1.0 1.6279e-01 1.0 1.68e+08 1.2 5.2e+02 5.5e+04 0.0e+00 17 19 51 16  0  17 19 51 16  0  3737
MatSolve              22 1.0 2.2442e-01 2.4 2.00e+08 2.0 0.0e+00 0.0e+00 0.0e+00 15 17  0  0  0  15 17  0  0  0  2390
MatLUFactorSym         1 1.0 2.7771e-02 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
MatLUFactorNum         1 1.0 4.4804e-02 1.9 1.52e+07 2.1 0.0e+00 0.0e+00 0.0e+00  3  1  0  0  0   3  1  0  0  0   901
MatAssemblyBegin       2 1.0 1.3394e-02216.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatAssemblyEnd         2 1.0 1.0253e-02 1.2 0.00e+00 0.0 2.4e+01 6.8e+03 8.0e+00  1  0  2  0  7   1  0  2  0  7     0
MatGetRowIJ            1 1.0 6.1040e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCreateSubMats       1 1.0 7.1835e-02 1.3 0.00e+00 0.0 6.0e+01 5.2e+05 1.0e+00  7  0  6 18  1   7  0  6 18  1     0
MatGetOrdering         1 1.0 3.0625e-02 2.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  2  0  0  0  0   2  0  0  0  0     0
MatIncreaseOvrlp       1 1.0 3.7189e-02 1.0 0.00e+00 0.0 9.6e+01 2.5e+04 3.0e+00  4  0  9  1  2   4  0  9  1  3     0
MatLoad                1 1.0 5.7708e-02 1.0 0.00e+00 0.0 3.3e+01 9.2e+05 1.6e+01  6  0  3 18 13   6  0  3 18 15     0
MatView                3 3.0 6.7306e-0411.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  1   0  0  0  0  1     0
VecMDot               21 1.0 1.8860e-01 5.9 1.48e+08 1.0 0.0e+00 0.0e+00 2.1e+01 15 18  0  0 18  15 18  0  0 19  3143
VecNorm               45 1.0 2.3751e-02 4.1 2.89e+07 1.0 0.0e+00 0.0e+00 4.5e+01  2  4  0  0 38   2  4  0  0 41  4862
VecScale              22 1.0 3.8509e-03 1.1 7.06e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  7330
VecCopy               23 1.0 5.2576e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecSet                94 1.0 3.8174e-02 1.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   3  0  0  0  0     0
VecAXPY               22 1.0 2.7189e-03 1.0 1.41e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0 20762
VecAYPX               22 1.0 4.3449e-03 1.2 7.06e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  6496
VecMAXPY              43 1.0 6.3260e-02 1.3 3.10e+08 1.0 0.0e+00 0.0e+00 0.0e+00  6 38  0  0  0   6 38  0  0  0 19592
VecAssemblyBegin       1 1.0 1.5800e-0388.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd         1 1.0 8.8215e-06 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecLoad                1 1.0 4.3108e-03 1.0 0.00e+00 0.0 3.0e+00 1.3e+06 7.0e+00  0  0  0  2  6   0  0  0  2  6     0
VecScatterBegin      131 1.0 4.5128e-02 1.9 0.00e+00 0.0 7.8e+02 1.3e+05 0.0e+00  3  0 76 60  0   3  0 76 60  0     0
VecScatterEnd         87 1.0 1.2307e-01 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 10  0  0  0  0  10  0  0  0  0     0
VecNormalize          22 1.0 1.7802e-02 3.2 2.12e+07 1.0 0.0e+00 0.0e+00 2.2e+01  1  3  0  0 18   1  3  0  0 20  4756
KSPSetUp               2 1.0 2.5942e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               1 1.0 8.4587e-01 1.0 8.98e+08 1.2 9.6e+02 1.4e+05 7.7e+01 92100 94 80 64  92100 94 80 71  3837
KSPGMRESOrthog        21 1.0 2.1711e-01 3.4 2.96e+08 1.0 0.0e+00 0.0e+00 2.1e+01 18 37  0  0 18  18 37  0  0 19  5460
PCSetUp                2 1.0 2.2616e-01 1.3 1.52e+07 2.1 1.8e+02 1.9e+05 1.1e+01 21  1 18 20  9  21  1 18 20 10   179
PCSetUpOnBlocks        1 1.0 1.0328e-01 2.2 1.52e+07 2.1 0.0e+00 0.0e+00 0.0e+00  7  1  0  0  0   7  1  0  0  0   391
PCApply               22 1.0 3.1748e-01 1.5 2.00e+08 2.0 2.6e+02 2.8e+05 0.0e+00 27 17 26 44  0  27 17 26 44  0  1690
PCApplyOnBlocks       22 1.0 2.3784e-01 2.3 2.00e+08 2.0 0.0e+00 0.0e+00 0.0e+00 16 17  0  0  0  16 17  0  0  0  2255
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Viewer     5              4         3360     0.
              Matrix     5              5     39223468     0.
              Vector    78             78     99126760     0.
           Index Set    15             15      3650816     0.
   IS L to G Mapping     1              1      1694184     0.
         Vec Scatter     3              3       644856     0.
       Krylov Solver     2              2        36864     0.
      Preconditioner     2              2         1984     0.
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 3.00407e-06
Average time for zero size MPI_Send(): 7.15256e-07
#PETSc Option Table entries:
-A DS3_urbansuburban.matrix
-b DS3_urbansuburban.vector
-ksp_converged_reason
-ksp_monitor_true_residual
-ksp_type gmres
-ksp_view
-log_view
-pc_asm_overlap 3
-pc_type asm
-sub_pc_factor_mat_ordering_type amd
-sub_pc_type lu
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 16 sizeof(PetscInt) 4
Configure options: --COPTFLAGS="-g -xCORE-AVX512 -O3" --CXXOPTFLAGS="-g -xCORE-AVX512 -O3" --FOPTFLAGS="-g -xCORE-AVX512 -O3" --download-hwloc=1 --download-metis --download-mumps --download-parmetis --download-scalapack --download-suitesparse --download-zlib --with-avx512-kernels=1 --with-blaslapack-dir=/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/mkl --with-cc=mpiicc --with-cxx=mpiicpc --with-debugging=0 --with-fc=mpiifort --with-mpiexec=srun --with-openmp=1 --with-scalar-type=complex --with-shared-libraries=1 PETSC_ARCH=arch-intel-complex-opt
-----------------------------------------
Libraries compiled on 2019-02-01 22:32:57 on el1
Machine characteristics: Linux-3.10.0-693.el7.x86_64-x86_64-with-centos-7.4.1708-Core
Using PETSc directory: /lustre/eaglefs/projects/dss/petsc-dev
Using PETSc arch: arch-intel-complex-opt
-----------------------------------------
Using C compiler: mpiicc -fPIC -wd1572 -g -xCORE-AVX512 -O3 -fopenmp
Using Fortran compiler: mpiifort -fPIC -g -xCORE-AVX512 -O3 -fopenmp
-----------------------------------------
Using include paths: -I/lustre/eaglefs/projects/dss/petsc-dev/include -I/lustre/eaglefs/projects/dss/petsc-dev/arch-intel-complex-opt/include
-----------------------------------------
Using C linker: mpiicc
Using Fortran linker: mpiifort
Using libraries: -Wl,-rpath,/lustre/eaglefs/projects/dss/petsc-dev/arch-intel-complex-opt/lib -L/lustre/eaglefs/projects/dss/petsc-dev/arch-intel-complex-opt/lib -lpetsc -Wl,-rpath,/lustre/eaglefs/projects/dss/petsc-dev/arch-intel-complex-opt/lib -L/lustre/eaglefs/projects/dss/petsc-dev/arch-intel-complex-opt/lib
-Wl,-rpath,/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -L/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -Wl,-rpath,/nopt/nrel/apps/base/2018-12-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-6hbmyhwcn27yjvb6og6iypamd6hb3tb4/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib/debug_mt -L/nopt/nrel/apps/base/2018-12-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-6hbmyhwcn27yjvb6og6iypamd6hb3tb4/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/nopt/nrel/apps/base/2018-12-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-6hbmyhwcn27yjvb6og6iypamd6hb3tb4/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib -L/nopt/nrel/apps/base/2018-12-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-6hbmyhwcn27yjvb6og6iypamd6hb3tb4/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib -Wl,-rpath,/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin -L/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin -Wl,-rpath,/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64_lin -L/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64_lin -Wl,-rpath,/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/lib -L/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/lib -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/ipp/lib/intel64 -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/ipp/lib/intel64 -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/tbb/lib/intel64/gcc4.7 -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/daal/lib/intel64_lin 
-L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/daal/lib/intel64_lin -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/tbb/lib/intel64_lin/gcc4.4 -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/lib -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/lib -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib64 -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib64 -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib/gcc/x86_64-pc-linux-gnu/7.3.0 -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib/gcc/x86_64-pc-linux-gnu/7.3.0 -Wl,-rpath,/opt/intel/mpi-rt/2017.0.0/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/mpi-rt/2017.0.0/intel64/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lparmetis -lmetis -lz -lX11 -lhwloc -lstdc++ -ldl -lmpifort -lmpi -lmpigi -lrt -lpthread -lifport -lifcoremt_pic -limf -lsvml -lm -lipgo -lirc -lgcc_s -lirc_s -lstdc++ -ldl -----------------------------------------
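For completeness, the option table and the 4-process run reported above correspond to a launch along the following lines; the exact scheduler invocation is an assumption (PETSc was configured with --with-mpiexec=srun), while the options themselves are taken verbatim from the option table:

  srun -n 4 ./main \
    -A DS3_urbansuburban.matrix -b DS3_urbansuburban.vector \
    -ksp_type gmres -ksp_monitor_true_residual -ksp_converged_reason -ksp_view \
    -pc_type asm -pc_asm_overlap 3 \
    -sub_pc_type lu -sub_pc_factor_mat_ordering_type amd \
    -log_view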