  0 KSP preconditioned resid norm 1.094526974038e+06 true resid norm 2.242813827253e+12 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 2.280803511178e+04 true resid norm 2.057127398752e+09 ||r(i)||/||b|| 9.172082737120e-04
  2 KSP preconditioned resid norm 2.269478294536e+04 true resid norm 3.021759719398e+09 ||r(i)||/||b|| 1.347307423684e-03
  3 KSP preconditioned resid norm 6.678958399423e+03 true resid norm 3.944293046520e+08 ||r(i)||/||b|| 1.758635959254e-04
  4 KSP preconditioned resid norm 6.640020512680e+03 true resid norm 3.665673810532e+08 ||r(i)||/||b|| 1.634408423022e-04
  5 KSP preconditioned resid norm 2.277107557258e+03 true resid norm 5.636082207038e+07 ||r(i)||/||b|| 2.512951426709e-05
  6 KSP preconditioned resid norm 2.259010532307e+03 true resid norm 1.040005719800e+08 ||r(i)||/||b|| 4.637057731510e-05
  7 KSP preconditioned resid norm 6.576311313106e+02 true resid norm 1.367217965442e+07 ||r(i)||/||b|| 6.095994009080e-06
  8 KSP preconditioned resid norm 6.532375364679e+02 true resid norm 2.002235240298e+07 ||r(i)||/||b|| 8.927335902644e-06
  9 KSP preconditioned resid norm 2.188103358114e+02 true resid norm 3.141022900739e+06 ||r(i)||/||b|| 1.400483117489e-06
 10 KSP preconditioned resid norm 2.171363769764e+02 true resid norm 5.354021080824e+06 ||r(i)||/||b|| 2.387189260101e-06
 11 KSP preconditioned resid norm 6.830433692324e+01 true resid norm 8.175252908776e+05 ||r(i)||/||b|| 3.645087616920e-07
 12 KSP preconditioned resid norm 6.784303564575e+01 true resid norm 1.595891974666e+06 ||r(i)||/||b|| 7.115579346239e-07
 13 KSP preconditioned resid norm 2.142415575685e+01 true resid norm 2.164849466549e+05 ||r(i)||/||b|| 9.652381487233e-08
 14 KSP preconditioned resid norm 2.127637753397e+01 true resid norm 3.873223680185e+05 ||r(i)||/||b|| 1.726948368661e-07
 15 KSP preconditioned resid norm 7.136194601279e+00 true resid norm 6.455037979850e+04 ||r(i)||/||b|| 2.878097995212e-08
Linear solve converged due to CONVERGED_RTOL iterations 15
KSP Object: 2 MPI processes
  type: gmres
    restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object: 2 MPI processes
  type: asm
    total subdomain blocks = 2, amount of overlap = 3
    restriction/interpolation type - RESTRICT
    Local solve is same for all blocks, in the following KSP and PC objects:
  KSP Object: (sub_) 1 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    using NONE norm type for convergence test
  PC Object: (sub_) 1 MPI processes
    type: lu
      out-of-place factorization
      tolerance for zero pivot 2.22045e-14
      matrix ordering: amd
      factor fill ratio given 5., needed 1.0211
        Factored matrix follows:
          Mat Object: 1 MPI processes
            type: seqaij
            rows=198174, cols=198174
            package used to perform factorization: petsc
            total: nonzeros=1202430, allocated nonzeros=1202430
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 111401 nodes, limit used is 5
    linear system matrix = precond matrix:
    Mat Object: 1 MPI processes
      type: seqaij
      rows=198174, cols=198174
      total: nonzeros=1177586, allocated nonzeros=1177586
      total number of mallocs used during MatSetValues calls =0
        using I-node routines: found 112020 nodes, limit used is 5
  linear system matrix = precond matrix:
  Mat Object: 2 MPI processes
    type: mpiaij
    rows=320745, cols=320745
    total: nonzeros=1928617, allocated nonzeros=1928617
    total number of mallocs used during MatSetValues calls =0
      using I-node (on process 0) routines: found 79618 nodes, limit used is 5
Time: 7.159e-01 seconds
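As a quick sanity check on the convergence history above, the monitor lines can be parsed and reduced to an average per-iteration contraction factor. The parser below is a hypothetical helper, not part of PETSc; the two sample lines are copied verbatim from the log.

```python
import re

# Matches lines produced by -ksp_monitor_true_residual.
MONITOR_RE = re.compile(
    r"^\s*(\d+) KSP preconditioned resid norm (\S+) "
    r"true resid norm (\S+) \|\|r\(i\)\|\|/\|\|b\|\| (\S+)"
)

def true_residuals(lines):
    """Return (iteration, ||r(i)||/||b||) pairs from monitor output."""
    out = []
    for line in lines:
        m = MONITOR_RE.match(line)
        if m:
            out.append((int(m.group(1)), float(m.group(4))))
    return out

def avg_reduction(history):
    """Geometric-mean contraction factor of the relative true residual per iteration."""
    (it0, r0), (itn, rn) = history[0], history[-1]
    return (rn / r0) ** (1.0 / (itn - it0))

sample = [
    "  0 KSP preconditioned resid norm 1.094526974038e+06 true resid norm 2.242813827253e+12 ||r(i)||/||b|| 1.000000000000e+00",
    " 15 KSP preconditioned resid norm 7.136194601279e+00 true resid norm 6.455037979850e+04 ||r(i)||/||b|| 2.878097995212e-08",
]
r = true_residuals(sample)
print(avg_reduction(r))  # ~0.31: the true residual shrinks by roughly 3x per iteration
```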
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/lustre/eaglefs/projects/dss/./main on a arch-intel-complex-opt named r7i7n29 with 2 processors, by jchang Wed Feb  6 13:45:36 2019
Using Petsc Development GIT revision: v3.10.3-1312-g058c394  GIT Date: 2019-01-23 16:37:18 -0600

                         Max       Max/Min     Avg       Total
Time (sec):           8.035e-01     1.000   8.035e-01
Objects:              9.900e+01     1.000   9.900e+01
Flop:                 9.763e+08     1.047   9.542e+08  1.908e+09
Flop/sec:             1.215e+09     1.047   1.188e+09  2.375e+09
MPI Messages:         7.000e+01     1.000   7.000e+01  1.400e+02
MPI Message Lengths:  3.802e+07     1.000   5.432e+05  7.605e+07
MPI Reductions:       1.020e+02     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flop
                          and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 8.0349e-01 100.0%  1.9084e+09 100.0%  1.400e+02 100.0%  5.432e+05      100.0%  9.100e+01  89.2%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase          %F - percent flop in this phase
      %M - percent messages in this phase      %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ----  Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   AvgLen  Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSidedF         2 1.0 1.4727e-02210.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatMult               31 1.0 1.5948e-01 1.0 2.26e+08 1.1 6.2e+01 1.5e+05 0.0e+00 20 23 44 12  0  20 23 44 12  0  2750
MatSolve              16 1.0 1.4488e-01 1.2 1.68e+08 1.2 0.0e+00 0.0e+00 0.0e+00 17 16  0  0  0  17 16  0  0  0  2136
MatLUFactorSym         1 1.0 2.5300e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   3  0  0  0  0     0
MatLUFactorNum         1 1.0 4.8883e-02 1.3 1.74e+07 1.2 0.0e+00 0.0e+00 0.0e+00  5  2  0  0  0   5  2  0  0  0   642
MatAssemblyBegin       2 1.0 1.4751e-02149.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatAssemblyEnd         2 1.0 1.0958e-02 1.1 0.00e+00 0.0 4.0e+00 1.8e+04 8.0e+00  1  0  3  0  8   1  0  3  0  9     0
MatGetRowIJ            1 1.0 6.3431e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatCreateSubMats       1 1.0 5.8061e-02 1.3 0.00e+00 0.0 1.0e+01 1.5e+06 1.0e+00  6  0  7 20  1   6  0  7 20  1     0
MatGetOrdering         1 1.0 2.9878e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   3  0  0  0  0     0
MatIncreaseOvrlp       1 1.0 5.1569e-02 1.1 0.00e+00 0.0 1.6e+01 7.1e+04 3.0e+00  6  0 11  1  3   6  0 11  1  3     0
MatLoad                1 1.0 7.5218e-02 1.0 0.00e+00 0.0 7.0e+00 2.9e+06 1.6e+01  9  0  5 27 16   9  0  5 27 18     0
MatView                3 3.0 3.1781e-04 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0  0  0  0  1   0  0  0  0  1     0
VecMDot               15 1.0 4.6017e-02 1.9 1.54e+08 1.0 0.0e+00 0.0e+00 1.5e+01  4 16  0  0 15   4 16  0  0 16  6691
VecNorm               33 1.0 6.4816e-03 1.2 4.23e+07 1.0 0.0e+00 0.0e+00 3.3e+01  1  4  0  0 32   1  4  0  0 36 13064
VecScale              16 1.0 4.5626e-03 1.0 1.03e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0  4499
VecCopy               17 1.0 5.8126e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecSet                70 1.0 2.7972e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   3  0  0  0  0     0
VecAXPY               16 1.0 3.6075e-03 1.0 2.05e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0 11381
VecAYPX               16 1.0 4.9274e-03 1.0 1.03e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0  4166
VecMAXPY              31 1.0 4.8957e-02 1.0 3.27e+08 1.0 0.0e+00 0.0e+00 0.0e+00  6 34  0  0  0   6 34  0  0  0 13365
VecAssemblyBegin       1 1.0 4.1962e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd         1 1.0 3.0994e-06 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecLoad                1 1.0 5.3160e-03 1.0 0.00e+00 0.0 1.0e+00 2.6e+06 7.0e+00  1  0  1  3  7   1  0  1  3  8     0
VecScatterBegin       95 1.0 3.0675e-02 1.0 0.00e+00 0.0 9.4e+01 3.8e+05 0.0e+00  4  0 67 47  0   4  0 67 47  0     0
VecScatterEnd         63 1.0 3.6302e-02 3.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   3  0  0  0  0     0
VecNormalize          16 1.0 9.0158e-03 1.2 3.08e+07 1.0 0.0e+00 0.0e+00 1.6e+01  1  3  0  0 16   1  3  0  0 18  6831
KSPSetUp               2 1.0 4.7081e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
KSPSolve               1 1.0 7.1565e-01 1.0 9.76e+08 1.0 1.2e+02 4.3e+05 5.9e+01 89100 89 70 58  89100 89 70 65  2667
KSPGMRESOrthog        15 1.0 6.9883e-02 1.5 3.08e+08 1.0 0.0e+00 0.0e+00 1.5e+01  7 32  0  0 15   7 32  0  0 16  8812
PCSetUp                2 1.0 2.2181e-01 1.1 1.74e+07 1.2 3.0e+01 5.6e+05 1.1e+01 26  2 21 22 11  26  2 21 22 12   141
PCSetUpOnBlocks        1 1.0 1.0413e-01 1.3 1.74e+07 1.2 0.0e+00 0.0e+00 0.0e+00 12  2  0  0  0  12  2  0  0  0   301
PCApply               16 1.0 2.0094e-01 1.0 1.68e+08 1.2 3.2e+01 8.4e+05 0.0e+00 25 16 23 36  0  25 16 23 36  0  1540
PCApplyOnBlocks       16 1.0 1.5232e-01 1.2 1.68e+08 1.2 0.0e+00 0.0e+00 0.0e+00 18 16  0  0  0  18 16  0  0  0  2032
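The flop accounting in the table above can be checked by hand. Per the stated convention, a complex VecAXPY of length N costs 8N real flops, and each event's Mflop/s column is 1e-6 times the summed flops over all ranks divided by the maximum time. The sketch below (a hypothetical check, not PETSc code) plugs in the MatMult row, estimating the flop sum from the Max value and the Max/Min ratio column.

```python
# Sketch of the flop accounting used in the summary above; values are
# copied from the MatMult row (Max flop 2.26e+08, ratio 1.1, time 1.5948e-01).

def complex_axpy_flops(n):
    # For complex vectors, y += a*x costs 8 real operations per entry
    # (a complex multiply is 4 mults + 2 adds, plus 2 adds for the update).
    return 8 * n

def event_mflops(max_flop, max_min_ratio, max_time):
    # Per-event rate: 1e-6 * (sum of flop over processors) / (max time).
    # With two ranks, the sum is approximately max + max/ratio.
    total = max_flop + max_flop / max_min_ratio
    return 1e-6 * total / max_time

print(complex_axpy_flops(320745))            # 2565960 real flops for one global VecAXPY
print(event_mflops(2.26e8, 1.1, 1.5948e-1))  # ~2705, consistent with the reported 2750
```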
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Viewer     5              4         3360     0.
              Matrix     5              5     76760148     0.
              Vector    66             66    166878088     0.
           Index Set    15             15      7097924     0.
   IS L to G Mapping     1              1      2076340     0.
         Vec Scatter     3              3      1286344     0.
       Krylov Solver     2              2        36864     0.
      Preconditioner     2              2         1984     0.
========================================================================================================================
Average time to get PetscTime(): 0.
Average time for MPI_Barrier(): 7.62939e-07
Average time for zero size MPI_Send(): 9.53674e-07
#PETSc Option Table entries:
-A DS3_urbansuburban.matrix
-b DS3_urbansuburban.vector
-ksp_converged_reason
-ksp_monitor_true_residual
-ksp_type gmres
-ksp_view
-log_view
-pc_asm_overlap 3
-pc_type asm
-sub_pc_factor_mat_ordering_type amd
-sub_pc_type lu
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2  sizeof(int) 4  sizeof(long) 8  sizeof(void*) 8  sizeof(PetscScalar) 16  sizeof(PetscInt) 4
Configure options: --COPTFLAGS="-g -xCORE-AVX512 -O3" --CXXOPTFLAGS="-g -xCORE-AVX512 -O3" --FOPTFLAGS="-g -xCORE-AVX512 -O3" --download-hwloc=1 --download-metis --download-mumps --download-parmetis --download-scalapack --download-suitesparse --download-zlib --with-avx512-kernels=1 --with-blaslapack-dir=/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/mkl --with-cc=mpiicc --with-cxx=mpiicpc --with-debugging=0 --with-fc=mpiifort --with-mpiexec=srun --with-openmp=1 --with-scalar-type=complex --with-shared-libraries=1 PETSC_ARCH=arch-intel-complex-opt
-----------------------------------------
Libraries compiled on 2019-02-01 22:32:57 on el1
Machine
characteristics: Linux-3.10.0-693.el7.x86_64-x86_64-with-centos-7.4.1708-Core
Using PETSc directory: /lustre/eaglefs/projects/dss/petsc-dev
Using PETSc arch: arch-intel-complex-opt
-----------------------------------------
Using C compiler: mpiicc -fPIC -wd1572 -g -xCORE-AVX512 -O3 -fopenmp
Using Fortran compiler: mpiifort -fPIC -g -xCORE-AVX512 -O3 -fopenmp
-----------------------------------------
Using include paths: -I/lustre/eaglefs/projects/dss/petsc-dev/include -I/lustre/eaglefs/projects/dss/petsc-dev/arch-intel-complex-opt/include
-----------------------------------------
Using C linker: mpiicc
Using Fortran linker: mpiifort
Using libraries: -Wl,-rpath,/lustre/eaglefs/projects/dss/petsc-dev/arch-intel-complex-opt/lib -L/lustre/eaglefs/projects/dss/petsc-dev/arch-intel-complex-opt/lib -lpetsc -Wl,-rpath,/lustre/eaglefs/projects/dss/petsc-dev/arch-intel-complex-opt/lib -L/lustre/eaglefs/projects/dss/petsc-dev/arch-intel-complex-opt/lib -Wl,-rpath,/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -L/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64 -Wl,-rpath,/nopt/nrel/apps/base/2018-12-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-6hbmyhwcn27yjvb6og6iypamd6hb3tb4/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib/debug_mt -L/nopt/nrel/apps/base/2018-12-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-6hbmyhwcn27yjvb6og6iypamd6hb3tb4/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib/debug_mt -Wl,-rpath,/nopt/nrel/apps/base/2018-12-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-6hbmyhwcn27yjvb6og6iypamd6hb3tb4/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib
-L/nopt/nrel/apps/base/2018-12-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mpi-2018.3.222-6hbmyhwcn27yjvb6og6iypamd6hb3tb4/compilers_and_libraries_2018.3.222/linux/mpi/intel64/lib -Wl,-rpath,/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin -L/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin -Wl,-rpath,/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64_lin -L/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/compilers_and_libraries_2018.3.222/linux/mkl/lib/intel64_lin -Wl,-rpath,/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/lib -L/nopt/nrel/apps/base/2019-01-02/spack/opt/spack/linux-centos7-x86_64/intel-18.0.3/intel-mkl-2018.3.222-dzfj7xvn6uy7tqmmgzwfcjkucomyxkui/lib -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/ipp/lib/intel64 -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/ipp/lib/intel64 -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/tbb/lib/intel64/gcc4.7 
-L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/tbb/lib/intel64/gcc4.7 -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/daal/lib/intel64_lin -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/daal/lib/intel64_lin -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/tbb/lib/intel64_lin/gcc4.4 -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/tbb/lib/intel64_lin/gcc4.4 -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/lib -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/lib -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib64 -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib64 -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib 
-Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/intel-parallel-studio-cluster.2018.3-6wq2vvslzhamadvc66fecse5bgcdhjzt/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin -Wl,-rpath,/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib/gcc/x86_64-pc-linux-gnu/7.3.0 -L/nopt/nrel/apps/compilers/spack/opt/spack/linux-centos7-x86_64/gcc-4.8.5/gcc-7.3.0-vydnujncq3lpwhhnxmauinsqxkhxy4gn/lib/gcc/x86_64-pc-linux-gnu/7.3.0 -Wl,-rpath,/opt/intel/mpi-rt/2017.0.0/intel64/lib/debug_mt -Wl,-rpath,/opt/intel/mpi-rt/2017.0.0/intel64/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lumfpack -lklu -lcholmod -lbtf -lccolamd -lcolamd -lcamd -lamd -lsuitesparseconfig -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lparmetis -lmetis -lz -lX11 -lhwloc -lstdc++ -ldl -lmpifort -lmpi -lmpigi -lrt -lpthread -lifport -lifcoremt_pic -limf -lsvml -lm -lipgo -lirc -lgcc_s -lirc_s -lstdc++ -ldl -----------------------------------------
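The option table recorded in the log above implies an invocation along these lines. The launcher (srun, suggested by --with-mpiexec=srun) and the working directory are assumptions; the options themselves are verbatim from the log.

```shell
# Presumed launch command reconstructed from the PETSc option table;
# the job launcher and paths are assumptions, not taken from the log.
srun -n 2 ./main \
  -A DS3_urbansuburban.matrix \
  -b DS3_urbansuburban.vector \
  -ksp_type gmres \
  -pc_type asm \
  -pc_asm_overlap 3 \
  -sub_pc_type lu \
  -sub_pc_factor_mat_ordering_type amd \
  -ksp_monitor_true_residual \
  -ksp_converged_reason \
  -ksp_view \
  -log_view
```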