Solving a linear TS problem on 32 processors
mx : 1024, my: 1024, energy(in eV) : 1.500000e+04
0 TS dt 3.0808e-06 time 0.
1 TS dt 3.0808e-06 time 3.0808e-06
2 TS dt 3.0808e-06 time 6.1616e-06
3 TS dt 3.0808e-06 time 9.2424e-06
4 TS dt 3.0808e-06 time 1.23232e-05
5 TS dt 3.0808e-06 time 1.5404e-05
6 TS dt 3.0808e-06 time 1.84848e-05
7 TS dt 3.0808e-06 time 2.15656e-05
8 TS dt 3.0808e-06 time 2.46464e-05
9 TS dt 3.0808e-06 time 2.77272e-05
10 TS dt 3.0808e-06 time 3.0808e-05
TS Object: 32 MPI processes
  type: cn
  maximum steps=10
  maximum time=3.0808e-05
  total number of linear solver iterations=192
  total number of linear solve failures=0
  total number of rejected steps=0
  using relative error tolerance of 0.0001, using absolute error tolerance of 0.0001
  TSAdapt Object: 32 MPI processes
    type: none
  SNES Object: 32 MPI processes
    type: ksponly
    maximum iterations=50, maximum function evaluations=10000
    tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
    total number of linear solver iterations=20
    total number of function evaluations=1
    norm schedule ALWAYS
  KSP Object: 32 MPI processes
    type: fgmres
      restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    right preconditioning
    using UNPRECONDITIONED norm type for convergence test
  PC Object: 32 MPI processes
    type: gamg
      type is MULTIPLICATIVE, levels=1 cycles=v
        Cycles per PCApply=1
        Using externally compute Galerkin coarse grid matrices
        GAMG specific options
          Threshold for dropping small values in graph on each level =
          Threshold scaling factor for each level not specified = 1.
          Using parallel coarse grid solver (all coarse grid equations not put on one process)
          AGG specific options
            Symmetric graph false
            Number of levels to square graph 10
            Number smoothing steps 1
          Complexity:    grid = 1.
    Coarse grid solver -- level -------------------------------
      KSP Object: (mg_levels_0_) 32 MPI processes
        type: gmres
          restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
          happy breakdown tolerance 1e-30
        maximum iterations=2, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_levels_0_) 32 MPI processes
        type: jacobi
        linear system matrix followed by preconditioner matrix:
        Mat Object: 32 MPI processes
          type: mpiaij
          rows=1048576, cols=1048576
          total: nonzeros=5238784, allocated nonzeros=5238784
          total number of mallocs used during MatSetValues calls=0
            not using I-node (on process 0) routines
        Mat Object: 32 MPI processes
          type: mpiaij
          rows=1048576, cols=1048576
          total: nonzeros=5238784, allocated nonzeros=5238784
          total number of mallocs used during MatSetValues calls=0
            not using I-node (on process 0) routines
  linear system matrix followed by preconditioner matrix:
  Mat Object: 32 MPI processes
    type: mpiaij
    rows=1048576, cols=1048576
    total: nonzeros=5238784, allocated nonzeros=5238784
    total number of mallocs used during MatSetValues calls=0
      not using I-node (on process 0) routines
  Mat Object: 32 MPI processes
    type: mpiaij
    rows=1048576, cols=1048576
    total: nonzeros=5238784, allocated nonzeros=5238784
    total number of mallocs used during MatSetValues calls=0
      not using I-node (on process 0) routines

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./ex_dmda on a named xrmlite with 32 processors, by sajid Mon May 4 09:10:28 2020
Using Petsc Release Version 3.13.0, Mar 29, 2020

                         Max       Max/Min     Avg       Total
Time (sec):           8.001e+00     1.000   8.001e+00
Objects:              3.410e+02     1.000   3.410e+02
Flop:                 2.638e+09     1.001   2.637e+09  8.440e+10
Flop/sec:             3.297e+08     1.001   3.296e+08  1.055e+10
MPI Messages:         2.588e+03     2.000   2.103e+03  6.729e+04
MPI Message Lengths:  7.490e+06     2.000   2.968e+03  1.997e+08
MPI Reductions:       1.712e+03     1.000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flop
                            and VecAXPY() for complex vectors of length N --> 8N flop

Summary of Stages:   ----- Time ------  ----- Flop ------  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total    Count   %Total     Avg         %Total    Count   %Total
 0:      Main Stage: 8.0009e+00 100.0%  8.4399e+10 100.0%  6.729e+04 100.0%  2.968e+03      100.0%  1.705e+03  99.6%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flop: Max - maximum over all processors
                  Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   AvgLen: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase          %F - percent flop in this phase
      %M - percent messages in this phase      %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count     Time (sec)      Flop                              --- Global ---   --- Stage ----   Total
                      Max Ratio  Max      Ratio   Max      Ratio  Mess    AvgLen  Reduct   %T %F %M %L %R   %T %F %M %L %R  Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

BuildTwoSided         57 1.0 4.5350e-02  7.2 0.00e+00 0.0 2.6e+03 8.0e+00 5.7e+01   0  0  4  0  3    0  0  4  0  3       0
BuildTwoSidedF        29 1.0 4.2180e-02 12.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.9e+01   0  0  0  0  2    0  0  0  0  2       0
DMCreateMat            1 1.0 5.1535e-02  1.0 0.00e+00 0.0 2.1e+02 7.9e+02 8.0e+00   1  0  0  0  0    1  0  0  0  0       0
SFSetGraph            28 1.0 4.9424e-04  1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0       0
SFSetUp               28 1.0 9.0821e-03  1.2 0.00e+00 0.0 5.2e+03 7.9e+02 2.8e+01   0  0  8  2  2    0  0  8  2  2       0
SFBcastOpBegin       605 1.0 2.1203e-02  1.8 0.00e+00 0.0 6.2e+04 3.2e+03 0.0e+00   0  0 92 98  0    0  0 92 98  0       0
SFBcastOpEnd         605 1.0 4.5953e-02  7.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0       0
SFPack               605 1.0 1.1555e-02  2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0       0
SFUnpack             605 1.0 6.3133e-04  2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0       0
VecView                1 1.0 2.5505e-01  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   3  0  0  0  0    3  0  0  0  0       0
VecMDot              576 1.0 1.2903e+00  1.1 6.60e+08 1.0 0.0e+00 0.0e+00 5.8e+02  16 25  0  0 34   16 25  0  0 34   16357
VecNorm              778 1.0 5.7720e-01  1.1 2.04e+08 1.0 0.0e+00 0.0e+00 7.8e+02   7  8  0  0 45    7  8  0  0 46   11307
VecScale             788 1.0 5.8132e-02  1.1 1.03e+08 1.0 0.0e+00 0.0e+00 0.0e+00   1  4  0  0  0    1  4  0  0  0   56855
VecCopy              232 1.0 8.0639e-02  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   1  0  0  0  0    1  0  0  0  0       0
VecSet               636 1.0 8.6815e-02  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   1  0  0  0  0    1  0  0  0  0       0
VecAXPY              222 1.0 6.1796e-02  1.1 5.82e+07 1.0 0.0e+00 0.0e+00 0.0e+00   1  2  0  0  0    1  2  0  0  0   30136
VecAYPX               20 1.0 6.6881e-03  1.4 2.62e+06 1.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0   12543
VecAXPBYCZ            10 1.0 8.7171e-03  1.1 3.93e+06 1.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0   14435
VecMAXPY             778 1.0 1.4468e+00  1.0 8.11e+08 1.0 0.0e+00 0.0e+00 0.0e+00  18 31  0  0  0   18 31  0  0  0   17928
VecAssemblyBegin       5 1.0 7.0810e-04  3.7 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00   0  0  0  0  0    0  0  0  0  0       0
VecAssemblyEnd         5 1.0 1.4067e-05  1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0       0
VecPointwiseMult     576 1.0 3.7060e-01  1.1 7.55e+07 1.0 0.0e+00 0.0e+00 0.0e+00   4  3  0  0  0    4  3  0  0  0    6519
VecLoad                1 1.0 2.6774e-03  1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0       0
VecScatterBegin      602 1.0 2.3480e-02  1.8 0.00e+00 0.0 6.2e+04 3.2e+03 0.0e+00   0  0 92 98  0    0  0 92 98  0       0
VecScatterEnd        602 1.0 4.7679e-02  6.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0       0
VecNormalize         576 1.0 4.5001e-01  1.1 2.26e+08 1.0 0.0e+00 0.0e+00 5.8e+02   5  9  0  0 34    5  9  0  0 34   16106
MatMult              596 1.0 2.0180e+00  1.0 7.03e+08 1.0 6.2e+04 3.2e+03 0.0e+00  25 27 92 98  0   25 27 92 98  0   11139
MatConvert             3 1.0 2.0094e-02  1.1 0.00e+00 0.0 6.2e+02 7.9e+02 3.0e+00   0  0  1  0  0    0  0  1  0  0       0
MatScale               1 1.0 5.2426e-03  1.0 1.31e+06 1.0 1.0e+02 3.2e+03 0.0e+00   0  0  0  0  0    0  0  0  0  0    7994
MatAssemblyBegin      52 1.0 4.2339e-02 12.5 0.00e+00 0.0 0.0e+00 0.0e+00 2.4e+01   0  0  0  0  1    0  0  0  0  1       0
MatAssemblyEnd        52 1.0 1.6437e-01  1.0 0.00e+00 0.0 4.4e+03 7.9e+02 1.2e+02   2  0  6  2  7    2  0  6  2  7       0
MatCoarsen             1 1.0 3.5894e-03  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00   0  0  0  0  0    0  0  0  0  0       0
MatZeroEntries        21 1.0 1.5688e-02  1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0       0
MatView                4 1.0 3.9768e-04  1.4 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00   0  0  0  0  0    0  0  0  0  0       0
MatAXPY               20 1.0 1.4123e+00  1.0 1.31e+07 1.0 4.2e+03 7.9e+02 1.6e+02  17  0  6  2  9   17  0  6  2  9     297
MatTrnMatMultSym       1 1.0 2.7097e-02  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 7.0e+00   0  0  0  0  0    0  0  0  0  0       0
MatTrnMatMultNum       1 1.0 3.9880e-02  1.0 3.93e+05 1.0 0.0e+00 0.0e+00 6.0e+00   0  0  0  0  0    0  0  0  0  0     316
MatGetLocalMat         1 1.0 4.3018e-03  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00   0  0  0  0  0    0  0  0  0  0       0
TSStep                10 1.0 7.6184e+00  1.0 2.64e+09 1.0 6.6e+04 3.0e+03 1.7e+03  95 100 99 100 97  95 100 99 100 97  11078
TSFunctionEval        20 1.0 8.1433e-02  1.1 2.62e+07 1.0 2.1e+03 3.2e+03 0.0e+00   1  1  3  3  0    1  1  3  3  0   10293
TSJacobianEval        30 1.0 1.5513e+00  1.0 1.57e+07 1.0 4.2e+03 7.9e+02 1.6e+02  19  1  6  2  9   19  1  6  2  9     324
SNESSolve             10 1.0 7.5688e+00  1.0 2.62e+09 1.0 6.5e+04 3.0e+03 1.7e+03  95 99 97 98 97   95 99 97 98 97   11090
SNESSetUp              1 1.0 5.1498e-04  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00   0  0  0  0  0    0  0  0  0  0       0
SNESFunctionEval      10 1.0 4.8973e-02  1.1 1.70e+07 1.0 1.0e+03 3.2e+03 0.0e+00   1  1  2  2  0    1  1  2  2  0   11127
SNESJacobianEval      10 1.0 1.5513e+00  1.0 1.57e+07 1.0 4.2e+03 7.9e+02 1.6e+02  19  1  6  2  9   19  1  6  2  9     324
KSPSetUp              20 1.0 7.1268e-03  1.9 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+01   0  0  0  0  1    0  0  0  0  1       0
KSPSolve              10 1.0 5.9780e+00  1.0 2.59e+09 1.0 6.0e+04 3.1e+03 1.5e+03  75 98 89 95 87   75 98 89 95 88   13838
KSPGMRESOrthog       576 1.0 2.4704e+00  1.0 1.32e+09 1.0 0.0e+00 0.0e+00 5.8e+02  30 50  0  0 34   30 50  0  0 34   17087
PCGAMGGraph_AGG        1 1.0 5.3833e-02  1.0 1.31e+06 1.0 3.1e+02 1.6e+03 1.0e+01   1  0  0  0  1    1  0  0  0  1     779
PCGAMGCoarse_AGG       1 1.0 9.1122e-02  1.0 3.93e+05 1.0 0.0e+00 0.0e+00 2.4e+01   1  0  0  0  1    1  0  0  0  1     138
PCGAMGProl_AGG         1 1.0 1.9858e-03  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00   0  0  0  0  0    0  0  0  0  0       0
GAMG: createProl       1 1.0 1.5640e-01  1.0 1.70e+06 1.0 3.1e+02 1.6e+03 3.8e+01   2  0  0  0  2    2  0  0  0  2     348
  Graph                2 1.0 5.3743e-02  1.0 1.31e+06 1.0 3.1e+02 1.6e+03 1.0e+01   1  0  0  0  1    1  0  0  0  1     780
  MIS/Agg              1 1.0 3.6304e-03  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00   0  0  0  0  0    0  0  0  0  0       0
PCSetUp               10 1.0 1.8596e-01  1.0 1.70e+06 1.0 3.1e+02 1.6e+03 4.1e+01   2  0  0  0  2    2  0  0  0  2     293
PCApply              192 1.0 3.0867e+00  1.0 1.21e+09 1.0 4.0e+04 3.2e+03 9.6e+02  38 46 59 63 56   38 46 59 63 56   12519
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory   Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

    Distributed Mesh          3              2        10176     0.
           Index Set         49             49       376376     0.
   IS L to G Mapping          1              0            0     0.
   Star Forest Graph         34             31        37304     0.
     Discrete System          3              2         1984     0.
         Vec Scatter         27             26        22464     0.
              Vector        120            120     34965504     0.
              Viewer          4              3         2592     0.
              Matrix         86             86    172621664     0.
      Matrix Coarsen          1              1          684     0.
             TSAdapt          1              1         1448     0.
                  TS          1              1         2472     0.
                DMTS          1              1          808     0.
                SNES          1              1         1532     0.
              DMSNES          3              3         2160     0.
       Krylov Solver          2              2        71296     0.
     DMKSP interface          1              1          704     0.
      Preconditioner          2              2         2664     0.
         PetscRandom          1              1          710     0.
========================================================================================================================
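For readers unfamiliar with the table above, the Total Mflop/s column follows the formula in the legend: 1e-6 times the flop summed over all processes, divided by the maximum time over all processes. The per-event flop sum is not printed, so the small check below approximates it as 32 times the per-rank maximum, which is reasonable here because the Max/Min flop ratio for MatMult is 1.0; the numbers are copied from the MatMult row, and this is only an illustrative sketch of how to read the table, not part of the run itself.

/* Sanity check of the Mflop/s column using the MatMult row above.
 * The per-event flop sum over all ranks is not reported, so it is
 * approximated as nranks * Max (an assumption, justified by Max/Min = 1.0). */
#include <stdio.h>

int main(void)
{
  const int    nranks   = 32;
  const double flop_max = 7.03e8;   /* MatMult: Max flop on any rank        */
  const double time_max = 2.0180;   /* MatMult: Max time (sec) over ranks   */

  double mflops = 1.0e-6 * (nranks * flop_max) / time_max;  /* flop/s -> Mflop/s */
  printf("MatMult ~ %.0f Mflop/s (table reports 11139)\n", mflops);
  return 0;
}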
Average time to get PetscTime(): 7.15256e-08
Average time for MPI_Barrier(): 1.01089e-05
Average time for zero size MPI_Send(): 2.5779e-06
#PETSc Option Table entries:
-ksp_rtol 1e-5
-ksp_type fgmres
-log_view
-mg_levels_ksp_type gmres
-mg_levels_pc_type jacobi
-pc_gamg_coarse_eq_limit 1000
-pc_gamg_reuse_interpolation true
-pc_gamg_square_graph 10
-pc_gamg_threshold -0.0
-pc_gamg_type agg
-pc_gamg_use_parallel_coarse_grid_solver
-pc_type gamg
-prop_steps 10
-ts_monitor
-ts_type cn
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with 64 bit PetscInt
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 16 sizeof(PetscInt) 8
Configure options: --prefix=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/petsc-3.13.0-ftkroqhobltwn5attjryx35pvhmfnwvp --with-ssl=0 --download-c2html=0 --download-sowing=0 --download-hwloc=0 CFLAGS= FFLAGS= CXXFLAGS= --with-cc=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpicc --with-cxx=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpic++ --with-fc=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpif90 --with-precision=double --with-scalar-type=complex --with-shared-libraries=1 --with-debugging=0 --with-64-bit-indices=1 COPTFLAGS= FOPTFLAGS= CXXOPTFLAGS= --with-blaslapack-lib="/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/intel-mkl-2020.0.166-xcdij7v4hccrboxlwsyrjnarehyaauzt/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64/libmkl_intel_lp64.so /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/intel-mkl-2020.0.166-xcdij7v4hccrboxlwsyrjnarehyaauzt/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64/libmkl_sequential.so /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/intel-mkl-2020.0.166-xcdij7v4hccrboxlwsyrjnarehyaauzt/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64/libmkl_core.so /lib64/libpthread.so /lib64/libm.so /lib64/libdl.so" --with-x=0 --with-clanguage=C --with-scalapack=0 --with-metis=1 --with-metis-dir=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/metis-5.1.0-cc5mnza4r4hdocybr7hgnaa55qdygegv --with-hdf5=1 --with-hdf5-dir=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hdf5-1.10.6-u2yapuygssqkrvo7qcihw66kzlg3ngtw --with-hypre=0 --with-parmetis=1 --with-parmetis-dir=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/parmetis-4.0.3-vxj3qtfmtdzyzyg2t3e224gocvgabu4h --with-mumps=0 --with-trilinos=0 --with-fftw=1 --with-fftw-dir=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/fftw-3.3.8-hlwkmpmr5pdrsrib63zmfqh5n5nga35w --with-valgrind=0 --with-cxx-dialect=C++11 --with-superlu_dist-include=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/superlu-dist-6.3.0-suzf4hdgfgdjpblojcglmp7wc2wcjepk/include --with-superlu_dist-lib=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/superlu-dist-6.3.0-suzf4hdgfgdjpblojcglmp7wc2wcjepk/lib/libsuperlu_dist.a --with-superlu_dist=1 --with-suitesparse=0
--with-zlib-include=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/zlib-1.2.11-fjzlxw5lmcb2y4s6ca2e4su4qteufcm7/include --with-zlib-lib=/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/zlib-1.2.11-fjzlxw5lmcb2y4s6ca2e4su4qteufcm7/lib/libz.so --with-zlib=1
-----------------------------------------
Libraries compiled on 2020-04-02 23:45:53 on xrmlite
Machine characteristics: Linux-4.18.0-147.5.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core
Using PETSc directory: /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/petsc-3.13.0-ftkroqhobltwn5attjryx35pvhmfnwvp
Using PETSc arch:
-----------------------------------------
Using C compiler: /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpicc -fPIC
Using Fortran compiler: /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpif90 -fPIC
-----------------------------------------
Using include paths: -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/petsc-3.13.0-ftkroqhobltwn5attjryx35pvhmfnwvp/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/superlu-dist-6.3.0-suzf4hdgfgdjpblojcglmp7wc2wcjepk/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/fftw-3.3.8-hlwkmpmr5pdrsrib63zmfqh5n5nga35w/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hdf5-1.10.6-u2yapuygssqkrvo7qcihw66kzlg3ngtw/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/parmetis-4.0.3-vxj3qtfmtdzyzyg2t3e224gocvgabu4h/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/metis-5.1.0-cc5mnza4r4hdocybr7hgnaa55qdygegv/include -I/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/zlib-1.2.11-fjzlxw5lmcb2y4s6ca2e4su4qteufcm7/include
-----------------------------------------
Using C linker: /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpicc
Using Fortran linker: /home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/bin/mpif90
Using libraries: -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/petsc-3.13.0-ftkroqhobltwn5attjryx35pvhmfnwvp/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/petsc-3.13.0-ftkroqhobltwn5attjryx35pvhmfnwvp/lib -lpetsc -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/superlu-dist-6.3.0-suzf4hdgfgdjpblojcglmp7wc2wcjepk/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/superlu-dist-6.3.0-suzf4hdgfgdjpblojcglmp7wc2wcjepk/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/fftw-3.3.8-hlwkmpmr5pdrsrib63zmfqh5n5nga35w/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/fftw-3.3.8-hlwkmpmr5pdrsrib63zmfqh5n5nga35w/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/intel-mkl-2020.0.166-xcdij7v4hccrboxlwsyrjnarehyaauzt/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64 -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/intel-mkl-2020.0.166-xcdij7v4hccrboxlwsyrjnarehyaauzt/compilers_and_libraries_2020.0.166/linux/mkl/lib/intel64 /lib64/libpthread.so /lib64/libm.so /lib64/libdl.so
-Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hdf5-1.10.6-u2yapuygssqkrvo7qcihw66kzlg3ngtw/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/hdf5-1.10.6-u2yapuygssqkrvo7qcihw66kzlg3ngtw/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/parmetis-4.0.3-vxj3qtfmtdzyzyg2t3e224gocvgabu4h/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/parmetis-4.0.3-vxj3qtfmtdzyzyg2t3e224gocvgabu4h/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/metis-5.1.0-cc5mnza4r4hdocybr7hgnaa55qdygegv/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/metis-5.1.0-cc5mnza4r4hdocybr7hgnaa55qdygegv/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/zlib-1.2.11-fjzlxw5lmcb2y4s6ca2e4su4qteufcm7/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/zlib-1.2.11-fjzlxw5lmcb2y4s6ca2e4su4qteufcm7/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.3.0/mpich-3.3.2-oxccmmod4vmpmxsz47se5pjxnsyy5kdt/lib -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib:/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib64 -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib/gcc/x86_64-pc-linux-gnu/8.3.0 -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib/gcc/x86_64-pc-linux-gnu/8.3.0 -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib64 -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib64 -Wl,-rpath,/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib -L/home/sajid/packages/spack/opt/spack/linux-centos8-broadwell/gcc-8.2.1/gcc-8.3.0-j573htph2tblzijltjxvql7hkkzzkpyn/lib -lsuperlu_dist -lfftw3_mpi -lfftw3 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lparmetis -lmetis -lm -lz -lstdc++ -ldl -lmpifort -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lstdc++ -ldl -----------------------------------------
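For reference, the solver stack recorded at the top of this log (TS type cn, SNES type ksponly, FGMRES with rtol 1e-5 and right preconditioning, GAMG of type agg with Jacobi-preconditioned GMRES on the levels) was selected entirely through the option table above; given the "./ex_dmda ... with 32 processors" line in the summary header, the run was presumably launched along the lines of mpiexec -n 32 ./ex_dmda with those options. The sketch below shows how the same TS/KSP/PC configuration could instead be fixed in C. It is only an illustration: the ex_dmda source is not part of this log, so the DMDA setup, the RHS/Jacobian routines, and the mapping of the application option -prop_steps onto TSSetMaxSteps() are assumptions.

/* Rough sketch of configuring the solver stack from this log in code
 * rather than through command-line options; not the actual ex_dmda source. */
#include <petscts.h>

int main(int argc, char **argv)
{
  TS             ts;
  SNES           snes;
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  ierr = TSCreate(PETSC_COMM_WORLD, &ts);CHKERRQ(ierr);
  ierr = TSSetType(ts, TSCN);CHKERRQ(ierr);              /* -ts_type cn                     */
  ierr = TSSetProblemType(ts, TS_LINEAR);CHKERRQ(ierr);  /* "Solving a linear TS problem"   */
  ierr = TSSetTimeStep(ts, 3.0808e-06);CHKERRQ(ierr);    /* dt shown by -ts_monitor         */
  ierr = TSSetMaxSteps(ts, 10);CHKERRQ(ierr);            /* assumed meaning of -prop_steps  */

  ierr = TSGetSNES(ts, &snes);CHKERRQ(ierr);             /* SNES becomes ksponly for linear TS */
  ierr = SNESGetKSP(snes, &ksp);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPFGMRES);CHKERRQ(ierr);       /* -ksp_type fgmres                */
  ierr = KSPSetTolerances(ksp, 1e-5, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT);CHKERRQ(ierr); /* -ksp_rtol 1e-5 */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);            /* -pc_type gamg                   */

  ierr = TSSetFromOptions(ts);CHKERRQ(ierr);             /* picks up the remaining -pc_gamg_* / -mg_levels_* options */

  /* ... attach the DMDA, RHS function/Jacobian, and initial condition, then TSSolve() ... */

  ierr = TSDestroy(&ts);CHKERRQ(ierr);
  return PetscFinalize();
}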