  0 SNES Function norm 1.030411923746e+00
    0 KSP Residual norm 2.639809157881e+02
    1 KSP Residual norm 3.426830772315e+00
    2 KSP Residual norm 4.350281531627e-02
    3 KSP Residual norm 9.635967010576e-04
    4 KSP Residual norm 1.522711400431e-05
    5 KSP Residual norm 2.065527838679e-07
  1 SNES Function norm 3.234968263664e-05
    0 KSP Residual norm 2.075824728329e+01
    1 KSP Residual norm 1.492330533819e-02
    2 KSP Residual norm 1.732640368611e-04
    3 KSP Residual norm 2.356574067538e-06
    4 KSP Residual norm 2.195849739475e-08
    5 KSP Residual norm 3.295422375197e-10
  2 SNES Function norm 2.483837030905e-07
    0 KSP Residual norm 1.700587405286e-01
    1 KSP Residual norm 1.043352193807e-04
    2 KSP Residual norm 7.235659719859e-07
    3 KSP Residual norm 7.267291018107e-09
    4 KSP Residual norm 9.946160948315e-11
  3 SNES Function norm 1.683331889260e-11
SNES Object: 16 MPI processes
  type: newtonls
  maximum iterations=50, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=14
  total number of function evaluations=4
  norm schedule ALWAYS
  SNESLineSearch Object: 16 MPI processes
    type: bt
      interpolation: cubic
      alpha=1.000000e-04
    maxstep=1.000000e+08, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=40
  KSP Object: 16 MPI processes
    type: gmres
      GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      GMRES: happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-09, absolute=1e-50, divergence=10000
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: 16 MPI processes
    type: mg
      MG: type is MULTIPLICATIVE, levels=8 cycles=v
        Cycles per PCApply=1
        Not using Galerkin computed coarse grid matrices
    Coarse grid solver -- level -------------------------------
      KSP Object: (mg_coarse_) 16 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_) 16 MPI processes
        type: redundant
          Redundant preconditioner: First (color=0) of 16 PCs follows
          KSP Object: (mg_coarse_redundant_) 1 MPI processes
            type: preonly
            maximum iterations=10000, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_coarse_redundant_) 1 MPI processes
            type: lu
              LU: out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
              matrix ordering: nd
              factor fill ratio given 5, needed 4.36162
                Factored matrix follows:
                  Mat Object: 1 MPI processes
                    type: seqaij
                    rows=441, cols=441
                    package used to perform factorization: petsc
                    total: nonzeros=9251, allocated nonzeros=9251
                    total number of mallocs used during MatSetValues calls =0
                      not using I-node routines
            linear system matrix = precond matrix:
            Mat Object: 1 MPI processes
              type: seqaij
              rows=441, cols=441
              total: nonzeros=2121, allocated nonzeros=2121
              total number of mallocs used during MatSetValues calls =0
                not using I-node routines
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=441, cols=441
          total: nonzeros=2121, allocated nonzeros=2121
          total number of mallocs used during MatSetValues calls =0
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object: (mg_levels_1_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_1_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=1681, cols=1681
          total: nonzeros=8241, allocated nonzeros=8241
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 2 -------------------------------
      KSP Object: (mg_levels_2_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_2_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=6561, cols=6561
          total: nonzeros=32481, allocated nonzeros=32481
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 3 -------------------------------
      KSP Object: (mg_levels_3_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_3_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=25921, cols=25921
          total: nonzeros=128961, allocated nonzeros=128961
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 4 -------------------------------
      KSP Object: (mg_levels_4_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_4_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=103041, cols=103041
          total: nonzeros=513921, allocated nonzeros=513921
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 5 -------------------------------
      KSP Object: (mg_levels_5_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_5_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=410881, cols=410881
          total: nonzeros=2.05184e+06, allocated nonzeros=2.05184e+06
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 6 -------------------------------
      KSP Object: (mg_levels_6_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_6_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=1640961, cols=1640961
          total: nonzeros=8.19968e+06, allocated nonzeros=8.19968e+06
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 7 -------------------------------
      KSP Object: (mg_levels_7_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_7_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=6558721, cols=6558721
          total: nonzeros=3.27834e+07, allocated nonzeros=3.27834e+07
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object: 16 MPI processes
    type: mpiaij
    rows=6558721, cols=6558721
    total: nonzeros=3.27834e+07, allocated nonzeros=3.27834e+07
    total number of mallocs used during MatSetValues calls =0
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./ex5 on a arch-linux2-c-opt named helios91 with 16 processors, by tnicolas Thu Oct 15 13:59:21 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015

                         Max       Max/Min        Avg      Total
Time (sec):           3.651e+00      1.00032   3.650e+00
Objects:              7.330e+02      1.00000   7.330e+02
Flops:                8.979e+08      1.00416   8.954e+08  1.433e+10
Flops/sec:            2.460e+08      1.00384   2.453e+08  3.925e+09
MPI Messages:         3.419e+03      1.98664   2.588e+03  4.140e+04
MPI Message Lengths:  6.014e+06      1.45667   1.964e+03  8.133e+07
MPI Reductions:       1.245e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 3.6497e+00 100.0%  1.4326e+10 100.0%  4.140e+04 100.0%  1.964e+03      100.0%  1.244e+03  99.9%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

SNESSolve              1 1.0 3.3621e+00 1.0 8.98e+08 1.0 4.1e+04 2.0e+03 1.2e+03 92 100 99 100 96  92 100 99 100 96  4261
SNESFunctionEval       4 1.0 4.6137e-02 1.0 1.81e+07 1.0 1.9e+02 5.1e+03 0.0e+00  1  2  0  1  0   1  2  0  1  0  6255
SNESJacobianEval      24 1.0 4.1735e-01 1.0 0.00e+00 0.0 1.2e+03 1.3e+03 4.8e+01 11  0  3  2  4  11  0  3  2  4     0
SNESLineSearch         3 1.0 8.8794e-02 1.0 3.82e+07 1.0 2.9e+02 5.1e+03 1.2e+01  2  4  1  2  1   2  4  1  2  1  6869
VecDot                 3 1.0 4.7750e-03 1.3 2.47e+06 1.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0  8241
VecMDot               14 1.0 4.8102e-02 2.2 3.29e+07 1.0 0.0e+00 0.0e+00 1.4e+01  1  4  0  0  1   1  4  0  0  1 10908
VecNorm               24 1.0 3.2932e-02 1.3 1.97e+07 1.0 0.0e+00 0.0e+00 2.4e+01  1  2  0  0  2   1  2  0  0  2  9560
VecScale             374 1.0 1.2402e-02 1.1 7.22e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  9241
VecCopy                9 1.0 1.9033e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecSet               502 1.0 3.6254e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecAXPY                3 1.0 5.6610e-03 1.1 2.47e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  6951
VecAYPX              119 1.0 3.6089e-02 1.5 9.33e+06 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0  4121
VecWAXPY               3 1.0 7.8232e-03 1.0 1.23e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2515
VecMAXPY              17 1.0 6.6202e-02 1.0 4.44e+07 1.0 0.0e+00 0.0e+00 0.0e+00  2  5  0  0  0   2  5  0  0  0 10700
VecPointwiseMult      21 1.0 1.1404e-03 1.2 4.13e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  5760
VecScatterBegin      828 1.0 3.6472e-02 1.2 0.00e+00 0.0 3.8e+04 1.2e+03 0.0e+00  1  0 91 56  0   1  0 91 56  0     0
VecScatterEnd        828 1.0 6.7017e-02 7.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecReduceArith         6 1.0 4.7569e-03 1.2 4.93e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 16545
VecReduceComm          3 1.0 4.8470e-04 8.7 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          17 1.0 2.8440e-02 1.3 2.10e+07 1.0 0.0e+00 0.0e+00 1.7e+01  1  2  0  0  1   1  2  0  0  1 11761
MatMult              136 1.0 3.1470e-01 1.1 1.47e+08 1.0 6.5e+03 1.9e+03 0.0e+00  8 16 16 15  0   8 16 16 15  0  7438
MatMultAdd           119 1.0 9.3542e-02 1.0 4.19e+07 1.0 3.9e+03 5.3e+02 0.0e+00  3  5  9  3  0   3  5  9  3  0  7151
MatMultTranspose     147 1.0 1.1993e-01 1.2 5.18e+07 1.0 4.9e+03 5.3e+02 0.0e+00  3  6 12  3  0   3  6 12  3  0  6890
MatSolve              17 1.0 5.3740e-04 1.1 3.07e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  9141
MatSOR               238 1.0 1.9108e+00 1.0 5.14e+08 1.0 1.7e+04 1.5e+03 4.8e+02 52 57 41 31 38  52 57 41 31 38  4291
MatLUFactorSym         1 1.0 3.1710e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         3 1.0 4.9996e-04 1.1 3.48e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 11124
MatCopy                2 1.0 1.0014e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             1 1.0 6.6042e-05 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatResidual          119 1.0 2.1199e-01 1.2 9.32e+07 1.0 5.7e+03 1.5e+03 0.0e+00  5 10 14 10  0   5 10 14 10  0  7012
MatAssemblyBegin      40 1.0 1.7027e-02 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 7.8e+01  0  0  0  0  6   0  0  0  0  6     0
MatAssemblyEnd        40 1.0 9.6050e-02 1.0 0.00e+00 0.0 1.2e+03 2.5e+02 1.2e+02  3  0  3  0 10   3  0  3  0 10     0
MatGetRowIJ            1 1.0 2.3842e-05 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       3 1.0 5.1308e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetOrdering         1 1.0 1.6594e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView               11 1.2 1.4901e-03 2.9 0.00e+00 0.0 0.0e+00 0.0e+00 9.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatRedundantMat        3 1.0 6.0582e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPGMRESOrthog        14 1.0 9.4986e-02 1.4 6.57e+07 1.0 0.0e+00 0.0e+00 1.4e+01  2  7  0  0  1   2  7  0  0  1 11048
KSPSetUp              30 1.0 1.6941e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.6e+01  0  0  0  0  3   0  0  0  0  3     0
KSPSolve               3 1.0 2.9345e+00 1.0 8.54e+08 1.0 4.1e+04 1.9e+03 1.2e+03 80 95 98 97 94  80 95 98 97 94  4645
PCSetUp                3 1.0 3.5642e-01 1.0 1.06e+07 1.0 5.2e+03 7.1e+03 6.3e+02 10  1 13 45 51  10  1 13 45 51   476
PCApply               17 1.0 2.3076e+00 1.0 6.91e+08 1.0 3.5e+04 1.1e+03 4.8e+02 63 77 84 47 38  63 77 84 47 38  4779
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

                SNES         1              1         1332     0
      SNESLineSearch         1              1          864     0
              DMSNES         9              9         6536     0
              Vector       428            428    269498720     0
      Vector Scatter        40             40     11027856     0
              Matrix        55             55     96291584     0
    Distributed Mesh        17             17        83936     0
Star Forest Bipartite Graph  34             34        28992     0
     Discrete System        17             17        14416     0
           Index Set        85             85      5045560     0
   IS L to G Mapping        16             16      4420040     0
       Krylov Solver        10             10        29600     0
     DMKSP interface         8              8         5184     0
      Preconditioner        10             10         9752     0
              Viewer         2              1          760     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 1.7643e-06
Average time for zero size MPI_Send(): 5.94556e-06
#PETSc Option Table entries:
-da_grid_x 21
-da_grid_y 21
-da_refine 7
-ksp_monitor
-ksp_rtol 1e-9
-log_summary
-mg_levels_ksp_type richardson
-pc_mg_levels 8
-pc_type mg
-snes_monitor
-snes_view
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
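The option table above, together with the binary name and process count reported in the performance summary, suggests how this run was launched. The following is a reconstructed sketch only: the launcher (mpiexec vs. srun or another site wrapper) is an assumption, while the binary, the 16 processes, and every option come from the log itself.

```shell
# Assumed launcher; binary, process count, and options taken from the log above.
mpiexec -n 16 ./ex5 \
    -da_grid_x 21 -da_grid_y 21 -da_refine 7 \
    -pc_type mg -pc_mg_levels 8 -mg_levels_ksp_type richardson \
    -ksp_rtol 1e-9 -ksp_monitor -snes_monitor -snes_view -log_summary
```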
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=/csc/softs/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real --with-debugging=0 --with-x=0 --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-fortran --known-mpi-shared-libraries=1 --with-scalar-type=real --with-precision=double --CFLAGS="-g -O3 -mavx -mkl" --CXXFLAGS="-g -O3 -mavx -mkl" --FFLAGS="-g -O3 -mavx -mkl"
-----------------------------------------
Libraries compiled on Mon Sep 28 20:22:47 2015 on helios85
Machine characteristics: Linux-2.6.32-573.1.1.el6.Bull.80.x86_64-x86_64-with-redhat-6.4-Santiago
Using PETSc directory: /csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------
Using C compiler: mpicc -g -O3 -mavx -mkl -fPIC ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -g -O3 -mavx -mkl -fPIC ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/arch-linux2-c-opt/include -I/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/include -I/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/include -I/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/arch-linux2-c-opt/include -I/opt/mpi/bullxmpi/1.2.8.2/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/arch-linux2-c-opt/lib -L/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/arch-linux2-c-opt/lib -lpetsc -lhwloc -lxml2 -lssl -lcrypto -Wl,-rpath,/opt/mpi/bullxmpi/1.2.8.2/lib -L/opt/mpi/bullxmpi/1.2.8.2/lib -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -lmpi_f90 -lmpi_f77 -lm -lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/opt/mpi/bullxmpi/1.2.8.2/lib -L/opt/mpi/bullxmpi/1.2.8.2/lib -lmpi -lnuma -lrt -lnsl -lutil -Wl,-rpath,/opt/mpi/bullxmpi/1.2.8.2/lib -L/opt/mpi/bullxmpi/1.2.8.2/lib -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -limf -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -Wl,-rpath,/opt/mpi/bullxmpi/1.2.8.2/lib -L/opt/mpi/bullxmpi/1.2.8.2/lib -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -ldl
-----------------------------------------