  0 SNES Function norm 1.030411923746e+00
    0 KSP Residual norm 2.674796452188e+02
    1 KSP Residual norm 1.208983846821e+00
    2 KSP Residual norm 1.537515219367e-02
    3 KSP Residual norm 2.518582537442e-04
    4 KSP Residual norm 4.893820738417e-06
    5 KSP Residual norm 1.162422134374e-07
  1 SNES Function norm 3.234968108608e-05
    0 KSP Residual norm 2.088005946736e+01
    1 KSP Residual norm 1.475821829349e-03
    2 KSP Residual norm 5.761996983759e-06
    3 KSP Residual norm 8.148534394538e-08
    4 KSP Residual norm 1.322673440920e-09
  2 SNES Function norm 2.483837007278e-07
    0 KSP Residual norm 1.710688907938e-01
    1 KSP Residual norm 8.635505000884e-06
    2 KSP Residual norm 1.084012488174e-08
    3 KSP Residual norm 2.149921004144e-10
    4 KSP Residual norm 5.417386519310e-12
  3 SNES Function norm 1.683215699073e-11
SNES Object: 16 MPI processes
  type: newtonls
  maximum iterations=50, maximum function evaluations=10000
  tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
  total number of linear solver iterations=13
  total number of function evaluations=4
  norm schedule ALWAYS
  SNESLineSearch Object: 16 MPI processes
    type: bt
      interpolation: cubic
      alpha=1.000000e-04
    maxstep=1.000000e+08, minlambda=1.000000e-12
    tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
    maximum iterations=40
  KSP Object: 16 MPI processes
    type: gmres
      GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
      GMRES: happy breakdown tolerance 1e-30
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-09, absolute=1e-50, divergence=10000
    left preconditioning
    using PRECONDITIONED norm type for convergence test
  PC Object: 16 MPI processes
    type: mg
      MG: type is MULTIPLICATIVE, levels=6 cycles=v
        Cycles per PCApply=1
        Not using Galerkin computed coarse grid matrices
    Coarse grid solver -- level -------------------------------
      KSP Object: (mg_coarse_) 16 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (mg_coarse_) 16 MPI processes
        type: redundant
          Redundant preconditioner: First (color=0) of 16 PCs follows
          KSP Object: (mg_coarse_redundant_) 1 MPI processes
            type: preonly
            maximum iterations=10000, initial guess is zero
            tolerances: relative=1e-05, absolute=1e-50, divergence=10000
            left preconditioning
            using NONE norm type for convergence test
          PC Object: (mg_coarse_redundant_) 1 MPI processes
            type: lu
              LU: out-of-place factorization
              tolerance for zero pivot 2.22045e-14
              using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
              matrix ordering: nd
              factor fill ratio given 5, needed 7.22961
                Factored matrix follows:
                  Mat Object: 1 MPI processes
                    type: seqaij
                    rows=6561, cols=6561
                    package used to perform factorization: petsc
                    total: nonzeros=234825, allocated nonzeros=234825
                    total number of mallocs used during MatSetValues calls =0
                      not using I-node routines
            linear system matrix = precond matrix:
            Mat Object: 1 MPI processes
              type: seqaij
              rows=6561, cols=6561
              total: nonzeros=32481, allocated nonzeros=32481
              total number of mallocs used during MatSetValues calls =0
                not using I-node routines
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=6561, cols=6561
          total: nonzeros=32481, allocated nonzeros=32481
          total number of mallocs used during MatSetValues calls =0
    Down solver (pre-smoother) on level 1 -------------------------------
      KSP Object: (mg_levels_1_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_1_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=25921, cols=25921
          total: nonzeros=128961, allocated nonzeros=128961
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 2 -------------------------------
      KSP Object: (mg_levels_2_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_2_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=103041, cols=103041
          total: nonzeros=513921, allocated nonzeros=513921
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 3 -------------------------------
      KSP Object: (mg_levels_3_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_3_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=410881, cols=410881
          total: nonzeros=2.05184e+06, allocated nonzeros=2.05184e+06
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 4 -------------------------------
      KSP Object: (mg_levels_4_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_4_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=1640961, cols=1640961
          total: nonzeros=8.19968e+06, allocated nonzeros=8.19968e+06
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
    Down solver (pre-smoother) on level 5 -------------------------------
      KSP Object: (mg_levels_5_) 16 MPI processes
        type: richardson
          Richardson: damping factor=1
        maximum iterations=2
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using nonzero initial guess
        using NONE norm type for convergence test
      PC Object: (mg_levels_5_) 16 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object: 16 MPI processes
          type: mpiaij
          rows=6558721, cols=6558721
          total: nonzeros=3.27834e+07, allocated nonzeros=3.27834e+07
          total number of mallocs used during MatSetValues calls =0
    Up solver (post-smoother) same as down solver (pre-smoother)
    linear system matrix = precond matrix:
    Mat Object: 16 MPI processes
      type: mpiaij
      rows=6558721, cols=6558721
      total: nonzeros=3.27834e+07, allocated nonzeros=3.27834e+07
      total number of mallocs used during MatSetValues calls =0
************************************************************************************************************************
***            WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document             ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./ex5 on a arch-linux2-c-opt named helios91 with 16 processors, by tnicolas Thu Oct 15 13:58:59 2015
Using Petsc Release Version 3.6.0, Jun, 09, 2015

                         Max       Max/Min        Avg      Total
Time (sec):           3.632e+00      1.00031   3.631e+00
Objects:              5.390e+02      1.00000   5.390e+02
Flops:                8.795e+08      1.00391   8.772e+08  1.403e+10
Flops/sec:            2.422e+08      1.00364   2.416e+08  3.865e+09
MPI Messages:         2.430e+03      1.92552   1.858e+03  2.973e+04
MPI Message Lengths:  6.534e+06      1.36501   3.056e+03  9.086e+07
MPI Reductions:       9.110e+02      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                          e.g., VecAXPY() for real vectors of length N --> 2N flops
                          and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:  ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                       Avg     %Total     Avg     %Total   counts   %Total     Avg        %Total   counts   %Total
 0:  Main Stage:    3.6309e+00 100.0%  1.4035e+10 100.0%  2.973e+04 100.0%  3.056e+03     100.0%  9.100e+02  99.9%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase          %F - percent flops in this phase
      %M - percent messages in this phase      %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

SNESSolve              1 1.0 3.3418e+00 1.0 8.80e+08 1.0 2.9e+04 3.1e+03 8.6e+02 92100 99100 95  92100 99100 95  4200
SNESFunctionEval       4 1.0 5.0001e-02 1.1 1.81e+07 1.0 1.9e+02 5.1e+03 0.0e+00  1  2  1  1  0   1  2  1  1  0  5772
SNESJacobianEval      18 1.0 4.6010e-01 1.0 0.00e+00 0.0 8.6e+02 1.7e+03 3.6e+01 13  0  3  2  4  13  0  3  2  4     0
SNESLineSearch         3 1.0 9.9964e-02 1.0 3.82e+07 1.0 2.9e+02 5.1e+03 1.2e+01  3  4  1  2  1   3  4  1  2  1  6101
VecDot                 3 1.0 1.0376e-02 3.1 2.47e+06 1.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0  3793
VecMDot               13 1.0 6.2595e-02 1.8 2.88e+07 1.0 0.0e+00 0.0e+00 1.3e+01  1  3  0  0  1   1  3  0  0  1  7335
VecNorm               23 1.0 3.6779e-02 1.3 1.89e+07 1.0 0.0e+00 0.0e+00 2.3e+01  1  2  0  0  3   1  2  0  0  3  8203
VecScale             256 1.0 1.1858e-02 1.2 6.79e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  9091
VecCopy                9 1.0 1.9356e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecSet               358 1.0 3.4759e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecAXPY                3 1.0 5.6140e-03 1.2 2.47e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  7010
VecAYPX               80 1.0 3.4003e-02 1.5 8.77e+06 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0  4112
VecWAXPY               3 1.0 7.8418e-03 1.0 1.23e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  2509
VecMAXPY              16 1.0 5.9146e-02 1.1 3.94e+07 1.0 0.0e+00 0.0e+00 0.0e+00  2  4  0  0  0   2  4  0  0  0 10645
VecPointwiseMult      15 1.0 1.0986e-03 1.3 4.13e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  5973
VecScatterBegin      575 1.0 3.7558e-02 1.2 0.00e+00 0.0 2.7e+04 2.0e+03 0.0e+00  1  0 91 59  0   1  0 91 59  0     0
VecScatterEnd        575 1.0 7.3482e-02 4.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
VecReduceArith         6 1.0 4.8752e-03 1.3 4.93e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 16144
VecReduceComm          3 1.0 1.1160e-03 4.7 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          16 1.0 2.6518e-02 1.3 1.97e+07 1.0 0.0e+00 0.0e+00 1.6e+01  1  2  0  0  2   1  2  0  0  2 11872
MatMult               96 1.0 2.9586e-01 1.1 1.38e+08 1.0 4.6e+03 2.5e+03 0.0e+00  8 16 16 13  0   8 16 16 13  0  7443
MatMultAdd            80 1.0 8.8725e-02 1.1 3.94e+07 1.0 2.6e+03 7.3e+02 0.0e+00  2  4  9  2  0   2  4  9  2  0  7089
MatMultTranspose     100 1.0 1.1949e-01 1.2 4.93e+07 1.0 3.3e+03 7.3e+02 0.0e+00  3  6 11  3  0   3  6 11  3  0  6580
MatSolve              16 1.0 1.2011e-02 1.0 7.41e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  9871
MatSOR               160 1.0 1.8167e+00 1.0 4.83e+08 1.0 1.2e+04 2.0e+03 3.2e+02 50 55 39 25 35  50 55 39 25 35  4245
MatLUFactorSym         1 1.0 5.2259e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         3 1.0 3.1375e-02 1.3 3.00e+07 1.0 0.0e+00 0.0e+00 0.0e+00  1  3  0  0  0   1  3  0  0  0 15303
MatCopy                2 1.0 1.5712e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             1 1.0 3.2210e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatResidual           80 1.0 2.0021e-01 1.2 8.76e+07 1.0 3.8e+03 2.0e+03 0.0e+00  5 10 13  8  0   5 10 13  8  0  6981
MatAssemblyBegin      30 1.0 6.1281e-02 4.9 0.00e+00 0.0 0.0e+00 0.0e+00 5.8e+01  1  0  0  0  6   1  0  0  0  6     0
MatAssemblyEnd        30 1.0 1.0372e-01 1.1 0.00e+00 0.0 9.1e+02 3.4e+02 8.8e+01  3  0  3  0 10   3  0  3  0 10     0
MatGetRowIJ            1 1.0 2.3508e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrice       3 1.0 1.9548e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatGetOrdering         1 1.0 2.1350e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView                9 1.3 1.0579e-03 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 7.0e+00  0  0  0  0  1   0  0  0  0  1     0
MatRedundantMat        3 1.0 2.4500e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  1   0  0  0  0  1     0
KSPGMRESOrthog        13 1.0 1.0351e-01 1.4 5.75e+07 1.0 0.0e+00 0.0e+00 1.3e+01  3  7  0  0  1   3  7  0  0  1  8871
KSPSetUp              24 1.0 1.6666e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.8e+01  0  0  0  0  3   0  0  0  0  3     0
KSPSolve               3 1.0 2.8714e+00 1.0 8.36e+08 1.0 2.9e+04 3.0e+03 8.4e+02 79 95 97 97 92  79 95 97 97 92  4646
PCSetUp                3 1.0 4.0208e-01 1.0 4.03e+07 1.0 3.9e+03 9.8e+03 4.6e+02 11  5 13 42 50  11  5 13 42 50  1602
PCApply               16 1.0 2.2054e+00 1.0 6.57e+08 1.0 2.4e+04 1.9e+03 3.2e+02 60 75 82 52 35  60 75 82 52 35  4755
------------------------------------------------------------------------------------------------------------------------
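The table is internally consistent with the flop convention and the Mflop/s formula stated above. A quick sanity check, assuming the finest-grid vector length N = 6558721 from the solver view and the SNESSolve totals (1.4035e10 flops over 3.3418 s) from the stage summary and table:

    # VecAXPY on a real vector of length N costs 2N flops; 3 calls with
    # N = 6558721 give ~3.9e7 flops total, matching 16 * 2.47e6 from the VecAXPY row
    awk 'BEGIN { printf "VecAXPY flops: %.3e\n", 3 * 2 * 6558721 }'

    # Mflop/s = 1e-6 * (sum of flops over all processors) / (max time over all processors):
    # this reproduces the 4200 Mflop/s tabulated for SNESSolve
    awk 'BEGIN { printf "SNESSolve rate: %.0f Mflop/s\n", 1e-6 * 1.4035e10 / 3.3418 }'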

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

                      SNES     1              1         1332     0
            SNESLineSearch     1              1          864     0
                    DMSNES     7              7         5048     0
                    Vector   306            306    260499416     0
            Vector Scatter    30             30     11018432     0
                    Matrix    41             41     99826576     0
          Distributed Mesh    13             13        64096     0
Star Forest Bipartite Graph   26             26        22144     0
           Discrete System    13             13        11024     0
                 Index Set    65             65      5128548     0
         IS L to G Mapping    12             12      4416000     0
             Krylov Solver     8              8        27072     0
           DMKSP interface     6              6         3888     0
            Preconditioner     8              8         7816     0
                    Viewer     2              1          760     0
========================================================================================================================
Average time to get PetscTime(): 0
Average time for MPI_Barrier(): 1.57356e-06
Average time for zero size MPI_Send(): 5.57303e-06
#PETSc Option Table entries:
-da_grid_x 21
-da_grid_y 21
-da_refine 7
-ksp_monitor
-ksp_rtol 1e-9
-log_summary
-mg_levels_ksp_type richardson
-pc_mg_levels 6
-pc_type mg
-snes_monitor
-snes_view
#End of PETSc Option Table entries
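For reference, a command line that should reproduce a run of this shape, assembled from the option table above and the run header (./ex5 on 16 processors). The launcher name (mpiexec vs. mpirun vs. a scheduler wrapper) depends on the local MPI installation, so treat this as a sketch:

    # 16 ranks, 21x21 base grid refined 7 times (2561x2561 = 6558721 unknowns on the finest level),
    # Newton + GMRES with 6-level multigrid, Richardson/SOR smoothers on the levels
    mpiexec -n 16 ./ex5 \
        -da_grid_x 21 -da_grid_y 21 -da_refine 7 \
        -snes_monitor -snes_view \
        -ksp_monitor -ksp_rtol 1e-9 \
        -pc_type mg -pc_mg_levels 6 -mg_levels_ksp_type richardson \
        -log_summary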
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --prefix=/csc/softs/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real --with-debugging=0 --with-x=0 --with-cc=mpicc --with-fc=mpif90 --with-cxx=mpicxx --with-fortran --known-mpi-shared-libraries=1 --with-scalar-type=real --with-precision=double --CFLAGS="-g -O3 -mavx -mkl" --CXXFLAGS="-g -O3 -mavx -mkl" --FFLAGS="-g -O3 -mavx -mkl"
-----------------------------------------
Libraries compiled on Mon Sep 28 20:22:47 2015 on helios85
Machine characteristics: Linux-2.6.32-573.1.1.el6.Bull.80.x86_64-x86_64-with-redhat-6.4-Santiago
Using PETSc directory: /csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------
Using C compiler: mpicc -g -O3 -mavx -mkl -fPIC ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -g -O3 -mavx -mkl -fPIC ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/arch-linux2-c-opt/include -I/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/include -I/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/include -I/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/arch-linux2-c-opt/include -I/opt/mpi/bullxmpi/1.2.8.2/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/arch-linux2-c-opt/lib -L/csc/releases/buildlog/anl/petsc-3.6.0/intel-15.0.0.090/bullxmpi-1.2.8.2/real/petsc-3.6.0/arch-linux2-c-opt/lib -lpetsc -lhwloc -lxml2 -lssl -lcrypto -Wl,-rpath,/opt/mpi/bullxmpi/1.2.8.2/lib -L/opt/mpi/bullxmpi/1.2.8.2/lib -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -lmpi_f90 -lmpi_f77 -lm -lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/opt/mpi/bullxmpi/1.2.8.2/lib -L/opt/mpi/bullxmpi/1.2.8.2/lib -lmpi -lnuma -lrt -lnsl -lutil -Wl,-rpath,/opt/mpi/bullxmpi/1.2.8.2/lib -L/opt/mpi/bullxmpi/1.2.8.2/lib -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -limf -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -Wl,-rpath,/opt/mpi/bullxmpi/1.2.8.2/lib -L/opt/mpi/bullxmpi/1.2.8.2/lib -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.7 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -L/opt/intel/composer_xe_2015.0.090/mkl/lib/intel64 -ldl
-----------------------------------------