<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=us-ascii">
</head>
<body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class="">
<br class="">
<div>
<blockquote type="cite" class="">
<div class="">On Jan 14, 2016, at 3:09 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" class="">bsmith@mcs.anl.gov</a>> wrote:</div>
<br class="Apple-interchange-newline">
<div class="">
<blockquote type="cite" style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class="">
<br class="Apple-interchange-newline">
On Jan 14, 2016, at 2:01 PM, Griffith, Boyce Eugene <<a href="mailto:boyceg@email.unc.edu" class="">boyceg@email.unc.edu</a>> wrote:<br class="">
<br class="">
<blockquote type="cite" class=""><br class="">
On Jan 14, 2016, at 2:24 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov" class="">bsmith@mcs.anl.gov</a>> wrote:<br class="">
<br class="">
<br class="">
Matt is right, there is a lot of "missing" time from the output. Please send the output from -ksp_view so we can see exactly what solver is being used.<span class="Apple-converted-space"> </span><br class="">
<br class="">
From the output we have:<br class="">
<br class="">
Nonlinear solver 78 % of the time (so your "setup code" outside of PETSc is taking about 22% of the time)<br class="">
Linear solver 77 % of the time (this is reasonable; pretty much the entire cost of the nonlinear solve is the linear solve)<br class="">
Time to set up the preconditioner is 19% (10 + 9) <br class="">
Time of iteration in KSP 35 % (this is the sum of the vector operations and MatMult() and MatSolve())<br class="">
<br class="">
So 77 - (19 + 35) = 23 % unexplained time inside the linear solver (custom preconditioner???)<br class="">
<br class="">
Also getting the results with Instruments or HPCToolkit would be useful (so long as we don't need to install HPCToolkit ourselves to see the results).<br class="">
</blockquote>
<br class="">
Thanks, Barry (& Matt & Dave) --- This is a solver that is mixing some matrix-based stuff implemented using PETSc along with some matrix-free stuff that is built on top of SAMRAI. Amneet and I should take a look at performance off-list first.<br class="">
</blockquote>
<br style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class="">
<span style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px; float: none; display: inline !important;" class=""> Just
put a PetscLogEvent() in (or several) to track that part. Plus put an event or two outside the SNESSolve to track the setup time outside PETSc.</span><br style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class="">
<br style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class="">
<span style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px; float: none; display: inline !important;" class=""> The
PETSc time looks reasonable; at most, I imagine any optimizations we could do would bring it down by a small percentage.</span><br style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class="">
</div>
</blockquote>
<div><br class="">
</div>
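<div>(For anyone following the thread, a minimal sketch of what Barry is suggesting, using PETSc's standard logging API; the event name and the timed region are placeholders, not our actual solver code:)</div>
<div><br class=""></div>

```c
#include <petscsys.h>

/* Placeholder event, registered once at startup. */
static PetscLogEvent MATRIX_FREE_APPLY;

PetscErrorCode RegisterEvents(void)
{
  PetscErrorCode ierr;
  PetscClassId   classid;

  ierr = PetscClassIdRegister("Application", &classid);CHKERRQ(ierr);
  ierr = PetscLogEventRegister("MatrixFreeApply", classid, &MATRIX_FREE_APPLY);CHKERRQ(ierr);
  return 0;
}

PetscErrorCode ApplyMatrixFreeOperator(void)
{
  PetscErrorCode ierr;

  ierr = PetscLogEventBegin(MATRIX_FREE_APPLY, 0, 0, 0, 0);CHKERRQ(ierr);
  /* ... the SAMRAI-based matrix-free apply would go here ... */
  ierr = PetscLogEventEnd(MATRIX_FREE_APPLY, 0, 0, 0, 0);CHKERRQ(ierr);
  return 0;
}
```

<div>The event then shows up as its own line in the -log_summary table, which should account for the unexplained time inside the linear solve.</div>
<div><br class=""></div>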
<div>Here is a bit more info about what we are trying to do:</div>
<div><br class="">
</div>
<div>This is a Vanka-type MG preconditioner for a Stokes-like system on a structured grid. (Currently just uniform grids, but hopefully soon with AMR.) For the smoother, we are using damped Richardson + ASM with relatively small block subdomains --- e.g., all
DOFs associated with 8x8 cells in 2D (~300 DOFs), or 8x8x8 in 3D (~2500 DOFs). Unfortunately, MG iteration counts really tank when using smaller subdomains.</div>
<div><br class="">
</div>
<div>I can't remember whether we have quantified this carefully, but PCASM seems to bog down with smaller subdomains. A question is whether there are different implementation choices that could make the case of "lots of little subdomains" run faster. But before
we get to that, Amneet and I should take a more careful look at overall solver performance.</div>
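<div><br class=""></div>
<div>(One concrete experiment, just using the standard PCASM knobs with the same option prefix that appears in Amneet's log below; the values are illustrative, and the overlap option only applies when PETSc builds the subdomains itself:)</div>

```text
-stokes_ib_pc_level_pc_asm_type restrict    # restricted ASM; often cheaper than basic
-stokes_ib_pc_level_pc_asm_overlap 1        # only meaningful for PETSc-generated subdomains
-stokes_ib_pc_level_sub_pc_type ilu         # inexact subdomain solves instead of exact LU
```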
<div><br class="">
</div>
<div>(We are also starting to play around with PCFIELDSPLIT for this problem, although we don't have many ideas about how to handle the Schur complement.)</div>
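<div><br class=""></div>
<div>(In case it is useful context, the sort of Schur-complement options we have started experimenting with look roughly like this; the prefix is hypothetical, and selfp is just one of several built-in choices for preconditioning the Schur complement:)</div>

```text
-stokes_ib_pc_type fieldsplit
-stokes_ib_pc_fieldsplit_type schur
-stokes_ib_pc_fieldsplit_schur_fact_type lower
-stokes_ib_pc_fieldsplit_schur_precondition selfp
```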
<div><br class="">
</div>
<div>Thanks,</div>
<div><br class="">
</div>
<div>-- Boyce</div>
<div><br class="">
</div>
<blockquote type="cite" class="">
<div class=""><br style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class="">
<br style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class="">
<span style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px; float: none; display: inline !important;" class=""> Barry</span><br style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class="">
<br style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class="">
<blockquote type="cite" style="font-family: Helvetica; font-size: 12px; font-style: normal; font-variant: normal; font-weight: normal; letter-spacing: normal; line-height: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px;" class="">
<br class="">
-- Boyce<br class="">
<br class="">
<blockquote type="cite" class=""><br class="">
<br class="">
Barry<br class="">
<br class="">
<blockquote type="cite" class="">On Jan 14, 2016, at 1:26 AM, Bhalla, Amneet Pal S <<a href="mailto:amneetb@live.unc.edu" class="">amneetb@live.unc.edu</a>> wrote:<br class="">
<br class="">
<br class="">
<br class="">
<blockquote type="cite" class="">On Jan 13, 2016, at 9:17 PM, Griffith, Boyce Eugene <<a href="mailto:boyceg@email.unc.edu" class="">boyceg@email.unc.edu</a>> wrote:<br class="">
<br class="">
I see one hot spot:<br class="">
</blockquote>
<br class="">
<br class="">
Here is with opt build<br class="">
<br class="">
************************************************************************************************************************<br class="">
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document ***<br class="">
************************************************************************************************************************<br class="">
<br class="">
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------<br class="">
<br class="">
./main2d on a linux-opt named aorta with 1 processor, by amneetb Thu Jan 14 02:24:43 2016<br class="">
Using Petsc Development GIT revision: v3.6.3-3098-ga3ecda2 GIT Date: 2016-01-13 21:30:26 -0600<br class="">
<br class="">
Max Max/Min Avg Total<span class="Apple-converted-space"> </span><br class="">
Time (sec): 1.018e+00 1.00000 1.018e+00<br class="">
Objects: 2.935e+03 1.00000 2.935e+03<br class="">
Flops: 4.957e+08 1.00000 4.957e+08 4.957e+08<br class="">
Flops/sec: 4.868e+08 1.00000 4.868e+08 4.868e+08<br class="">
MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00<br class="">
MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00<br class="">
MPI Reductions: 0.000e+00 0.00000<br class="">
<br class="">
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)<br class="">
e.g., VecAXPY() for real vectors of length N --> 2N flops<br class="">
and VecAXPY() for complex vectors of length N --> 8N flops<br class="">
<br class="">
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --<br class="">
Avg %Total Avg %Total counts %Total Avg %Total counts %Total<span class="Apple-converted-space"> </span><br class="">
0: Main Stage: 1.0183e+00 100.0% 4.9570e+08 100.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%<span class="Apple-converted-space"> </span><br class="">
<br class="">
------------------------------------------------------------------------------------------------------------------------<br class="">
See the 'Profiling' chapter of the users' manual for details on interpreting output.<br class="">
Phase summary info:<br class="">
Count: number of times phase was executed<br class="">
Time and Flops: Max - maximum over all processors<br class="">
Ratio - ratio of maximum to minimum over all processors<br class="">
Mess: number of messages sent<br class="">
Avg. len: average message length (bytes)<br class="">
Reduct: number of global reductions<br class="">
Global: entire computation<br class="">
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().<br class="">
%T - percent time in this phase %F - percent flops in this phase<br class="">
%M - percent messages in this phase %L - percent message lengths in this phase<br class="">
%R - percent reductions in this phase<br class="">
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)<br class="">
------------------------------------------------------------------------------------------------------------------------<br class="">
Event Count Time (sec) Flops --- Global --- --- Stage --- Total<br class="">
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s<br class="">
------------------------------------------------------------------------------------------------------------------------<br class="">
<br class="">
--- Event Stage 0: Main Stage<br class="">
<br class="">
VecDot 4 1.0 2.9564e-05 1.0 3.31e+04 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1120<br class="">
VecDotNorm2 272 1.0 1.4565e-03 1.0 4.25e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 2920<br class="">
VecMDot 624 1.0 8.4300e-03 1.0 5.29e+06 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 627<br class="">
VecNorm 565 1.0 3.8033e-03 1.0 4.38e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1151<br class="">
VecScale 86 1.0 5.5480e-04 1.0 1.55e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 279<br class="">
VecCopy 28 1.0 5.2261e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br class="">
VecSet 14567 1.0 1.2443e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0<br class="">
VecAXPY 903 1.0 4.2996e-03 1.0 6.66e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 1550<br class="">
VecAYPX 225 1.0 1.2550e-03 1.0 8.55e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 681<br class="">
VecAXPBYCZ 42 1.0 1.7118e-04 1.0 3.45e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2014<br class="">
VecWAXPY 70 1.0 1.9503e-04 1.0 2.98e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1528<br class="">
VecMAXPY 641 1.0 1.1136e-02 1.0 5.29e+06 1.0 0.0e+00 0.0e+00 0.0e+00 1 1 0 0 0 1 1 0 0 0 475<br class="">
VecSwap 135 1.0 4.5896e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br class="">
VecAssemblyBegin 745 1.0 4.9477e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br class="">
VecAssemblyEnd 745 1.0 9.2411e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br class="">
VecScatterBegin 40831 1.0 3.4502e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0<br class="">
BuildTwoSidedF 738 1.0 2.6712e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br class="">
MatMult 513 1.0 9.1235e-02 1.0 7.75e+07 1.0 0.0e+00 0.0e+00 0.0e+00 9 16 0 0 0 9 16 0 0 0 849<br class="">
MatSolve 13568 1.0 2.3605e-01 1.0 3.45e+08 1.0 0.0e+00 0.0e+00 0.0e+00 23 70 0 0 0 23 70 0 0 0 1460<br class="">
MatLUFactorSym 84 1.0 3.7430e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 4 0 0 0 0 4 0 0 0 0 0<br class="">
MatLUFactorNum 85 1.0 3.9623e-02 1.0 4.19e+07 1.0 0.0e+00 0.0e+00 0.0e+00 4 8 0 0 0 4 8 0 0 0 1058<br class="">
MatILUFactorSym 1 1.0 3.3617e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br class="">
MatScale 4 1.0 2.5511e-04 1.0 2.51e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 984<br class="">
MatAssemblyBegin 108 1.0 6.3658e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br class="">
MatAssemblyEnd 108 1.0 2.9490e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br class="">
MatGetRow 33120 1.0 2.0157e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0<br class="">
MatGetRowIJ 85 1.0 1.2145e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br class="">
MatGetSubMatrice 4 1.0 8.4379e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0<br class="">
MatGetOrdering 85 1.0 7.7887e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0<br class="">
MatAXPY 4 1.0 4.9596e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 5 0 0 0 0 5 0 0 0 0 0<br class="">
MatPtAP 4 1.0 4.4426e-02 1.0 4.99e+06 1.0 0.0e+00 0.0e+00 0.0e+00 4 1 0 0 0 4 1 0 0 0 112<br class="">
MatPtAPSymbolic 4 1.0 2.7664e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0<br class="">
MatPtAPNumeric 4 1.0 1.6732e-02 1.0 4.99e+06 1.0 0.0e+00 0.0e+00 0.0e+00 2 1 0 0 0 2 1 0 0 0 298<br class="">
MatGetSymTrans 4 1.0 3.6621e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br class="">
KSPGMRESOrthog 16 1.0 9.7778e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0<br class="">
KSPSetUp 90 1.0 5.7650e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br class="">
KSPSolve 1 1.0 7.8831e-01 1.0 4.90e+08 1.0 0.0e+00 0.0e+00 0.0e+00 77 99 0 0 0 77 99 0 0 0 622<br class="">
PCSetUp 90 1.0 9.9725e-02 1.0 4.19e+07 1.0 0.0e+00 0.0e+00 0.0e+00 10 8 0 0 0 10 8 0 0 0 420<br class="">
PCSetUpOnBlocks 112 1.0 8.7547e-02 1.0 4.19e+07 1.0 0.0e+00 0.0e+00 0.0e+00 9 8 0 0 0 9 8 0 0 0 479<br class="">
PCApply 16 1.0 7.1952e-01 1.0 4.89e+08 1.0 0.0e+00 0.0e+00 0.0e+00 71 99 0 0 0 71 99 0 0 0 680<br class="">
SNESSolve 1 1.0 7.9225e-01 1.0 4.90e+08 1.0 0.0e+00 0.0e+00 0.0e+00 78 99 0 0 0 78 99 0 0 0 619<br class="">
SNESFunctionEval 2 1.0 3.2940e-03 1.0 4.68e+04 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 14<br class="">
SNESJacobianEval 1 1.0 4.7255e-04 1.0 4.26e+03 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 9<br class="">
------------------------------------------------------------------------------------------------------------------------<br class="">
<br class="">
Memory usage is given in bytes:<br class="">
<br class="">
Object Type Creations Destructions Memory Descendants' Mem.<br class="">
Reports information only for process 0.<br class="">
<br class="">
--- Event Stage 0: Main Stage<br class="">
<br class="">
Vector 971 839 15573352 0.<br class="">
Vector Scatter 290 289 189584 0.<br class="">
Index Set 1171 823 951928 0.<br class="">
IS L to G Mapping 110 109 2156656 0.<br class="">
Application Order 6 6 99952 0.<br class="">
MatMFFD 1 1 776 0.<br class="">
Matrix 189 189 24083332 0.<br class="">
Matrix Null Space 4 4 2432 0.<br class="">
Krylov Solver 90 90 122720 0.<br class="">
DMKSP interface 1 1 648 0.<br class="">
Preconditioner 90 90 89872 0.<br class="">
SNES 1 1 1328 0.<br class="">
SNESLineSearch 1 1 984 0.<br class="">
DMSNES 1 1 664 0.<br class="">
Distributed Mesh 2 2 9168 0.<br class="">
Star Forest Bipartite Graph 4 4 3168 0.<br class="">
Discrete System 2 2 1712 0.<br class="">
Viewer 1 0 0 0.<br class="">
========================================================================================================================<br class="">
Average time to get PetscTime(): 9.53674e-07<br class="">
#PETSc Option Table entries:<br class="">
-ib_ksp_converged_reason<br class="">
-ib_ksp_monitor_true_residual<br class="">
-ib_snes_type ksponly<br class="">
-log_summary<br class="">
-stokes_ib_pc_level_0_sub_pc_factor_nonzeros_along_diagonal<br class="">
-stokes_ib_pc_level_0_sub_pc_type ilu<br class="">
-stokes_ib_pc_level_ksp_richardson_self_scale<br class="">
-stokes_ib_pc_level_ksp_type richardson<br class="">
-stokes_ib_pc_level_pc_asm_local_type additive<br class="">
-stokes_ib_pc_level_sub_pc_factor_nonzeros_along_diagonal<br class="">
-stokes_ib_pc_level_sub_pc_type lu<br class="">
#End of PETSc Option Table entries<br class="">
Compiled without FORTRAN kernels<br class="">
Compiled with full precision matrices (default)<br class="">
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4<br class="">
Configure options: --CC=mpicc --CXX=mpicxx --FC=mpif90 --with-default-arch=0 --PETSC_ARCH=linux-opt --with-debugging=0 --with-c++-support=1 --with-hypre=1 --download-hypre=1 --with-hdf5=yes --COPTFLAGS=-O3 --CXXOPTFLAGS=-O3 --FOPTFLAGS=-O3<br class="">
-----------------------------------------<br class="">
Libraries compiled on Thu Jan 14 01:29:56 2016 on aorta<span class="Apple-converted-space"> </span><br class="">
Machine characteristics: Linux-3.13.0-63-generic-x86_64-with-Ubuntu-14.04-trusty<br class="">
Using PETSc directory: /not_backed_up/amneetb/softwares/PETSc-BitBucket/PETSc<br class="">
Using PETSc arch: linux-opt<br class="">
-----------------------------------------<br class="">
<br class="">
Using C compiler: mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -Qunused-arguments -O3 ${COPTFLAGS} ${CFLAGS}<br class="">
Using Fortran compiler: mpif90 -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3 ${FOPTFLAGS} ${FFLAGS}<span class="Apple-converted-space"> </span><br class="">
-----------------------------------------<br class="">
<br class="">
Using include paths: -I/not_backed_up/amneetb/softwares/PETSc-BitBucket/PETSc/linux-opt/include -I/not_backed_up/amneetb/softwares/PETSc-BitBucket/PETSc/include -I/not_backed_up/amneetb/softwares/PETSc-BitBucket/PETSc/include -I/not_backed_up/amneetb/softwares/PETSc-BitBucket/PETSc/linux-opt/include
-I/not_backed_up/softwares/MPICH/include<br class="">
-----------------------------------------<br class="">
<br class="">
Using C linker: mpicc<br class="">
Using Fortran linker: mpif90<br class="">
Using libraries: -Wl,-rpath,/not_backed_up/amneetb/softwares/PETSc-BitBucket/PETSc/linux-opt/lib -L/not_backed_up/amneetb/softwares/PETSc-BitBucket/PETSc/linux-opt/lib -lpetsc -Wl,-rpath,/not_backed_up/amneetb/softwares/PETSc-BitBucket/PETSc/linux-opt/lib -L/not_backed_up/amneetb/softwares/PETSc-BitBucket/PETSc/linux-opt/lib
-lHYPRE -Wl,-rpath,/not_backed_up/softwares/MPICH/lib -L/not_backed_up/softwares/MPICH/lib -Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.8 -L/usr/lib/gcc/x86_64-linux-gnu/4.8 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu
-L/lib/x86_64-linux-gnu -lmpicxx -lstdc++ -llapack -lblas -lpthread -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lX11 -lm -lmpifort -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpicxx -lstdc++ -Wl,-rpath,/not_backed_up/softwares/MPICH/lib -L/not_backed_up/softwares/MPICH/lib
-Wl,-rpath,/usr/lib/gcc/x86_64-linux-gnu/4.8 -L/usr/lib/gcc/x86_64-linux-gnu/4.8 -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu -Wl,-rpath,/lib/x86_64-linux-gnu -L/lib/x86_64-linux-gnu -Wl,-rpath,/usr/lib/x86_64-linux-gnu -L/usr/lib/x86_64-linux-gnu
-ldl -Wl,-rpath,/not_backed_up/softwares/MPICH/lib -lmpi -lgcc_s -ldl<span class="Apple-converted-space"> </span><br class="">
-----------------------------------------</blockquote>
</blockquote>
</blockquote>
</div>
</blockquote>
</div>
<br class="">
</body>
</html>