[petsc-users] Guidance on GAMG preconditioning
Justin Chang
jychang48 at gmail.com
Thu Jun 4 16:47:06 CDT 2015
Thank you Matt and Mark for the clarification. Matt, if you recall our
discussion from the earlier threads about calculating the arithmetic
intensity, GAMG introduces a number of additional vector and matrix
operations that were not present in the CG/Jacobi case. Running with the
command-line options you and Mark suggested, I now have these additional
operations to deal with:
VecMDot
VecAXPBYCZ
VecMAXPY
VecSetRandom
VecNormalize
MatMultAdd
MatMultTranspose
MatSolve
MatConvert
MatScale
MatResidual
MatCoarsen
MatAXPY
MatMatMult
MatMatMultSym
MatMatMultNum
MatPtAP
MatPtAPSymbolic
MatPtAPNumeric
MatTrnMatMult
MatTrnMatMultSym
MatTrnMatMultNum
MatGetSymTrans
KSPGMRESOrthog
PCGAMGGraph_AGG
PCGAMGCoarse_AGG
PCGAMGProl_AGG
PCGAMGPOpt_AGG
GAMG: createProl and all of its associated events.
GAMG: partLevel
PCSetUpOnBlocks
Attached is the output from -log_summary showing the exact counts for the
case I am running.
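For reference, the GAMG-related options for this run (copied from the
option table at the bottom of the attached log) were:

    -ksp_type cg -pc_type gamg -pc_gamg_agg_nsmooths 1 -pc_gamg_threshold 0.02 \
    -mg_levels_ksp_type chebyshev -mg_levels_ksp_max_it 2 -mg_levels_pc_type jacobi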
I have the following questions:
1) For the Vec operations VecMDot and VecMAXPY, the estimate of total
bytes transferred (TBT) depends on knowing how many vectors are involved.
Is there a way to figure this out, or, at least with GAMG, would it simply
be three vectors? (I have put my current back-of-the-envelope for questions
1 through 4 right after this list.)
2) There seem to be a lot of matrix manipulations and multiplications. Is
it safe to assume the matrix sizes and nonzero counts stay the same across
these operations, or will they change?
3) If I follow the TBT tabulation from the paper you pointed me to, would
MatMultTranspose follow the same formula as MatMult when the Jacobian is
symmetric?
4) How do I estimate TBT for operations that multiply two (or more)
matrices together, such as MatMatMult and MatPtAP?
5) More importantly, are any of the above calculations necessary?
-log_summary seems to indicate that MatMult() accounts for the largest
share of the work and the most calls; my only hesitation is how much
traffic the MatMatMult-type operations may generate (if I go off the same
assumptions as in that paper).
6) Are there any other functions I have missed that would be important to
account for as well?
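To spell out what I currently have in mind (all of this is back-of-the-envelope
on my part, so please correct anything that is off): for 1), assuming 8-byte
scalars and that every vector involved is streamed from memory exactly once, I
was going to use

    VecMDot  (k vectors against x, local length N): ~ (k + 1) * N * 8 bytes   (read k vectors and x)
    VecMAXPY (k vectors added into y, length N):    ~ (k + 2) * N * 8 bytes   (read k vectors and y, write y)

Since the logged flops for VecMDot should be roughly 2*k*N per call, I was
hoping the average k could be backed out of the flop counts in -log_summary.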
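For 2), rather than trying to reason out how the sizes and nonzero counts
evolve, I was planning to simply query the per-level operators after setup
and read the counts off. A minimal sketch of what I had in mind (my own
helper, not anything in PETSc; assuming the PCMG accessors apply to PCGAMG
and that pc is the preconditioner from the solve):

    #include <petscksp.h>

    /* My sketch: print the global nonzero count of the operator on each
       multigrid level, so byte estimates for MatMult/MatMatMult/MatPtAP
       can use measured nnz rather than hand-derived values.  Call after
       KSPSetUp() or KSPSolve(). */
    static PetscErrorCode PrintLevelNonzeros(PC pc)
    {
      PetscInt       nlevels, l;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = PCMGGetLevels(pc, &nlevels);CHKERRQ(ierr);
      for (l = 0; l < nlevels; l++) {
        KSP     smoother;
        Mat     A;
        MatInfo info;

        ierr = PCMGGetSmoother(pc, l, &smoother);CHKERRQ(ierr);
        ierr = KSPGetOperators(smoother, &A, NULL);CHKERRQ(ierr);
        ierr = MatGetInfo(A, MAT_GLOBAL_SUM, &info);CHKERRQ(ierr);
        ierr = PetscPrintf(PetscObjectComm((PetscObject)pc),
                           "level %D: nonzeros %g\n", l, (double)info.nz_used);CHKERRQ(ierr);
      }
      PetscFunctionReturn(0);
    }

If I remember the PCMG convention correctly, level 0 here is the coarsest
grid, so the counts should grow with the level number.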
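For 3), my reading of that tabulation, assuming AIJ (CSR) storage with
8-byte scalars and 4-byte indices (which matches the sizeof output in the
attached log), is roughly

    MatMult, m x n matrix with nz nonzeros:
      ~ nz * (8 + 4)      bytes for the values and column indices
      + (m + 1) * 4       bytes for the row offsets
      + n * 8 + m * 8     bytes to read x and write y (assuming perfect reuse of x)

My guess is that MatMultTranspose streams the same CSR arrays once, so the
same estimate would apply whether or not the matrix is symmetric, but please
correct me if the transpose kernel moves more data.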
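For 4), the best I have come up with is a crude lower bound that just
streams each operand and the product once, i.e. for C = A*B something like

    TBT >= [nz(A) + nz(B) + nz(C)] * (8 + 4) bytes

with the nonzero counts taken from the actual matrices (e.g. via the
MatGetInfo sketch above) rather than derived by hand. I realize the
symbolic phase and any intermediate workspace could make the real traffic
quite a bit larger, so I am not sure how tight this is.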
Thanks,
Justin
On Thu, Jun 4, 2015 at 11:33 AM, Mark Adams <mfadams at lbl.gov> wrote:
>
>
> On Thu, Jun 4, 2015 at 12:29 PM, Matthew Knepley <knepley at gmail.com>
> wrote:
>
>> On Thu, Jun 4, 2015 at 10:31 AM, Justin Chang <jychang48 at gmail.com>
>> wrote:
>>
>>> Yeah I saw his recommendation and am trying it out. But I am not sure
>>> what most of those parameters mean. For instance:
>>>
>>> 1) What does -pc_gamg_agg_nsmooths refer to?
>>>
>>
>> This is always 1 (it's the definition of smoothed aggregation). Mark
>> allows 0 to support unsmoothed aggregation, which may be
>> better for easy problems on extremely large machines.
>>
>>
>>> 2) Does increasing the threshold in -pc_gamg_threshold translate to
>>> faster coarsening?
>>>
>>
>> Yes, I believe so (easy to check).
>>
>
> Other way around.
>
-------------- next part --------------
==========================================
1 processors:
==========================================
TSTEP ANALYSIS TIME ITER FLOPS/s
Linear solve converged due to CONVERGED_RTOL iterations 31
1 2.313901e+00 31 3.629168e+08
==========================================
Time summary:
==========================================
Creating DMPlex: 0.212745s
Distributing DMPlex: 0.000274897s
Refining DMPlex: 1.1645s
Setting up problem: 0.960611s
Overall analysis time: 2.39205s
Overall FLOPS/s: 2.60206e+08
************************************************************************************************************************
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document ***
************************************************************************************************************************
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
./main on a arch-linux2-c-opt named compute-0-18.local with 1 processor, by jchang23 Thu Jun 4 16:26:23 2015
Using Petsc Development GIT revision: v3.5.4-3996-gc7ab56a GIT Date: 2015-06-04 06:26:21 -0500
Max Max/Min Avg Total
Time (sec): 4.735e+00 1.00000 4.735e+00
Objects: 5.320e+02 1.00000 5.320e+02
Flops: 8.524e+08 1.00000 8.524e+08 8.524e+08
Flops/sec: 1.800e+08 1.00000 1.800e+08 1.800e+08
MPI Messages: 5.500e+00 1.00000 5.500e+00 5.500e+00
MPI Message Lengths: 2.218e+06 1.00000 4.032e+05 2.218e+06
MPI Reductions: 1.000e+00 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 4.7345e+00 100.0% 8.5241e+08 100.0% 5.500e+00 100.0% 4.032e+05 100.0% 1.000e+00 100.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %F - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
CreateMesh 1 1.0 1.3775e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 29 0 0 0 0 29 0 0 0 0 0
BuildTwoSided 5 1.0 2.0537e-03 1.0 0.00e+00 0.0 5.0e-01 4.0e+00 0.0e+00 0 0 9 0 0 0 0 9 0 0 0
VecView 1 1.0 1.3811e-02 1.0 3.62e+05 1.0 1.0e+00 4.9e+05 0.0e+00 0 0 18 22 0 0 0 18 22 0 26
VecMDot 112 1.0 2.6209e-03 1.0 1.43e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 5464
VecTDot 62 1.0 3.2454e-03 1.0 7.64e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 2354
VecNorm 184 1.0 2.8832e-03 1.0 6.81e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 2362
VecScale 152 1.0 5.1141e-04 1.0 1.43e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2804
VecCopy 171 1.0 1.4133e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 658 1.0 4.2362e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
VecAXPY 106 1.0 2.6708e-03 1.0 8.03e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 3007
VecAYPX 1054 1.0 1.1199e-02 1.0 2.45e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 2190
VecAXPBYCZ 512 1.0 6.9575e-03 1.0 4.17e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 5 0 0 0 0 5 0 0 0 5987
VecWAXPY 1 1.0 1.0610e-04 1.0 6.16e+04 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 581
VecMAXPY 152 1.0 2.9120e-03 1.0 1.69e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 5813
VecAssemblyBegin 4 1.0 9.5367e-07 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAssemblyEnd 4 1.0 0.0000e+00 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecPointwiseMult 856 1.0 1.0922e-02 1.0 1.39e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 1275
VecSetRandom 4 1.0 6.6662e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 152 1.0 1.7796e-03 1.0 4.30e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 2417
MatMult 915 1.0 3.2228e-01 1.0 5.27e+08 1.0 0.0e+00 0.0e+00 0.0e+00 7 62 0 0 0 7 62 0 0 0 1637
MatMultAdd 128 1.0 2.0048e-02 1.0 2.06e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 1028
MatMultTranspose 128 1.0 2.2771e-02 1.0 2.06e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 0 2 0 0 0 906
MatSolve 64 1.0 9.5367e-05 1.0 1.13e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1188
MatLUFactorSym 1 1.0 3.5048e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatLUFactorNum 1 1.0 2.0981e-05 1.0 1.76e+04 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 837
MatConvert 4 1.0 1.0548e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatScale 12 1.0 2.8198e-03 1.0 2.94e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1043
MatResidual 128 1.0 4.4002e-02 1.0 7.35e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 9 0 0 0 1 9 0 0 0 1670
MatAssemblyBegin 33 1.0 5.7220e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 33 1.0 1.8984e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetRow 260352 1.0 1.4122e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 1.0 5.0068e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 4.1962e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatCoarsen 4 1.0 3.9771e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 1 1.0 1.3030e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAXPY 4 1.0 1.1571e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatMatMult 4 1.0 2.1907e-02 1.0 2.62e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 120
MatMatMultSym 4 1.0 1.5245e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatMatMultNum 4 1.0 6.6392e-03 1.0 2.62e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 394
MatPtAP 4 1.0 2.0476e-01 1.0 4.64e+07 1.0 0.0e+00 0.0e+00 0.0e+00 4 5 0 0 0 4 5 0 0 0 227
MatPtAPSymbolic 4 1.0 7.4399e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 0
MatPtAPNumeric 4 1.0 1.3035e-01 1.0 4.64e+07 1.0 0.0e+00 0.0e+00 0.0e+00 3 5 0 0 0 3 5 0 0 0 356
MatTrnMatMult 1 1.0 3.2565e-01 1.0 2.06e+07 1.0 0.0e+00 0.0e+00 0.0e+00 7 2 0 0 0 7 2 0 0 0 63
MatTrnMatMultSym 1 1.0 1.7446e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 4 0 0 0 0 4 0 0 0 0 0
MatTrnMatMultNum 1 1.0 1.5119e-01 1.0 2.06e+07 1.0 0.0e+00 0.0e+00 0.0e+00 3 2 0 0 0 3 2 0 0 0 136
MatGetSymTrans 5 1.0 5.9624e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
DMPlexInterp 3 1.0 1.8401e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 4 0 0 0 0 4 0 0 0 0 0
DMPlexStratify 11 1.0 3.1623e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 7 0 0 0 0 7 0 0 0 0 0
DMPlexPrealloc 1 1.0 5.1204e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 11 0 0 0 0 11 0 0 0 0 0
DMPlexResidualFE 1 1.0 3.5016e-01 1.0 2.09e+07 1.0 0.0e+00 0.0e+00 0.0e+00 7 2 0 0 0 7 2 0 0 0 60
DMPlexJacobianFE 1 1.0 8.6178e-01 1.0 4.22e+07 1.0 0.0e+00 0.0e+00 0.0e+00 18 5 0 0 0 18 5 0 0 0 49
SFSetGraph 6 1.0 1.0118e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFBcastBegin 9 1.0 3.4070e-03 1.0 0.00e+00 0.0 4.5e+00 3.8e+05 0.0e+00 0 0 82 78 0 0 0 82 78 0 0
SFBcastEnd 9 1.0 4.9067e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFReduceBegin 1 1.0 2.2292e-04 1.0 0.00e+00 0.0 1.0e+00 4.9e+05 0.0e+00 0 0 18 22 0 0 0 18 22 0 0
SFReduceEnd 1 1.0 1.5783e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SNESFunctionEval 1 1.0 3.5272e-01 1.0 2.09e+07 1.0 2.0e+00 4.9e+05 0.0e+00 7 2 36 44 0 7 2 36 44 0 59
SNESJacobianEval 1 1.0 8.6338e-01 1.0 4.22e+07 1.0 2.5e+00 3.0e+05 0.0e+00 18 5 45 33 0 18 5 45 33 0 49
KSPGMRESOrthog 112 1.0 5.1789e-03 1.0 2.86e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 3 0 0 0 0 3 0 0 0 5531
KSPSetUp 15 1.0 3.0289e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 1 1.0 1.0976e+00 1.0 7.77e+08 1.0 0.0e+00 0.0e+00 0.0e+00 23 91 0 0 0 23 91 0 0 0 708
PCGAMGGraph_AGG 4 1.0 7.4403e-02 1.0 2.30e+06 1.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 31
PCGAMGCoarse_AGG 4 1.0 3.3554e-01 1.0 2.06e+07 1.0 0.0e+00 0.0e+00 0.0e+00 7 2 0 0 0 7 2 0 0 0 61
PCGAMGProl_AGG 4 1.0 7.1132e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
PCGAMGPOpt_AGG 4 1.0 6.7413e-02 1.0 4.42e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 5 0 0 0 1 5 0 0 0 656
GAMG: createProl 4 1.0 4.8507e-01 1.0 6.71e+07 1.0 0.0e+00 0.0e+00 0.0e+00 10 8 0 0 0 10 8 0 0 0 138
Graph 8 1.0 7.4155e-02 1.0 2.30e+06 1.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 2 0 0 0 0 31
MIS/Agg 4 1.0 4.0429e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SA: col data 4 1.0 1.5974e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SA: frmProl0 4 1.0 6.3415e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SA: smooth 4 1.0 6.7411e-02 1.0 4.42e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 5 0 0 0 1 5 0 0 0 656
GAMG: partLevel 4 1.0 2.0478e-01 1.0 4.64e+07 1.0 0.0e+00 0.0e+00 0.0e+00 4 5 0 0 0 4 5 0 0 0 227
PCSetUp 2 1.0 6.9165e-01 1.0 1.14e+08 1.0 0.0e+00 0.0e+00 0.0e+00 15 13 0 0 0 15 13 0 0 0 164
PCSetUpOnBlocks 32 1.0 1.4734e-04 1.0 1.76e+04 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 119
PCApply 32 1.0 3.6208e-01 1.0 5.88e+08 1.0 0.0e+00 0.0e+00 0.0e+00 8 69 0 0 0 8 69 0 0 0 1625
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Viewer 4 3 2264 0
Object 7 7 4032 0
Container 7 7 3976 0
Vector 207 207 129392344 0
Matrix 24 24 43743284 0
Matrix Coarsen 4 4 2512 0
Distributed Mesh 28 28 129704 0
GraphPartitioner 11 11 6644 0
Star Forest Bipartite Graph 60 60 48392 0
Discrete System 28 28 23744 0
Index Set 47 47 9592920 0
IS L to G Mapping 1 1 302332 0
Section 61 61 40504 0
SNES 1 1 1332 0
SNESLineSearch 1 1 864 0
DMSNES 1 1 664 0
Krylov Solver 15 15 267352 0
Preconditioner 15 15 14740 0
Linear Space 2 2 1280 0
Dual Space 2 2 1312 0
FE Space 2 2 1496 0
PetscRandom 4 4 2496 0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
#PETSc Option Table entries:
-al 1
-am 0
-at 0.001
-bcloc 0,1,0,1,0,0,0,1,0,1,1,1,0,0,0,1,0,1,1,1,0,1,0,1,0,1,0,0,0,1,0,1,1,1,0,1,0.45,0.55,0.45,0.55,0.45,0.55
-bcnum 7
-bcval 0,0,0,0,0,0,1
-dim 3
-dm_refine 1
-dt 0.001
-edges 3,3
-floc 0.25,0.75,0.25,0.75,0.25,0.75
-fnum 0
-ftime 0,99
-fval 1
-ksp_atol 1e-8
-ksp_converged_reason
-ksp_max_it 50000
-ksp_rtol 1e-8
-ksp_type cg
-log_summary
-lower 0,0
-mat_petscspace_order 0
-mesh cube_with_hole3_mesh.dat
-mg_levels_ksp_max_it 2
-mg_levels_ksp_type chebyshev
-mg_levels_pc_type jacobi
-mu 1
-nonneg 0
-numsteps 0
-options_left 0
-pc_gamg_agg_nsmooths 1
-pc_gamg_threshold 0.02
-pc_type gamg
-petscpartitioner_type parmetis
-progress 0
-simplex 1
-solution_petscspace_order 1
-tao_fatol 1e-8
-tao_frtol 1e-8
-tao_max_it 50000
-tao_type blmvm
-trans cube_with_hole3_trans.dat
-upper 1,1
-vtuname figure_cube_with_hole_3
-vtuprint 1
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --download-chaco --download-ctetgen --download-fblaslapack --download-hdf5 --download-metis --download-parmetis --download-triangle --with-cc=mpicc --with-cmake=cmake --with-cxx=mpicxx --with-debugging=0 --with-fc=mpif90 --with-mpiexec=mpiexec --with-valgrind=1 CFLAGS= COPTFLAGS=-O3 CXXFLAGS= CXXOPTFLAGS=-O3 FOPTFLAGS=-O3 PETSC_ARCH=arch-linux2-c-opt
-----------------------------------------
Libraries compiled on Thu Jun 4 06:27:39 2015 on compute-2-42.local
Machine characteristics: Linux-2.6.32-504.1.3.el6.x86_64-x86_64-with-redhat-6.6-Santiago
Using PETSc directory: /home/jchang23/petsc
Using PETSc arch: arch-linux2-c-opt
-----------------------------------------
Using C compiler: mpicc -fPIC -O3 ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: mpif90 -fPIC -Wall -Wno-unused-variable -ffree-line-length-0 -Wno-unused-dummy-argument -O3 ${FOPTFLAGS} ${FFLAGS}
-----------------------------------------
Using include paths: -I/home/jchang23/petsc/arch-linux2-c-opt/include -I/home/jchang23/petsc/include -I/home/jchang23/petsc/include -I/home/jchang23/petsc/arch-linux2-c-opt/include -I/share/apps/openmpi-1.8.3/include
-----------------------------------------
Using C linker: mpicc
Using Fortran linker: mpif90
Using libraries: -Wl,-rpath,/home/jchang23/petsc/arch-linux2-c-opt/lib -L/home/jchang23/petsc/arch-linux2-c-opt/lib -lpetsc -Wl,-rpath,/home/jchang23/petsc/arch-linux2-c-opt/lib -L/home/jchang23/petsc/arch-linux2-c-opt/lib -lflapack -lfblas -lparmetis -ltriangle -lmetis -lctetgen -lX11 -lhdf5hl_fortran -lhdf5_fortran -lhdf5_hl -lhdf5 -lchaco -Wl,-rpath,/share/apps/openmpi-1.8.3/lib -L/share/apps/openmpi-1.8.3/lib -Wl,-rpath,/share/apps/gcc-4.9.2/lib/gcc/x86_64-unknown-linux-gnu/4.9.2 -L/share/apps/gcc-4.9.2/lib/gcc/x86_64-unknown-linux-gnu/4.9.2 -Wl,-rpath,/share/apps/gcc-4.9.2/lib64 -L/share/apps/gcc-4.9.2/lib64 -Wl,-rpath,/share/apps/gcc-4.9.2/lib -L/share/apps/gcc-4.9.2/lib -lmpi_usempi -lmpi_mpifh -lgfortran -lm -lgfortran -lm -lquadmath -lm -lmpi_cxx -lstdc++ -Wl,-rpath,/share/apps/openmpi-1.8.3/lib -L/share/apps/openmpi-1.8.3/lib -Wl,-rpath,/share/apps/gcc-4.9.2/lib/gcc/x86_64-unknown-linux-gnu/4.9.2 -L/share/apps/gcc-4.9.2/lib/gcc/x86_64-unknown-linux-gnu/4.9.2 -Wl,-rpath,/share/apps/gcc-4.9.2/lib64 -L/share/apps/gcc-4.9.2/lib64 -Wl,-rpath,/share/apps/gcc-4.9.2/lib -L/share/apps/gcc-4.9.2/lib -ldl -Wl,-rpath,/share/apps/openmpi-1.8.3/lib -lmpi -lgcc_s -lpthread -ldl
-----------------------------------------