[petsc-users] MPI Iterative solver crash on HPC
Sal Am
tempohoper at gmail.com
Fri Jan 11 02:41:05 CST 2019
Thank you Dave,
I reconfigured PETSc in debugging mode for use with valgrind, and ran the code
again with the following options:
mpiexec -n 8 valgrind --tool=memcheck -q --num-callers=20
--log-file=valgrind.log.%p ./solveCSys -malloc off -ksp_type bcgs -pc_type
gamg -log_view
(as described on the PETSc FAQ page you linked)
The iterative solve did finish, but the resulting valgrind.log.%p files (all
8, one per process) are empty. It also took a whopping ~15 hours for what
used to take ~10-20 minutes; maybe that is just valgrind's overhead, I am not
sure. The -log_view output is attached.
On Thu, Jan 10, 2019 at 8:59 AM Dave May <dave.mayhem23 at gmail.com> wrote:
>
>
> On Thu, 10 Jan 2019 at 08:55, Sal Am via petsc-users <
> petsc-users at mcs.anl.gov> wrote:
>
>> I am not sure what exactly is wrong, as the error changes slightly
>> every time I run it (without changing any parameters).
>>
>
> This likely implies that you have a memory error in your code (a memory
> leak would not cause this behaviour).
> I strongly suggest you make sure your code is free of memory errors.
> You can do this using valgrind. See here
>
> https://www.mcs.anl.gov/petsc/documentation/faq.html#valgrind
>
> for an explanation of how to use valgrind.
>
>
>> I have attached the errors from the first two runs and my code.
>>
>> Is there a memory leak somewhere? I have tried running with -malloc_dump,
>> but nothing gets printed out. However, with -log_view I see that a Viewer is
>> created 4 times but destroyed only 3 times. As far as I can tell I destroy
>> each viewer once I no longer have any use for it, so I am not sure where I
>> am going wrong. Could this be the reason why it keeps crashing? It crashes
>> as soon as it reads the matrix, before it ever enters the solve (a print
>> statement placed just before the solve never prints).
>>
>> This is how I run it in the job script, on 2 nodes with 32 processes in
>> total, using the cluster's OpenMPI:
>>
>> mpiexec ./solveCSys -ksp_type bcgs -pc_type gamg -ksp_converged_reason
>> -ksp_monitor_true_residual -log_view -ksp_error_if_not_converged
>> -ksp_monitor -malloc_log -ksp_view
>>
>> the matrix:
>> 25 947 279 x 25 947 279, with 2 122 821 366 non-zero elements
>>
>> Thanks and all the best
>>
>
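On the distinction Dave draws above between a memory error and a memory leak:
to make sure I am looking for the right thing in the valgrind output, here is
a minimal, artificial example of each (it is not from solveCSys, just my
understanding of the two cases). If I have it right, valgrind flags the first
immediately as an invalid write, while the second only shows up in the leak
summary and would not by itself explain a crash that changes between runs:

#include <stdlib.h>

int main(void)
{
  double *a = (double *) malloc(10 * sizeof(double));
  double *b = (double *) malloc(10 * sizeof(double));

  a[10] = 1.0;  /* memory ERROR: out-of-bounds write; valgrind reports an
                   invalid write, and this kind of heap corruption can make
                   the failure differ from run to run */
  b[0]  = 1.0;  /* b is never freed below: a memory LEAK; it shows up in the
                   leak summary but does not corrupt anything by itself */

  free(a);
  return 0;
}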
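Also, regarding the Viewer that is created 4 times but destroyed only 3 times:
for reference, the load/destroy pattern I am aiming for looks roughly like the
sketch below (heavily simplified, with a placeholder file name, and not the
actual solveCSys source). If the one leftover viewer is just something like the
viewer that -log_view itself uses to print the summary, then I suppose it is
harmless, but I will go back over every create/destroy pair to be sure.

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            b;
  PetscViewer    viewer;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;

  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetFromOptions(A);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD, &b);CHKERRQ(ierr);
  ierr = VecSetFromOptions(b);CHKERRQ(ierr);

  /* one binary viewer, used for both loads, destroyed exactly once */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "matrix.dat", FILE_MODE_READ, &viewer);CHKERRQ(ierr);
  ierr = MatLoad(A, viewer);CHKERRQ(ierr);
  ierr = VecLoad(b, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);  /* keeps Viewer creations == destructions */

  /* ... KSP setup and KSPSolve would go here ... */

  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}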
-------------- next part --------------
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
##########################################################
# #
# WARNING!!! #
# #
# This code was compiled with a debugging option. #
# To get timing results run ./configure #
# using --with-debugging=no, the performance will #
# be generally two or three times faster. #
# #
##########################################################
./solveCSys on a linux-cumulus-debug named r02g03 with 8 processors, by vef002 Fri Jan 11 01:58:39 2019
Using Petsc Release Version 3.10.2, unknown
                          Max       Max/Min     Avg        Total
Time (sec):           5.385e+04      1.000   5.385e+04
Objects:              2.880e+02      1.003   2.871e+02
Flop:                 1.192e+12      1.425   9.458e+11   7.567e+12
Flop/sec:             2.214e+07      1.425   1.756e+07   1.405e+08
MPI Messages:         3.749e+05      1.927   3.001e+05   2.401e+06
MPI Message Lengths:  6.445e+09      3.656   1.605e+04   3.853e+10
MPI Reductions:       2.571e+05      1.000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flop
and VecAXPY() for complex vectors of length N --> 8N flop
Summary of Stages:   ----- Time ------  ----- Flop ------   --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total      Count  %Total     Avg        %Total     Count  %Total
 0:      Main Stage: 5.3850e+04 100.0%  7.5667e+12 100.0%   2.401e+06 100.0%  1.605e+04     100.0%  2.571e+05 100.0%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flop: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
AvgLen: average message length (bytes)
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %F - percent flop in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
##########################################################
# #
# WARNING!!! #
# #
# This code was compiled with a debugging option. #
# To get timing results run ./configure #
# using --with-debugging=no, the performance will #
# be generally two or three times faster. #
# #
##########################################################
Event Count Time (sec) Flop --- Global --- --- Stage ---- Total
Max Ratio Max Ratio Max Ratio Mess AvgLen Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
BuildTwoSided 2 1.0 1.0046e-02 1.0 0.00e+00 0.0 4.1e+01 4.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
BuildTwoSidedF 23 1.0 8.2332e+01237.1 0.00e+00 0.0 2.0e+02 4.5e+06 0.0e+00 0 0 0 2 0 0 0 0 2 0 0
VecView 1 1.0 1.7768e-01 1.0 0.00e+00 0.0 7.0e+00 2.8e+05 1.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecDot 4736 1.0 1.5488e+01 1.0 6.61e+08 1.0 0.0e+00 0.0e+00 9.5e+03 0 0 0 0 4 0 0 0 0 4 341
VecDotNorm2 2368 1.0 1.3977e+01 1.0 6.61e+08 1.0 0.0e+00 0.0e+00 4.7e+03 0 0 0 0 2 0 0 0 0 2 378
VecMDot 40 1.0 6.6729e+0011.1 1.54e+07 1.0 0.0e+00 0.0e+00 8.0e+01 0 0 0 0 0 0 0 0 0 0 18
VecNorm 2413 1.0 8.1465e+00 1.0 3.34e+08 1.0 0.0e+00 0.0e+00 4.8e+03 0 0 0 0 2 0 0 0 0 2 328
VecScale 44 1.0 4.3048e-02 1.0 1.54e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 287
VecCopy 9480 1.0 1.7119e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 42668 1.0 2.5871e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 4 1.0 9.8097e-03 1.0 2.81e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 229
VecAYPX 75792 1.0 5.5978e+01 1.0 3.32e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 475
VecAXPBYCZ 42632 1.0 1.5645e+02 1.0 7.97e+09 1.0 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 407
VecWAXPY 4736 1.0 1.3468e+01 1.0 6.61e+08 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 393
VecMAXPY 44 1.0 3.4021e-01 1.0 1.82e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 429
VecAssemblyBegin 10 1.0 2.0956e-01 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+01 0 0 0 0 0 0 0 0 0 0 0
VecAssemblyEnd 10 1.0 1.2929e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecPointwiseMult 22 1.0 3.1633e-02 1.0 7.72e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 195
VecLoad 1 1.0 3.6724e-01 1.0 0.00e+00 0.0 7.0e+00 2.8e+05 1.1e+01 0 0 0 0 0 0 0 0 0 0 0
VecScatterBegin 80580 1.0 2.8643e+01 2.3 0.00e+00 0.0 2.4e+06 1.6e+04 0.0e+00 0 0100 97 0 0 0100 97 0 0
VecScatterEnd 80580 1.0 6.2659e+03270.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0
VecSetRandom 2 1.0 1.7043e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 44 1.0 4.3032e-01 3.1 4.63e+06 1.0 0.0e+00 0.0e+00 8.8e+01 0 0 0 0 0 0 0 0 0 0 86
MatMult 61620 1.0 2.9162e+04 1.3 6.48e+11 1.4 2.1e+06 1.8e+04 0.0e+00 46 56 87 96 0 46 56 87 96 0 145
MatMultAdd 9474 1.0 1.5750e+02 1.2 3.28e+09 1.3 1.6e+05 4.4e+02 0.0e+00 0 0 7 0 0 0 0 7 0 0 152
MatMultTranspose 9474 1.0 1.1072e+02 1.1 3.28e+09 1.3 1.6e+05 4.4e+02 9.5e+03 0 0 7 0 4 0 0 7 0 4 216
MatSolve 4737 0.0 2.3544e+00 0.0 2.62e+07 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 11
MatSOR 56866 1.0 2.3703e+04 1.4 5.21e+11 1.6 0.0e+00 0.0e+00 0.0e+00 35 42 0 0 0 35 42 0 0 0 133
MatLUFactorSym 1 1.0 2.6810e-02 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatLUFactorNum 1 1.0 4.8278e-02 8.7 4.67e+04 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1
MatConvert 2 1.0 1.2898e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatScale 6 1.0 7.0816e-01 1.2 2.02e+07 1.3 7.0e+01 1.6e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 188
MatResidual 9474 1.0 4.1742e+03 1.0 9.29e+10 1.3 3.3e+05 1.6e+04 9.5e+03 8 8 14 14 4 8 8 14 14 4 145
MatAssemblyBegin 47 1.0 1.4260e+02 1.4 0.00e+00 0.0 2.0e+02 4.5e+06 2.8e+01 0 0 0 2 0 0 0 0 2 0 0
MatAssemblyEnd 47 1.0 6.1052e+01 1.1 0.00e+00 0.0 6.9e+02 1.8e+03 2.8e+02 0 0 0 0 0 0 0 0 0 0 0
MatGetRowIJ 1 0.0 8.9049e-03 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatCreateSubMat 2 1.0 1.2997e-01 1.0 0.00e+00 0.0 4.9e+01 2.9e+02 9.0e+01 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 0.0 4.1870e-02 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatCoarsen 2 1.0 8.3524e-01 1.1 0.00e+00 0.0 7.6e+02 8.7e+03 2.4e+01 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 3 1.0 3.3310e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatLoad 1 1.0 2.3003e+01 1.0 0.00e+00 0.0 6.1e+01 4.4e+06 4.1e+01 0 0 0 1 0 0 0 0 1 0 0
MatAXPY 2 1.0 5.8209e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatTranspose 4 1.0 4.3042e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatMatMult 2 1.0 4.1555e+00 1.0 1.96e+07 1.3 4.3e+02 6.5e+03 6.6e+01 0 0 0 0 0 0 0 0 0 0 31
MatMatMultSym 2 1.0 2.1865e+00 1.0 0.00e+00 0.0 3.6e+02 4.7e+03 6.2e+01 0 0 0 0 0 0 0 0 0 0 0
MatMatMultNum 2 1.0 1.9642e+00 1.0 1.96e+07 1.3 7.0e+01 1.6e+04 4.0e+00 0 0 0 0 0 0 0 0 0 0 65
MatPtAP 2 1.0 7.6737e+00 1.0 1.08e+08 1.3 7.6e+02 1.2e+04 8.0e+01 0 0 0 0 0 0 0 0 0 0 102
MatPtAPSymbolic 2 1.0 3.8572e+00 1.0 0.00e+00 0.0 4.4e+02 1.8e+04 3.0e+01 0 0 0 0 0 0 0 0 0 0 0
MatPtAPNumeric 2 1.0 3.8076e+00 1.0 1.08e+08 1.3 3.2e+02 3.3e+03 5.0e+01 0 0 0 0 0 0 0 0 0 0 206
MatTrnMatMult 1 1.0 3.2146e+02 1.0 3.78e+09 1.6 1.5e+02 6.7e+06 4.2e+01 1 0 0 3 0 1 0 0 3 0 69
MatTrnMatMultSym 1 1.0 7.1323e+01 1.0 0.00e+00 0.0 6.0e+01 2.5e+06 1.7e+01 0 0 0 0 0 0 0 0 0 0 0
MatTrnMatMultNum 1 1.0 2.5014e+02 1.0 3.78e+09 1.6 9.4e+01 9.4e+06 2.5e+01 0 0 0 2 0 0 0 0 2 0 89
MatGetLocalMat 7 1.0 5.9865e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetBrAoCol 6 1.0 4.1407e-01 1.3 0.00e+00 0.0 4.9e+02 2.2e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatTranspose_SeqAIJ_FAST 4 1.0 4.0492e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSetUp 9 1.0 5.9289e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.5e+01 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 1 1.0 5.3465e+04 1.0 1.19e+12 1.4 2.4e+06 1.6e+04 2.6e+05 99100100 96100 99100100 96100 141
KSPGMRESOrthog 40 1.0 7.0008e+00 7.5 3.09e+07 1.0 0.0e+00 0.0e+00 3.0e+02 0 0 0 0 0 0 0 0 0 0 35
PCGAMGGraph_AGG 2 1.0 1.3118e+01 1.0 1.96e+07 1.3 2.1e+02 6.7e+03 1.0e+02 0 0 0 0 0 0 0 0 0 0 10
PCGAMGCoarse_AGG 2 1.0 3.2407e+02 1.0 3.78e+09 1.6 1.1e+03 9.9e+05 8.4e+01 1 0 0 3 0 1 0 0 3 0 68
PCGAMGProl_AGG 2 1.0 8.8382e-01 1.8 0.00e+00 0.0 2.9e+02 2.1e+04 9.8e+01 0 0 0 0 0 0 0 0 0 0 0
PCGAMGPOpt_AGG 2 1.0 1.4734e+01 1.0 2.36e+08 1.3 1.1e+03 1.2e+04 3.8e+02 0 0 0 0 0 0 0 0 0 0 107
GAMG: createProl 2 1.0 3.5259e+02 1.0 4.04e+09 1.6 2.7e+03 4.0e+05 6.6e+02 1 0 0 3 0 1 0 0 3 0 68
Graph 4 1.0 1.3072e+01 1.0 1.96e+07 1.3 2.1e+02 6.7e+03 1.0e+02 0 0 0 0 0 0 0 0 0 0 10
MIS/Agg 2 1.0 8.6046e-01 1.1 0.00e+00 0.0 7.6e+02 8.7e+03 2.4e+01 0 0 0 0 0 0 0 0 0 0 0
SA: col data 2 1.0 2.2417e-01 1.0 0.00e+00 0.0 1.6e+02 3.2e+04 2.8e+01 0 0 0 0 0 0 0 0 0 0 0
SA: frmProl0 2 1.0 2.3013e-01 1.0 0.00e+00 0.0 1.3e+02 5.6e+03 5.0e+01 0 0 0 0 0 0 0 0 0 0 0
SA: smooth 2 1.0 4.8317e+00 1.0 2.02e+07 1.3 4.3e+02 6.5e+03 8.8e+01 0 0 0 0 0 0 0 0 0 0 28
GAMG: partLevel 2 1.0 7.9183e+00 1.0 1.08e+08 1.3 8.3e+02 1.1e+04 2.1e+02 0 0 0 0 0 0 0 0 0 0 99
repartition 1 1.0 1.6927e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.3e+01 0 0 0 0 0 0 0 0 0 0 0
Invert-Sort 1 1.0 4.2589e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 7.0e+00 0 0 0 0 0 0 0 0 0 0 0
Move A 1 1.0 9.4407e-02 1.0 0.00e+00 0.0 3.5e+01 4.0e+02 4.7e+01 0 0 0 0 0 0 0 0 0 0 0
Move P 1 1.0 5.1312e-02 1.0 0.00e+00 0.0 1.4e+01 4.0e+01 4.9e+01 0 0 0 0 0 0 0 0 0 0 0
PCSetUp 2 1.0 3.6084e+02 1.0 4.14e+09 1.6 3.5e+03 3.1e+05 9.8e+02 1 0 0 3 0 1 0 0 3 0 68
PCSetUpOnBlocks 4737 1.0 3.5424e-01 1.4 4.67e+04 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
PCApply 4737 1.0 5.0190e+04 1.0 1.09e+12 1.4 2.3e+06 1.4e+04 2.2e+05 93 91 96 83 85 93 91 96 83 85 138
SFSetGraph 2 1.0 8.9202e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFSetUp 2 1.0 3.6647e-01 1.1 0.00e+00 0.0 1.2e+02 5.4e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFBcastBegin 16 1.0 4.8668e-02 1.6 0.00e+00 0.0 6.4e+02 9.3e+03 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SFBcastEnd 16 1.0 1.1840e-01 2.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type           Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
              Viewer       4              3           2544       0.
              Vector     135            135       16274000       0.
              Matrix      68             68      674831928       0.
      Matrix Coarsen       2              2           1288       0.
           Index Set      42             42         117744       0.
         Vec Scatter      15             15          20168       0.
       Krylov Solver       9              9         235736       0.
      Preconditioner       7              7           7236       0.
         PetscRandom       4              4           2680       0.
   Star Forest Graph       2              2           1760       0.
========================================================================================================================
Average time to get PetscTime(): 7.14209e-05
Average time for MPI_Barrier(): 0.000216455
Average time for zero size MPI_Send(): 0.000925563
#PETSc Option Table entries:
-ksp_type bcgs
-log_view
-malloc off
-pc_type gamg
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 16 sizeof(PetscInt) 4
Configure options: PETSC_ARCH=linux-cumulus-debug --with-cc=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicc --with-fc=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpifort --with-cxx=/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicxx --download-parmetis --download-metis --download-ptscotch --download-superlu_dist --download-mumps --with-scalar-type=complex --with-debugging=yes --download-scalapack --download-superlu --download-fblaslapack=1 --download-cmake
-----------------------------------------
Libraries compiled on 2019-01-10 10:35:56 on r02g03
Machine characteristics: Linux-3.10.0-514.2.2.el7.x86_64-x86_64-with-centos-7.3.1611-Core
Using PETSc directory: /home/vef002/petsc
Using PETSc arch: linux-cumulus-debug
-----------------------------------------
Using C compiler: /usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicc -fPIC -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -fstack-protector -fvisibility=hidden -g3
Using Fortran compiler: /usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpifort -fPIC -Wall -ffree-line-length-0 -Wno-unused-dummy-argument -g
-----------------------------------------
Using include paths: -I/home/vef002/petsc/include -I/home/vef002/petsc/linux-cumulus-debug/include
-----------------------------------------
Using C linker: /usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpicc
Using Fortran linker: /usr/local/depot/openmpi-3.1.1-gcc-7.3.0/bin/mpifort
Using libraries: -Wl,-rpath,/home/vef002/petsc/linux-cumulus-debug/lib -L/home/vef002/petsc/linux-cumulus-debug/lib -lpetsc -Wl,-rpath,/home/vef002/petsc/linux-cumulus-debug/lib -L/home/vef002/petsc/linux-cumulus-debug/lib -Wl,-rpath,/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/lib -L/usr/local/depot/openmpi-3.1.1-gcc-7.3.0/lib -Wl,-rpath,/usr/local/depot/gcc-7.3.0/lib/gcc/x86_64-pc-linux-gnu/7.3.0 -L/usr/local/depot/gcc-7.3.0/lib/gcc/x86_64-pc-linux-gnu/7.3.0 -Wl,-rpath,/usr/local/depot/gcc-7.3.0/lib64 -L/usr/local/depot/gcc-7.3.0/lib64 -Wl,-rpath,/usr/local/depot/gcc-7.3.0/lib -L/usr/local/depot/gcc-7.3.0/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu -lsuperlu_dist -lflapack -lfblas -lparmetis -lmetis -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lm -lX11 -lstdc++ -ldl -lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lmpi -lgfortran -lm -lgfortran -lm -lgcc_s -lquadmath -lpthread -lrt -lm -lpthread -lz -lstdc++ -ldl
-----------------------------------------
##########################################################
# #
# WARNING!!! #
# #
# This code was compiled with a debugging option. #
# To get timing results run ./configure #
# using --with-debugging=no, the performance will #
# be generally two or three times faster. #
# #
##########################################################