[petsc-users] Scaling with number of cores

TAY wee-beng zonexo at gmail.com
Sun Nov 1 07:30:50 CST 2015


On 1/11/2015 10:00 AM, Barry Smith wrote:
>> On Oct 31, 2015, at 8:43 PM, TAY wee-beng <zonexo at gmail.com> wrote:
>>
>>
>> On 1/11/2015 12:47 AM, Matthew Knepley wrote:
>>> On Sat, Oct 31, 2015 at 11:34 AM, TAY wee-beng <zonexo at gmail.com> wrote:
>>> Hi,
>>>
>>> I understand that, as mentioned in the FAQ, the scaling is not linear due to memory limitations. So I am trying to write a proposal to use a supercomputer.
>>> Its specs are:
>>> Compute nodes: 82,944 nodes (SPARC64 VIIIfx; 16GB of memory per node)
>>>
>>> 8 cores / processor
>>> Interconnect: Tofu (6-dimensional mesh/torus) Interconnect
>>> Each cabinet contains 96 computing nodes,
>>> One of the requirements is to report the performance of my current code with my current data set, and there is a formula to estimate the parallel efficiency when using the new, larger data set.
>>> There are 2 ways to give performance:
>>> 1. Strong scaling, which is defined as how the elapsed time varies with the number of processors for a fixed
>>> problem.
>>> 2. Weak scaling, which is defined as how the elapsed time varies with the number of processors for a
>>> fixed problem size per processor.
>>> I ran my cases with 48 and 96 cores with my current cluster, giving 140 and 90 mins respectively. This is classified as strong scaling.
>>> Cluster specs:
>>> CPU: AMD 6234 2.4GHz
>>> 8 cores / processor (CPU)
>>> 6 CPU / node
>>> So 48 cores / node
>>> Not sure about the memory / node
>>>
>>> The parallel efficiency ‘En’ for a given degree of parallelism ‘n’ indicates how efficiently the program is
>>> accelerated by parallel processing. ‘En’ is given by the following formulae. Although the
>>> derivation processes differ between strong and weak scaling, the derived formulae are the
>>> same.
>>> From the estimated time, my parallel efficiency using Amdahl's law on the current old cluster was 52.7%.
>>> So are my results acceptable?
>>> For the large data set, if using 2205 nodes (2205x8 cores), my expected parallel efficiency is only 0.5%. The proposal recommends a value of > 50%.
>>> The problem with this analysis is that the estimated serial fraction from Amdahl's Law  changes as a function
>>> of problem size, so you cannot take the strong scaling from one problem and apply it to another without a
>>> model of this dependence.
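To make the serial-fraction issue concrete, here is a minimal sketch (mine, not part of the thread, and not the proposal's actual formula) that derives an Amdahl serial fraction from the 140- and 90-minute runs quoted above and extrapolates it to 17640 cores, under the questionable assumption that the fraction stays fixed as problem size and core count change:

```python
# Strong-scaling efficiency and an Amdahl-style extrapolation from the
# measured runs: 140 min on 48 cores, 90 min on 96 cores.

t48, t96 = 140.0, 90.0           # wall-clock minutes
k = 96 / 48                      # core-count ratio = 2

speedup = t48 / t96              # ~1.56x from doubling the cores
efficiency = speedup / k         # ~78% strong-scaling efficiency

# Karp-Flatt-style estimate of the serial fraction f from the relative
# speedup: speedup = 1 / (f + (1 - f) / k)  =>
f = (k / speedup - 1) / (k - 1)  # ~0.29

# Amdahl extrapolation to 17640 cores, relative to the 48-core run.
# This assumes f is independent of core count and problem size, which
# is exactly the assumption criticized above.
m = 17640 / 48
eff_17640 = 1 / (f * m + (1 - f))  # well under 1%

print(f"speedup={speedup:.3f} efficiency={efficiency:.1%} "
      f"f={f:.3f} eff@17640={eff_17640:.2%}")
```

With these inputs the extrapolated efficiency comes out below 1%, the same order of magnitude as the 0.5% quoted from the proposal's formula.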
>>>
>>> Weak scaling does model changes with problem size, so I would measure weak scaling on your current
>>> cluster, and extrapolate to the big machine. I realize that this does not make sense for many scientific
>>> applications, but neither does requiring a certain parallel efficiency.
>> Ok, I checked the results for my weak scaling, and the expected parallel efficiency is even worse. From the formula used, it's obviously doing some sort of exponential extrapolation of the decrease. So unless I can achieve a near >90% speed-up when I double the cores and problem size for my current 48/96 cores setup, extrapolating from about 96 nodes to 10,000 nodes will give a much lower expected parallel efficiency for the new case.
>>
>> However, it's mentioned in the FAQ that, due to memory requirements, it's impossible to get a >90% speed-up when I double the cores and problem size (i.e. a linear increase in performance), which means that I can't get a >90% speed-up for my current 48/96 cores setup. Is that so?
>    What is the output of -ksp_view -log_summary on the problem and then on the problem doubled in size and number of processors?
>
>    Barry
Hi,

I have attached the output

48 cores: log48
96 cores: log96

There are 2 solvers: the momentum linear equation uses bcgs, while the
Poisson equation uses hypre BoomerAMG.

Problem size doubled from 158x266x150 to 158x266x300.
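For reference, a rough weak-scaling efficiency can be read off the two attached logs by comparing their "Time (sec)" maxima. This is a sketch only (my calculation, not a figure from the thread): the two runs differ in Poisson KSP type (gmres in log48, richardson in log96) and may not perform identical work per step, so treat the number as indicative rather than definitive.

```python
# Weak-scaling efficiency from the two attached -log_summary outputs:
# ideally, the doubled problem on doubled cores takes the same wall time,
# so efficiency = T(small, n cores) / T(doubled, 2n cores), with 1.0 ideal.

t_48cores = 5.216e+01   # "Time (sec)" Max for 158x266x150 on 48 cores (log48)
t_96cores = 5.132e+02   # "Time (sec)" Max for the doubled mesh on 96 cores (log96)

weak_efficiency = t_48cores / t_96cores   # ~10%
print(f"weak-scaling efficiency ~ {weak_efficiency:.1%}")
```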
>
>> So is it fair to say that the main problem does not lie in my programming skills, but rather in the way the linear equations are solved?
>>
>> Thanks.
>>>    Thanks,
>>>
>>>       Matt
>>> Is it possible to achieve this type of scaling in PETSc (>50%) when using 17640 (2205x8) cores?
>>> Btw, I do not have access to the system.
>>>
>>>
>>>
>>>
>>>
>>>
>>> -- 
>>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>>> -- Norbert Wiener

-------------- next part --------------
  0.000000000000000E+000  0.353000000000000       0.000000000000000E+000
   90.0000000000000       0.000000000000000E+000  0.000000000000000E+000
   1.00000000000000       0.400000000000000                0     -400000
 AB,AA,BB   -2.47900002275128        2.50750002410496     
   3.46600006963126        3.40250006661518     
 size_x,size_y,size_z          158         266         150
 body_cg_ini  0.523700833348298       0.778648765134454     
   7.03282656467989     
 Warning - length difference between element and cell
 max_element_length,min_element_length,min_delta
  0.000000000000000E+000   10000000000.0000       1.800000000000000E-002
 maximum ngh_surfaces and ngh_vertics are           45          22
 minimum ngh_surfaces and ngh_vertics are           28          10
 body_cg_ini  0.896813342835977      -0.976707581163755     
   7.03282656467989     
 Warning - length difference between element and cell
 max_element_length,min_element_length,min_delta
  0.000000000000000E+000   10000000000.0000       1.800000000000000E-002
 maximum ngh_surfaces and ngh_vertics are           45          22
 minimum ngh_surfaces and ngh_vertics are           28          10
 min IIB_cell_no           0
 max IIB_cell_no         429
 final initial IIB_cell_no        2145
 min I_cell_no           0
 max I_cell_no         460
 final initial I_cell_no        2300
 size(IIB_cell_u),size(I_cell_u),size(IIB_equal_cell_u),size(I_equal_cell_u)
        2145        2300        2145        2300
 IIB_I_cell_no_uvw_total1        3090        3094        3078        3080
        3074        3073
 IIB_I_cell_no_uvw_total2        3102        3108        3089        3077
        3060        3086
KSP Object:(poisson_) 48 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object:(poisson_) 48 MPI processes
  type: hypre
    HYPRE BoomerAMG preconditioning
    HYPRE BoomerAMG: Cycle type V
    HYPRE BoomerAMG: Maximum number of levels 25
    HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
    HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
    HYPRE BoomerAMG: Threshold for strong coupling 0.25
    HYPRE BoomerAMG: Interpolation truncation factor 0
    HYPRE BoomerAMG: Interpolation: max elements per row 0
    HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
    HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
    HYPRE BoomerAMG: Maximum row sums 0.9
    HYPRE BoomerAMG: Sweeps down         1
    HYPRE BoomerAMG: Sweeps up           1
    HYPRE BoomerAMG: Sweeps on coarse    1
    HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
    HYPRE BoomerAMG: Relax weight  (all)      1
    HYPRE BoomerAMG: Outer relax weight (all) 1
    HYPRE BoomerAMG: Using CF-relaxation
    HYPRE BoomerAMG: Measure type        local
    HYPRE BoomerAMG: Coarsen type        Falgout
    HYPRE BoomerAMG: Interpolation type  classical
  linear system matrix = precond matrix:
  Mat Object:   48 MPI processes
    type: mpiaij
    rows=6304200, cols=6304200
    total: nonzeros=4.39181e+07, allocated nonzeros=8.82588e+07
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
    1      0.00150000      0.26453420      0.26150786      1.18591549 -0.76698473E+03 -0.32601079E+02  0.62972429E+07
KSP Object:(momentum_) 48 MPI processes
  type: bcgs
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object:(momentum_) 48 MPI processes
  type: bjacobi
    block Jacobi: number of blocks = 48
    Local solve is same for all blocks, in the following KSP and PC objects:
  KSP Object:  (momentum_sub_)   1 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
    left preconditioning
    using NONE norm type for convergence test
  PC Object:  (momentum_sub_)   1 MPI processes
    type: ilu
      ILU: out-of-place factorization
      0 levels of fill
      tolerance for zero pivot 2.22045e-14
      matrix ordering: natural
      factor fill ratio given 1, needed 1
        Factored matrix follows:
          Mat Object:           1 MPI processes
            type: seqaij
            rows=504336, cols=504336
            package used to perform factorization: petsc
            total: nonzeros=3.24201e+06, allocated nonzeros=3.24201e+06
            total number of mallocs used during MatSetValues calls =0
              not using I-node routines
    linear system matrix = precond matrix:
    Mat Object:     1 MPI processes
      type: seqaij
      rows=504336, cols=504336
      total: nonzeros=3.24201e+06, allocated nonzeros=3.53035e+06
      total number of mallocs used during MatSetValues calls =0
        not using I-node routines
  linear system matrix = precond matrix:
  Mat Object:   48 MPI processes
    type: mpiaij
    rows=18912600, cols=18912600
    total: nonzeros=1.30008e+08, allocated nonzeros=2.64776e+08
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
KSP Object:(poisson_) 48 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object:(poisson_) 48 MPI processes
  type: hypre
    HYPRE BoomerAMG preconditioning
    HYPRE BoomerAMG: Cycle type V
    HYPRE BoomerAMG: Maximum number of levels 25
    HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
    HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
    HYPRE BoomerAMG: Threshold for strong coupling 0.25
    HYPRE BoomerAMG: Interpolation truncation factor 0
    HYPRE BoomerAMG: Interpolation: max elements per row 0
    HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
    HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
    HYPRE BoomerAMG: Maximum row sums 0.9
    HYPRE BoomerAMG: Sweeps down         1
    HYPRE BoomerAMG: Sweeps up           1
    HYPRE BoomerAMG: Sweeps on coarse    1
    HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
    HYPRE BoomerAMG: Relax weight  (all)      1
    HYPRE BoomerAMG: Outer relax weight (all) 1
    HYPRE BoomerAMG: Using CF-relaxation
    HYPRE BoomerAMG: Measure type        local
    HYPRE BoomerAMG: Coarsen type        Falgout
    HYPRE BoomerAMG: Interpolation type  classical
  linear system matrix = precond matrix:
  Mat Object:   48 MPI processes
    type: mpiaij
    rows=6304200, cols=6304200
    total: nonzeros=4.39181e+07, allocated nonzeros=8.82588e+07
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
    2      0.00150000      0.32176572      0.32263580      1.26788415 -0.60299667E+03  0.32647398E+02  0.62967217E+07
 body 1
 implicit forces and moment 1
  -4.26275429376489        3.25233748148304        4.24465168045839     
   2.64197203573494       -4.33941463724848        6.02314872588369     
 body 2
 implicit forces and moment 2
  -2.73305586295329       -4.58125536984606        4.09749596192292     
 -0.631365801453371       -4.97859179904208       -6.05233907360457     
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./a.out on a petsc-3.6.2_shared_rel named n12-09 with 48 processors, by wtay Sun Nov  1 14:22:24 2015
Using Petsc Release Version 3.6.2, Oct, 02, 2015 

                         Max       Max/Min        Avg      Total 
Time (sec):           5.216e+01      1.00078   5.213e+01
Objects:              5.700e+01      1.00000   5.700e+01
Flops:                1.187e+08      1.51264   9.189e+07  4.411e+09
Flops/sec:            2.278e+06      1.51282   1.763e+06  8.461e+07
MPI Messages:         6.500e+01      3.42105   3.721e+01  1.786e+03
MPI Message Lengths:  1.338e+07      2.00000   3.521e+05  6.288e+08
MPI Reductions:       8.900e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 5.2132e+01 100.0%  4.4107e+09 100.0%  1.786e+03 100.0%  3.521e+05      100.0%  8.800e+01  98.9% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult               14 1.0 3.3337e-01 1.6 3.91e+07 1.5 1.3e+03 4.3e+05 0.0e+00  0 33 74 90  0   0 33 74 90  0  4381
MatSolve               3 1.0 6.6930e-02 1.3 1.79e+07 1.9 0.0e+00 0.0e+00 0.0e+00  0 15  0  0  0   0 15  0  0  0  9759
MatLUFactorNum         1 1.0 9.9076e-02 1.9 9.58e+06 2.0 0.0e+00 0.0e+00 0.0e+00  0  8  0  0  0   0  8  0  0  0  3458
MatILUFactorSym        1 1.0 8.4120e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             1 1.0 5.4024e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyBegin       2 1.0 2.2867e-0141.2 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  4   0  0  0  0  5     0
MatAssemblyEnd         2 1.0 3.0022e-01 1.0 0.00e+00 0.0 3.8e+02 1.7e+05 1.6e+01  1  0 21 10 18   1  0 21 10 18     0
MatGetRowIJ            3 1.0 1.0967e-05 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 1.0795e-02 3.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView                5 1.7 6.9060e-03 4.1 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  3   0  0  0  0  3     0
KSPGMRESOrthog        12 1.0 6.7062e-0110.1 2.82e+07 1.3 0.0e+00 0.0e+00 1.2e+01  0 24  0  0 13   0 24  0  0 14  1579
KSPSetUp               3 1.0 4.4972e-02 4.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               3 1.0 4.0408e+01 1.0 1.19e+08 1.5 1.3e+03 4.3e+05 3.5e+01 78100 74 90 39  78100 74 90 40   109
VecDot                 2 1.0 2.5249e-02 9.1 2.02e+06 1.3 0.0e+00 0.0e+00 2.0e+00  0  2  0  0  2   0  2  0  0  2  2996
VecDotNorm2            1 1.0 1.9680e-0211.9 2.02e+06 1.3 0.0e+00 0.0e+00 1.0e+00  0  2  0  0  1   0  2  0  0  1  3844
VecMDot               12 1.0 6.4181e-0119.8 1.41e+07 1.3 0.0e+00 0.0e+00 1.2e+01  0 12  0  0 13   0 12  0  0 14   825
VecNorm               16 1.0 1.9977e-0115.8 6.72e+06 1.3 0.0e+00 0.0e+00 1.6e+01  0  6  0  0 18   0  6  0  0 18  1262
VecScale              14 1.0 4.8060e-03 1.6 2.35e+06 1.3 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0 18364
VecCopy                4 1.0 1.2854e-02 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                26 1.0 3.7126e-02 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                2 1.0 2.4660e-03 3.0 6.72e+05 1.3 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0 10226
VecAXPBYCZ             2 1.0 1.3788e-02 1.5 4.03e+06 1.3 0.0e+00 0.0e+00 0.0e+00  0  3  0  0  0   0  3  0  0  0 10973
VecWAXPY               2 1.0 1.3360e-02 1.4 2.02e+06 1.3 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0  5662
VecMAXPY              14 1.0 4.5768e-02 1.3 1.82e+07 1.3 0.0e+00 0.0e+00 0.0e+00  0 15  0  0  0   0 15  0  0  0 14876
VecAssemblyBegin       6 1.0 2.8086e-02 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+01  0  0  0  0 20   0  0  0  0 20     0
VecAssemblyEnd         6 1.0 3.3140e-05 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin       14 1.0 1.6444e-02 3.0 0.00e+00 0.0 1.3e+03 4.3e+05 0.0e+00  0  0 74 90  0   0  0 74 90  0     0
VecScatterEnd         14 1.0 1.1537e-01 3.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize          14 1.0 1.1996e-0111.2 7.06e+06 1.3 0.0e+00 0.0e+00 1.4e+01  0  6  0  0 16   0  6  0  0 16  2207
PCSetUp                3 1.0 3.2445e+01 1.0 9.58e+06 2.0 0.0e+00 0.0e+00 4.0e+00 62  8  0  0  4  62  8  0  0  5    11
PCSetUpOnBlocks        1 1.0 1.9210e-01 1.7 9.58e+06 2.0 0.0e+00 0.0e+00 0.0e+00  0  8  0  0  0   0  8  0  0  0  1783
PCApply               17 1.0 7.5148e+00 1.1 1.79e+07 1.9 0.0e+00 0.0e+00 0.0e+00 14 15  0  0  0  14 15  0  0  0    87
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix     7              7    182147036     0
       Krylov Solver     3              3        20664     0
              Vector    33             33     59213792     0
      Vector Scatter     2              2         2176     0
           Index Set     7              7      4705612     0
      Preconditioner     3              3         3208     0
              Viewer     2              1          760     0
========================================================================================================================
Average time to get PetscTime(): 1.90735e-07
Average time for MPI_Barrier(): 1.12057e-05
Average time for zero size MPI_Send(): 2.15024e-05
#PETSc Option Table entries:
-log_summary
-momentum_ksp_view
-poisson_ksp_view
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-mpi-dir=/opt/ud/openmpi-1.8.8/ --with-blas-lapack-dir=/opt/ud/intel_xe_2013sp1/mkl/lib/intel64/ --with-debugging=0 --download-hypre=1 --prefix=/home/wtay/Lib/petsc-3.6.2_shared_rel --known-mpi-shared=1 --with-shared-libraries --with-fortran-interfaces=1
-----------------------------------------
Libraries compiled on Sun Oct 18 17:34:07 2015 on hpc12 
Machine characteristics: Linux-3.10.0-123.20.1.el7.x86_64-x86_64-with-centos-7.1.1503-Core
Using PETSc directory: /home/wtay/Codes/petsc-3.6.2
Using PETSc arch: petsc-3.6.2_shared_rel
-----------------------------------------

Using C compiler: /opt/ud/openmpi-1.8.8/bin/mpicc  -fPIC -wd1572 -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /opt/ud/openmpi-1.8.8/bin/mpif90  -fPIC -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/include -I/home/wtay/Codes/petsc-3.6.2/include -I/home/wtay/Codes/petsc-3.6.2/include -I/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/include -I/home/wtay/Lib/petsc-3.6.2_shared_rel/include -I/opt/ud/openmpi-1.8.8/include
-----------------------------------------

Using C linker: /opt/ud/openmpi-1.8.8/bin/mpicc
Using Fortran linker: /opt/ud/openmpi-1.8.8/bin/mpif90
Using libraries: -Wl,-rpath,/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/lib -L/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/lib -lpetsc -Wl,-rpath,/home/wtay/Lib/petsc-3.6.2_shared_rel/lib -L/home/wtay/Lib/petsc-3.6.2_shared_rel/lib -lHYPRE -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib -Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -lmpi_cxx -Wl,-rpath,/opt/ud/intel_xe_2013sp1/mkl/lib/intel64 -L/opt/ud/intel_xe_2013sp1/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -lX11 -lhwloc -lssl -lcrypto -lmpi_usempi -lmpi_mpifh -lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib -lmpi -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib -Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib -Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -ldl 
-----------------------------------------
-------------- next part --------------
  0.000000000000000E+000  0.353000000000000       0.000000000000000E+000
   90.0000000000000       0.000000000000000E+000  0.000000000000000E+000
   1.00000000000000       0.400000000000000                0     -400000
 AB,AA,BB   -3.41000006697141        3.44100006844383     
   3.46600006963126        3.40250006661518     
 size_x,size_y,size_z          158         266         301
 body_cg_ini  0.523700833348298       0.778648765134454     
   7.03282656467989     
 Warning - length difference between element and cell
 max_element_length,min_element_length,min_delta
  0.000000000000000E+000   10000000000.0000       1.800000000000000E-002
 maximum ngh_surfaces and ngh_vertics are           41          20
 minimum ngh_surfaces and ngh_vertics are           28          10
 body_cg_ini  0.896813342835977      -0.976707581163755     
   7.03282656467989     
 Warning - length difference between element and cell
 max_element_length,min_element_length,min_delta
  0.000000000000000E+000   10000000000.0000       1.800000000000000E-002
 maximum ngh_surfaces and ngh_vertics are           41          20
 minimum ngh_surfaces and ngh_vertics are           28          10
 min IIB_cell_no           0
 max IIB_cell_no         415
 final initial IIB_cell_no        2075
 min I_cell_no           0
 max I_cell_no         468
 final initial I_cell_no        2340
 size(IIB_cell_u),size(I_cell_u),size(IIB_equal_cell_u),size(I_equal_cell_u)
        2075        2340        2075        2340
 IIB_I_cell_no_uvw_total1        7635        7644        7643        8279
        8271        8297
 IIB_I_cell_no_uvw_total2        7647        7646        7643        8271
        8274        8266
KSP Object:(poisson_) 96 MPI processes
  type: richardson
    Richardson: damping factor=1
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object:(poisson_) 96 MPI processes
  type: hypre
    HYPRE BoomerAMG preconditioning
    HYPRE BoomerAMG: Cycle type V
    HYPRE BoomerAMG: Maximum number of levels 25
    HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
    HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
    HYPRE BoomerAMG: Threshold for strong coupling 0.25
    HYPRE BoomerAMG: Interpolation truncation factor 0
    HYPRE BoomerAMG: Interpolation: max elements per row 0
    HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
    HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
    HYPRE BoomerAMG: Maximum row sums 0.9
    HYPRE BoomerAMG: Sweeps down         1
    HYPRE BoomerAMG: Sweeps up           1
    HYPRE BoomerAMG: Sweeps on coarse    1
    HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
    HYPRE BoomerAMG: Relax weight  (all)      1
    HYPRE BoomerAMG: Outer relax weight (all) 1
    HYPRE BoomerAMG: Using CF-relaxation
    HYPRE BoomerAMG: Measure type        local
    HYPRE BoomerAMG: Coarsen type        Falgout
    HYPRE BoomerAMG: Interpolation type  classical
  linear system matrix = precond matrix:
  Mat Object:   96 MPI processes
    type: mpiaij
    rows=12650428, cols=12650428
    total: nonzeros=8.82137e+07, allocated nonzeros=1.77106e+08
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
    1      0.00150000      0.35826998      0.36414728      1.27156134 -0.24352631E+04 -0.99308685E+02  0.12633660E+08
KSP Object:(momentum_) 96 MPI processes
  type: bcgs
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object:(momentum_) 96 MPI processes
  type: bjacobi
    block Jacobi: number of blocks = 96
    Local solve is same for all blocks, in the following KSP and PC objects:
  KSP Object:  (momentum_sub_)   1 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
    left preconditioning
    using NONE norm type for convergence test
  PC Object:  (momentum_sub_)   1 MPI processes
    type: ilu
      ILU: out-of-place factorization
      0 levels of fill
      tolerance for zero pivot 2.22045e-14
      matrix ordering: natural
      factor fill ratio given 1, needed 1
        Factored matrix follows:
          Mat Object:           1 MPI processes
            type: seqaij
            rows=504336, cols=504336
            package used to perform factorization: petsc
            total: nonzeros=3.24201e+06, allocated nonzeros=3.24201e+06
            total number of mallocs used during MatSetValues calls =0
              not using I-node routines
    linear system matrix = precond matrix:
    Mat Object:     1 MPI processes
      type: seqaij
      rows=504336, cols=504336
      total: nonzeros=3.24201e+06, allocated nonzeros=3.53035e+06
      total number of mallocs used during MatSetValues calls =0
        not using I-node routines
  linear system matrix = precond matrix:
  Mat Object:   96 MPI processes
    type: mpiaij
    rows=37951284, cols=37951284
    total: nonzeros=2.61758e+08, allocated nonzeros=5.31318e+08
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
KSP Object:(poisson_) 96 MPI processes
  type: richardson
    Richardson: damping factor=1
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object:(poisson_) 96 MPI processes
  type: hypre
    HYPRE BoomerAMG preconditioning
    HYPRE BoomerAMG: Cycle type V
    HYPRE BoomerAMG: Maximum number of levels 25
    HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
    HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
    HYPRE BoomerAMG: Threshold for strong coupling 0.25
    HYPRE BoomerAMG: Interpolation truncation factor 0
    HYPRE BoomerAMG: Interpolation: max elements per row 0
    HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
    HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
    HYPRE BoomerAMG: Maximum row sums 0.9
    HYPRE BoomerAMG: Sweeps down         1
    HYPRE BoomerAMG: Sweeps up           1
    HYPRE BoomerAMG: Sweeps on coarse    1
    HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax on coarse     Gaussian-elimination
    HYPRE BoomerAMG: Relax weight  (all)      1
    HYPRE BoomerAMG: Outer relax weight (all) 1
    HYPRE BoomerAMG: Using CF-relaxation
    HYPRE BoomerAMG: Measure type        local
    HYPRE BoomerAMG: Coarsen type        Falgout
    HYPRE BoomerAMG: Interpolation type  classical
  linear system matrix = precond matrix:
  Mat Object:   96 MPI processes
    type: mpiaij
    rows=12650428, cols=12650428
    total: nonzeros=8.82137e+07, allocated nonzeros=1.77106e+08
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
    2      0.00150000      0.49306841      0.48961181      1.45614703 -0.20361132E+04  0.77916035E+02  0.12632159E+08
 body 1
 implicit forces and moment 1
  -4.19326176380900        3.26285229643405        4.91657786206150     
   3.33023211607813       -4.66288821809535        5.95105697339790     
 body 2
 implicit forces and moment 2
  -2.71360610740664       -4.53650746988691        4.76048497342022     
  -1.13560954211517       -5.55259427154780       -5.98958778241742     
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./a.out on a petsc-3.6.2_shared_rel named n12-05 with 96 processors, by wtay Sun Nov  1 14:26:28 2015
Using Petsc Release Version 3.6.2, Oct, 02, 2015 

                         Max       Max/Min        Avg      Total 
Time (sec):           5.132e+02      1.00006   5.132e+02
Objects:              4.400e+01      1.00000   4.400e+01
Flops:                5.257e+07      1.75932   4.049e+07  3.887e+09
Flops/sec:            1.024e+05      1.75933   7.890e+04  7.574e+06
MPI Messages:         1.010e+02     14.42857   1.385e+01  1.330e+03
MPI Message Lengths:  5.310e+06      2.00000   3.793e+05  5.045e+08
MPI Reductions:       6.300e+01      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 5.1317e+02 100.0%  3.8869e+09 100.0%  1.330e+03 100.0%  3.793e+05      100.0%  6.200e+01  98.4% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 1e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult                2 1.0 1.1735e-01 1.6 1.30e+07 1.9 3.8e+02 9.9e+05 0.0e+00  0 25 29 75  0   0 25 29 75  0  8276
MatSolve               3 1.0 7.3929e-02 1.7 1.79e+07 1.9 0.0e+00 0.0e+00 0.0e+00  0 34  0  0  0   0 34  0  0  0 17787
MatLUFactorNum         1 1.0 1.0028e-01 1.9 9.58e+06 2.0 0.0e+00 0.0e+00 0.0e+00  0 18  0  0  0   0 18  0  0  0  6881
MatILUFactorSym        1 1.0 8.3889e-02 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatConvert             1 1.0 6.0471e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyBegin       2 1.0 4.2017e-01 32.8 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  6   0  0  0  0  6     0
MatAssemblyEnd         2 1.0 3.2434e-01 1.0 0.00e+00 0.0 7.6e+02 1.7e+05 1.6e+01  0  0 57 25 25   0  0 57 25 26     0
MatGetRowIJ            3 1.0 1.3113e-05 13.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 1.6168e-02 5.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatView                5 1.7 1.0472e-02 3.8 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00  0  0  0  0  5   0  0  0  0  5     0
KSPSetUp               3 1.0 5.0210e-02 7.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve               3 1.0 4.9085e+02 1.0 5.26e+07 1.8 3.8e+02 9.9e+05 9.0e+00 96100 29 75 14  96100 29 75 15     8
VecDot                 2 1.0 3.2738e-02 10.5 2.02e+06 1.3 0.0e+00 0.0e+00 2.0e+00  0  4  0  0  3   0  4  0  0  3  4637
VecDotNorm2            1 1.0 3.0590e-02 15.2 2.02e+06 1.3 0.0e+00 0.0e+00 1.0e+00  0  4  0  0  2   0  4  0  0  2  4963
VecNorm                2 1.0 1.2529e-01 20.6 2.02e+06 1.3 0.0e+00 0.0e+00 2.0e+00  0  4  0  0  3   0  4  0  0  3  1212
VecCopy                2 1.0 5.5149e-03 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet                10 1.0 2.0353e-02 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPBYCZ             2 1.0 1.8868e-02 2.1 4.03e+06 1.3 0.0e+00 0.0e+00 0.0e+00  0  8  0  0  0   0  8  0  0  0 16091
VecWAXPY               2 1.0 1.4920e-02 1.6 2.02e+06 1.3 0.0e+00 0.0e+00 0.0e+00  0  4  0  0  0   0  4  0  0  0 10174
VecAssemblyBegin       6 1.0 2.2800e-01 18.8 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+01  0  0  0  0 29   0  0  0  0 29     0
VecAssemblyEnd         6 1.0 5.3167e-05 6.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin        2 1.0 1.0023e-02 4.1 0.00e+00 0.0 3.8e+02 9.9e+05 0.0e+00  0  0 29 75  0   0  0 29 75  0     0
VecScatterEnd          2 1.0 4.0989e-02 3.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
PCSetUp                3 1.0 4.3599e+02 1.0 9.58e+06 2.0 0.0e+00 0.0e+00 4.0e+00 85 18  0  0  6  85 18  0  0  6     2
PCSetUpOnBlocks        1 1.0 1.9143e-01 1.9 9.58e+06 2.0 0.0e+00 0.0e+00 0.0e+00  0 18  0  0  0   0 18  0  0  0  3605
PCApply                3 1.0 7.6561e-02 1.6 1.79e+07 1.9 0.0e+00 0.0e+00 0.0e+00  0 34  0  0  0   0 34  0  0  0 17175
------------------------------------------------------------------------------------------------------------------------
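Reading the table above, nearly all the wall time is spent inside KSPSolve, and within that the PCSetUp phase (the hypre BoomerAMG setup) dominates; the actual solves and mat-vecs are negligible by comparison. A quick sanity check of the %T column from the Max times reported above:

```python
# Max times (sec) taken from the event table above.
total_time = 5.1317e2   # Main Stage total
ksp_solve  = 4.9085e2   # KSPSolve
pc_setup   = 4.3599e2   # PCSetUp (BoomerAMG setup)

def pct(t):
    """Fraction of total wall time, as the rounded percentage shown in %T."""
    return round(100.0 * t / total_time)

print(pct(ksp_solve), pct(pc_setup))  # matches the 96 and 85 in the %T column
```

This suggests any scaling study with this configuration is really measuring the BoomerAMG setup cost, not the solve itself.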

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix     7              7    182147036     0
       Krylov Solver     3              3         3464     0
              Vector    20             20     41709448     0
      Vector Scatter     2              2         2176     0
           Index Set     7              7      4705612     0
      Preconditioner     3              3         3208     0
              Viewer     2              1          760     0
========================================================================================================================
Average time to get PetscTime(): 1.90735e-07
Average time for MPI_Barrier(): 0.0004426
Average time for zero size MPI_Send(): 1.45609e-05
#PETSc Option Table entries:
-log_summary
-momentum_ksp_view
-poisson_ksp_view
#End of PETSc Option Table entries
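The option table and the 96-process header suggest the run was launched along these lines; this is an assumed reconstruction (the mpiexec name and its location are not in the log), shown only as a config fragment tying the options to the invocation:

```shell
# Assumed launch line matching the 96-process run and the option table above.
mpiexec -n 96 ./a.out \
    -log_summary \
    -momentum_ksp_view \
    -poisson_ksp_view
```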
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-mpi-dir=/opt/ud/openmpi-1.8.8/ --with-blas-lapack-dir=/opt/ud/intel_xe_2013sp1/mkl/lib/intel64/ --with-debugging=0 --download-hypre=1 --prefix=/home/wtay/Lib/petsc-3.6.2_shared_rel --known-mpi-shared=1 --with-shared-libraries --with-fortran-interfaces=1
-----------------------------------------
Libraries compiled on Sun Oct 18 17:34:07 2015 on hpc12 
Machine characteristics: Linux-3.10.0-123.20.1.el7.x86_64-x86_64-with-centos-7.1.1503-Core
Using PETSc directory: /home/wtay/Codes/petsc-3.6.2
Using PETSc arch: petsc-3.6.2_shared_rel
-----------------------------------------

Using C compiler: /opt/ud/openmpi-1.8.8/bin/mpicc  -fPIC -wd1572 -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /opt/ud/openmpi-1.8.8/bin/mpif90  -fPIC -O3   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/include -I/home/wtay/Codes/petsc-3.6.2/include -I/home/wtay/Codes/petsc-3.6.2/include -I/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/include -I/home/wtay/Lib/petsc-3.6.2_shared_rel/include -I/opt/ud/openmpi-1.8.8/include
-----------------------------------------

Using C linker: /opt/ud/openmpi-1.8.8/bin/mpicc
Using Fortran linker: /opt/ud/openmpi-1.8.8/bin/mpif90
Using libraries: -Wl,-rpath,/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/lib -L/home/wtay/Codes/petsc-3.6.2/petsc-3.6.2_shared_rel/lib -lpetsc -Wl,-rpath,/home/wtay/Lib/petsc-3.6.2_shared_rel/lib -L/home/wtay/Lib/petsc-3.6.2_shared_rel/lib -lHYPRE -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib -Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -lmpi_cxx -Wl,-rpath,/opt/ud/intel_xe_2013sp1/mkl/lib/intel64 -L/opt/ud/intel_xe_2013sp1/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -lX11 -lhwloc -lssl -lcrypto -lmpi_usempi -lmpi_mpifh -lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib -lmpi -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib -Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s -Wl,-rpath,/opt/ud/openmpi-1.8.8/lib -L/opt/ud/openmpi-1.8.8/lib -Wl,-rpath,/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -L/opt/ud/intel_xe_2013sp1/composer_xe_2013_sp1.2.144/compiler/lib/intel64 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -L/usr/lib/gcc/x86_64-redhat-linux/4.8.3 -ldl 
-----------------------------------------

