[petsc-users] Enquiry regarding log summary results
TAY wee-beng
zonexo at gmail.com
Tue Oct 2 16:55:50 CDT 2012
On 27/9/2012 1:44 PM, Matthew Knepley wrote:
> On Thu, Sep 27, 2012 at 3:49 AM, TAY wee-beng <zonexo at gmail.com> wrote:
>
> Hi,
>
> I'm doing a log summary for my 3D CFD code. I have some questions:
>
> 1. If I'm solving 3 linear equations using KSP, is the result
> given in the log summary the total of the 3 linear equations'
> performance? How can I get the performance for each individual equation?
>
>
> Use logging stages:
> http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Profiling/PetscLogStagePush.html
I retried using:

PetscLogStage stage

call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
call PetscLogStageRegister("momentum",stage,ierr)
call PetscLogStagePush(stage,ierr)
! ... 1st stage - solving momentum eqn - run 1st logging
call PetscLogStagePop(ierr)
call PetscLogStageRegister("poisson",stage,ierr)
call PetscLogStagePush(stage,ierr)
! ... 2nd stage - solving poisson eqn - run 2nd logging
call PetscLogStagePop(ierr)
call PetscFinalize(ierr)
I have attached the log_summary results. It turns out that there are 3
stages: 0, 1 (momentum), and 2 (poisson). Shouldn't I be getting 2 stages
instead? How can I correct this, or does it not matter?
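
For reference, here is a minimal sketch of the staged-logging pattern from the
manual page above, written with a separate handle for each stage (the handle
names are just illustrative, and the usual PETSc Fortran includes and
declarations are assumed). Note that PETSc always keeps its default "Main
Stage" as stage 0, so two user-registered stages show up as stages 1 and 2 in
the -log_summary output:

PetscLogStage stage_momentum, stage_poisson
PetscErrorCode ierr

call PetscInitialize(PETSC_NULL_CHARACTER,ierr)
call PetscLogStageRegister("momentum",stage_momentum,ierr)
call PetscLogStageRegister("poisson",stage_poisson,ierr)

! 1st stage: momentum equations
call PetscLogStagePush(stage_momentum,ierr)
! ... momentum KSPSolve ...
call PetscLogStagePop(ierr)

! 2nd stage: Poisson equation
call PetscLogStagePush(stage_poisson,ierr)
! ... Poisson KSPSolve ...
call PetscLogStagePop(ierr)

call PetscFinalize(ierr)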
>
> 2. If I run my code for 10 time steps, does the log summary give
> the total or the average performance/ratio?
>
>
> Total.
>
> 3. Besides PETSc, I'm also using HYPRE's native geometric MG
> (Struct) to solve my Cartesian-grid CFD Poisson eqn. Is there
> any way I can use PETSc's log summary to get HYPRE's performance?
> If I use BoomerAMG through PETSc, can I get its performance?
>
>
> If you mean flops, only if you count them yourself and tell PETSc
> using
> http://www.mcs.anl.gov/petsc/petsc-dev/docs/manualpages/Profiling/PetscLogFlops.html
>
> This is the disadvantage of using packages that do not properly
> monitor things :)
>
> Matt
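
For illustration, here is a minimal Fortran sketch of instrumenting an
external (HYPRE Struct) solve by hand, following the PetscLogFlops manual page
linked above. The event name, the registered class, and the flop estimate are
placeholders; PETSc only records whatever estimate you supply:

PetscLogEvent  hypre_solve
PetscClassId   classid
PetscLogDouble est_flops
PetscErrorCode ierr

! Register an event so the HYPRE solve appears as its own line in -log_summary
call PetscClassIdRegister("HYPRE Struct solver",classid,ierr)
call PetscLogEventRegister("HYPREStructSolve",classid,hypre_solve,ierr)

call PetscLogEventBegin(hypre_solve,ierr)
! ... call the HYPRE Struct solver here ...
! PETSc cannot see HYPRE's internal work, so supply your own estimate,
! e.g. (flops per iteration) x (iterations reported by HYPRE); placeholder:
est_flops = 1.0d6
call PetscLogFlops(est_flops,ierr)
call PetscLogEventEnd(hypre_solve,ierr)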
>
>
> --
> Yours sincerely,
>
> TAY wee-beng
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
-------------- next part --------------
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
./a.out on a petsc-3.3-dev_shared_rel named n12-10 with 48 processors, by wtay Tue Oct 2 23:41:57 2012
Using Petsc Development HG revision: 9883b54053eca13dd473a4711adfd309d1436b6e HG Date: Sun Sep 30 22:42:36 2012 -0500
Max Max/Min Avg Total
Time (sec): 1.181e+02 1.01842 1.171e+02
Objects: 3.870e+02 1.00000 3.870e+02
Flops: 2.920e+09 1.25508 2.551e+09 1.224e+11
Flops/sec: 2.475e+07 1.24073 2.179e+07 1.046e+09
MPI Messages: 1.075e+04 3.04562 8.944e+03 4.293e+05
MPI Message Lengths: 2.917e+08 2.17986 3.148e+04 1.351e+10
MPI Reductions: 1.590e+03 1.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 8.2603e+01 70.5% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 2.000e+01 1.3%
1: poisson: 2.0226e+01 17.3% 6.7061e+10 54.8% 2.461e+05 57.3% 1.799e+04 57.1% 1.279e+03 80.4%
2: momentum: 1.4257e+01 12.2% 5.5380e+10 45.2% 1.832e+05 42.7% 1.349e+04 42.9% 2.900e+02 18.2%
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %f - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
--- Event Stage 1: poisson
MatMult 889 1.0 9.2293e+00 1.3 9.02e+08 1.3 1.6e+05 3.5e+04 0.0e+00 7 30 37 42 0 40 54 65 73 0 3927
MatMultAdd 129 1.0 6.0007e-01 2.2 6.14e+07 1.5 1.5e+04 9.1e+03 0.0e+00 0 2 3 1 0 2 3 6 2 0 3673
MatMultTranspose 129 1.0 1.1698e+00 3.7 6.14e+07 1.5 1.5e+04 9.1e+03 0.0e+00 0 2 3 1 0 3 3 6 2 0 1884
MatSolve 86 0.0 1.3154e-02 0.0 9.23e+06 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 702
MatLUFactorSym 1 1.0 2.6720e-03167.3 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatLUFactorNum 1 1.0 8.1620e-031006.9 5.33e+06 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 653
MatConvert 3 1.0 4.8410e-02 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatScale 9 1.0 4.9452e-02 2.6 4.65e+06 1.4 5.8e+02 3.3e+04 0.0e+00 0 0 0 0 0 0 0 0 0 0 3655
MatAssemblyBegin 50 1.0 2.3006e+00 3.4 0.00e+00 0.0 1.6e+03 6.5e+03 5.4e+01 1 0 0 0 3 7 0 1 0 4 0
MatAssemblyEnd 50 1.0 6.2769e-01 1.2 0.00e+00 0.0 9.0e+03 4.3e+03 1.4e+02 1 0 2 0 9 3 0 4 1 11 0
MatGetRow 311990 1.2 1.7204e-01 2.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 0 0 0 0 0
MatGetRowIJ 1 0.0 1.6093e-04 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 0.0 5.0211e-04 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.2e-02 0 0 0 0 0 0 0 0 0 0 0
MatCoarsen 3 1.0 6.2435e-01 1.1 0.00e+00 0.0 3.1e+04 1.8e+04 3.8e+02 1 0 7 4 24 3 0 13 7 30 0
MatAXPY 3 1.0 1.9019e-02 3.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatMatMult 3 1.0 4.3979e-01 1.0 3.24e+06 1.3 4.1e+03 1.4e+04 7.2e+01 0 0 1 0 5 2 0 2 1 6 294
MatMatMultSym 3 1.0 3.5735e-01 1.1 0.00e+00 0.0 3.5e+03 1.1e+04 6.6e+01 0 0 1 0 4 2 0 1 1 5 0
MatMatMultNum 3 1.0 9.2986e-02 1.2 3.24e+06 1.3 5.8e+02 3.3e+04 6.0e+00 0 0 0 0 0 0 0 0 0 0 1393
MatPtAP 3 1.0 1.1116e+00 1.0 1.13e+08 2.5 8.8e+03 2.6e+04 8.1e+01 1 2 2 2 5 5 4 4 3 6 2345
MatPtAPSymbolic 3 1.0 7.1388e-01 1.1 0.00e+00 0.0 7.9e+03 2.1e+04 7.5e+01 1 0 2 1 5 3 0 3 2 6 0
MatPtAPNumeric 3 1.0 4.0797e-01 1.1 1.13e+08 2.5 8.9e+02 6.3e+04 6.0e+00 0 2 0 0 0 2 4 0 1 0 6389
MatTrnMatMult 3 1.0 3.2513e+00 1.0 1.87e+08 5.1 3.4e+03 2.1e+05 8.7e+01 3 6 1 5 5 16 11 1 9 7 2349
MatGetLocalMat 15 1.0 1.3235e-01 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.8e+01 0 0 0 0 1 0 0 0 0 1 0
MatGetBrAoCol 9 1.0 1.8796e-01 2.9 0.00e+00 0.0 4.1e+03 4.7e+04 1.2e+01 0 0 1 1 1 1 0 2 2 1 0
MatGetSymTrans 6 1.0 1.8284e-02 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPGMRESOrthog 73 1.0 3.2654e-01 2.3 3.44e+07 1.2 0.0e+00 0.0e+00 7.3e+01 0 1 0 0 5 1 2 0 0 6 4578
KSPSetUp 9 1.0 4.6377e-02 2.2 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00 0 0 0 0 1 0 0 0 0 1 0
KSPSolve 1 1.0 1.9749e+01 1.0 1.56e+09 1.2 2.5e+05 3.1e+04 1.3e+03 17 55 57 57 79 98100100100 99 3396
VecDot 42 1.0 9.3734e-01 2.9 1.16e+07 1.1 0.0e+00 0.0e+00 4.2e+01 0 0 0 0 3 2 1 0 0 3 553
VecDotNorm2 21 1.0 9.9729e-01 3.2 2.32e+07 1.1 0.0e+00 0.0e+00 6.3e+01 0 1 0 0 4 3 2 0 0 5 1040
VecMDot 73 1.0 2.9734e-01 4.0 1.72e+07 1.2 0.0e+00 0.0e+00 7.3e+01 0 1 0 0 5 1 1 0 0 6 2514
VecNorm 141 1.0 6.8643e-01 3.4 9.56e+06 1.2 0.0e+00 0.0e+00 1.4e+02 0 0 0 0 9 2 1 0 0 11 614
VecScale 635 1.0 8.8242e-02 3.7 2.86e+07 1.2 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 2 0 0 0 14090
VecCopy 177 1.0 9.4576e-02 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 660 1.0 7.7769e-02 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 1078 1.0 3.0908e-01 2.2 1.08e+08 1.2 0.0e+00 0.0e+00 0.0e+00 0 4 0 0 0 1 7 0 0 0 15168
VecAYPX 1032 1.0 6.9673e-01 3.5 6.71e+07 1.2 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 2 4 0 0 0 4193
VecAXPBYCZ 42 1.0 2.4312e-01 4.8 2.32e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 1 2 0 0 0 4267
VecWAXPY 42 1.0 2.2274e-01 4.6 1.16e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 1 0 0 0 2329
VecMAXPY 119 1.0 8.7651e-02 3.4 2.03e+07 1.2 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 10078
VecAssemblyBegin 115 1.0 3.7941e-01 2.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.4e+02 0 0 0 0 22 1 0 0 0 27 0
VecAssemblyEnd 115 1.0 4.2558e-04 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecPointwiseMult 807 1.0 6.0901e-01 2.3 4.20e+07 1.2 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 2 3 0 0 0 3001
VecScatterBegin 1266 1.0 2.3614e-01 2.4 0.00e+00 0.0 2.3e+05 3.0e+04 0.0e+00 0 0 53 50 0 1 0 92 87 0 0
VecScatterEnd 1266 1.0 7.4624e+00 5.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 18 0 0 0 0 0
VecSetRandom 3 1.0 8.0581e-03 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecNormalize 119 1.0 5.9730e-01 5.4 5.23e+06 1.2 0.0e+00 0.0e+00 1.2e+02 0 0 0 0 7 2 0 0 0 9 376
PCSetUp 2 1.0 7.8047e+00 1.0 3.19e+08 1.4 6.3e+04 3.2e+04 1.0e+03 7 11 15 15 63 39 20 26 26 78 1759
PCSetUpOnBlocks 43 1.0 1.1463e-0269.3 5.33e+06 0.0 0.0e+00 0.0e+00 5.0e+00 0 0 0 0 0 0 0 0 0 0 465
PCApply 43 1.0 9.7608e+00 1.2 1.17e+09 1.3 1.8e+05 2.9e+04 1.4e+02 8 38 42 38 9 46 69 73 67 11 4774
PCGAMGgraph_AGG 3 1.0 9.0476e-01 1.0 3.24e+06 1.3 1.5e+03 1.8e+04 5.7e+01 1 0 0 0 4 4 0 1 0 4 143
PCGAMGcoarse_AGG 3 1.0 4.0595e+00 1.0 1.87e+08 5.1 3.8e+04 3.8e+04 5.3e+02 3 6 9 11 33 20 11 16 19 41 1881
PCGAMGProl_AGG 3 1.0 5.8402e-01 1.0 0.00e+00 0.0 4.2e+03 2.0e+04 8.4e+01 0 0 1 1 5 3 0 2 1 7 0
PCGAMGPOpt_AGG 3 1.0 1.1830e+00 1.0 8.01e+07 1.2 9.9e+03 2.5e+04 1.6e+02 1 3 2 2 10 6 5 4 3 12 2833
--- Event Stage 2: momentum
MatMult 861 1.0 9.5562e+00 1.3 8.82e+08 1.3 1.5e+05 3.6e+04 0.0e+00 7 29 36 41 0 59 64 84 95 0 3713
MatMultAdd 129 1.0 6.3861e-01 2.1 6.14e+07 1.5 1.5e+04 9.1e+03 0.0e+00 0 2 3 1 0 3 4 8 2 0 3452
MatMultTranspose 129 1.0 1.2222e+00 3.6 6.14e+07 1.5 1.5e+04 9.1e+03 0.0e+00 1 2 3 1 0 4 4 8 2 0 1803
MatSolve 8929.7 7.7899e-02 2.1 2.46e+07 2.1 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 8874
MatLUFactorNum 1 1.0 1.3516e-01 2.0 8.40e+06 1.3 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 1 0 0 0 2759
MatILUFactorSym 1 1.0 1.2454e-01 2.3 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00 0 0 0 0 0 1 0 0 0 0 0
MatAssemblyBegin 1 1.0 3.8949e-011845.9 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 2 0 0 0 1 0
MatAssemblyEnd 1 1.0 2.3289e-01 1.2 0.00e+00 0.0 1.9e+02 1.0e+05 8.0e+00 0 0 0 0 1 1 0 0 0 3 0
MatGetRowIJ 1 1.0 5.9605e-06 3.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatGetOrdering 1 1.0 1.7303e-02 3.4 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00 0 0 0 0 0 0 0 0 0 1 0
KSPGMRESOrthog 43 1.0 1.7178e-02 2.5 5.41e+04 0.0 0.0e+00 0.0e+00 4.3e+01 0 0 0 0 3 0 0 0 0 15 3
KSPSetUp 2 1.0 5.8122e-02 2.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 2 1.0 1.3166e+01 1.0 1.36e+09 1.3 1.8e+05 3.2e+04 2.7e+02 11 45 43 43 17 92100100100 92 4206
VecDot 44 1.0 8.5812e-01 2.4 1.32e+07 1.1 0.0e+00 0.0e+00 4.4e+01 1 0 0 0 3 4 1 0 0 15 691
VecDotNorm2 22 1.0 9.6094e-01 2.1 2.65e+07 1.1 0.0e+00 0.0e+00 6.6e+01 1 1 0 0 4 5 2 0 0 23 1234
VecMDot 43 1.0 1.6975e-02 2.6 2.70e+04 0.0 0.0e+00 0.0e+00 4.3e+01 0 0 0 0 3 0 0 0 0 15 2
VecNorm 110 1.0 7.7599e-01 2.9 7.78e+06 1.2 0.0e+00 0.0e+00 1.1e+02 0 0 0 0 7 3 1 0 0 38 446
VecScale 602 1.0 9.0100e-02 4.2 2.69e+07 1.2 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 0 2 0 0 0 12970
VecCopy 176 1.0 1.0720e-01 3.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecSet 612 1.0 8.4964e-02 3.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 1075 1.0 3.5763e-01 3.1 1.07e+08 1.2 0.0e+00 0.0e+00 0.0e+00 0 4 0 0 0 2 8 0 0 0 13071
VecAYPX 1032 1.0 6.9883e-01 4.0 6.71e+07 1.2 0.0e+00 0.0e+00 0.0e+00 0 2 0 0 0 3 5 0 0 0 4181
VecAXPBYCZ 44 1.0 2.5807e-01 4.7 2.65e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 1 2 0 0 0 4595
VecWAXPY 44 1.0 2.3328e-01 4.7 1.32e+07 1.1 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 1 1 0 0 0 2541
VecMAXPY 86 1.0 2.7514e-04 6.2 5.42e+04 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 197
VecAssemblyBegin 4 1.0 1.2141e-0134.8 0.00e+00 0.0 0.0e+00 0.0e+00 1.2e+01 0 0 0 0 1 0 0 0 0 4 0
VecAssemblyEnd 4 1.0 4.1008e-0510.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecPointwiseMult 774 1.0 8.1749e-01 3.9 4.02e+07 1.2 0.0e+00 0.0e+00 0.0e+00 0 1 0 0 0 3 3 0 0 0 2144
VecScatterBegin 1119 1.0 2.4684e-01 3.1 0.00e+00 0.0 1.8e+05 3.2e+04 0.0e+00 0 0 43 43 0 1 0100100 0 0
VecScatterEnd 1119 1.0 8.0261e+00 6.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 28 0 0 0 0 0
VecNormalize 86 1.0 4.9263e-0194.5 8.13e+04 0.0 0.0e+00 0.0e+00 8.6e+01 0 0 0 0 5 2 0 0 0 30 0
PCSetUp 2 1.0 2.6639e-01 2.1 8.40e+06 1.3 0.0e+00 0.0e+00 5.0e+00 0 0 0 0 0 1 1 0 0 2 1400
PCSetUpOnBlocks 44 1.0 2.6618e-01 2.1 8.40e+06 1.3 0.0e+00 0.0e+00 3.0e+00 0 0 0 0 0 1 1 0 0 1 1401
PCApply 46 1.0 1.0259e+01 1.1 1.18e+09 1.3 1.8e+05 2.9e+04 1.3e+02 8 39 42 38 8 69 85 98 89 44 4609
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
Matrix 6 40 183988644 0
Krylov Solver 2 8 26224 0
Vector 4 54 42591168 0
Vector Scatter 0 8 8480 0
Index Set 0 8 1664536 0
Preconditioner 0 8 7988 0
Viewer 1 0 0 0
--- Event Stage 1: poisson
Matrix 84 51 118292312 0
Matrix Coarsen 3 3 1860 0
Krylov Solver 8 3 90384 0
Vector 170 129 36025624 0
Vector Scatter 22 15 15900 0
Index Set 55 50 424808 0
Preconditioner 9 3 2592 0
PetscRandom 3 3 1848 0
--- Event Stage 2: momentum
Matrix 1 0 0 0
Krylov Solver 1 0 0 0
Vector 10 1 1504 0
Vector Scatter 1 0 0 0
Index Set 5 2 205792 0
Preconditioner 2 0 0 0
========================================================================================================================
Average time to get PetscTime(): 1.90735e-07
Average time for MPI_Barrier(): 3.9196e-05
Average time for zero size MPI_Send(): 2.75373e-05
#PETSc Option Table entries:
-log_summary
-poisson_pc_gamg_agg_nsmooths 1
-poisson_pc_type gamg
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure run at: Mon Oct 1 11:36:09 2012
Configure options: --with-mpi-dir=/opt/openmpi-1.5.3/ --with-blas-lapack-dir=/opt/intelcpro-11.1.059/mkl/lib/em64t/ --with-debugging=0 --download-hypre=1 --prefix=/home/wtay/Lib/petsc-3.3-dev_shared_rel --known-mpi-shared=1 --with-shared-libraries