[petsc-users] GAMG Indefinite

Sanjay Govindjee s_g at berkeley.edu
Thu May 19 13:07:46 CDT 2016


I am trying to solve a very ordinary nonlinear elasticity problem
using -ksp_type cg -pc_type gamg in PETSc 3.7.0, which worked fine
in PETSc 3.5.3.

The problem I am seeing is that on the first Newton iteration, the Ax=b
solve returns with an Indefinite Preconditioner error
(KSPGetConvergedReason == -8):
(log_view.txt output also attached below)

   0 KSP Residual norm 8.411630828687e-02
   1 KSP Residual norm 2.852209578900e-02
   NO CONVERGENCE REASON:  Indefinite Preconditioner
   NO CONVERGENCE REASON:  Indefinite Preconditioner

On the next and subsequent Newton iterations, I see perfectly normal
behavior and the problem converges quadratically.  The results look fine.

I tried the same problem with -pc_type jacobi, as well as with SuperLU and
MUMPS, and they all work without complaint.
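
Roughly, those comparison runs used option sets like the following (a sketch
from memory, not copied verbatim from the job scripts; I am assuming the
direct solvers were selected through -pc_factor_mat_solver_package):

    -ksp_type cg -ksp_monitor -pc_type jacobi
    -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package superlu_dist
    -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps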

My run line for GAMG is:
-ksp_type cg -ksp_monitor -log_view -pc_type gamg -pc_gamg_type agg -pc_gamg_agg_nsmooths 1 -options_left
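
(The same options can also be kept in a file and passed with -options_file;
a minimal sketch, with a hypothetical file name gamg.opts:

    # gamg.opts
    -ksp_type cg
    -ksp_monitor
    -log_view
    -pc_type gamg
    -pc_gamg_type agg
    -pc_gamg_agg_nsmooths 1
    -options_left

    ./feap -options_file gamg.opts
)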

The code flow looks like:

    ! If no matrix allocation yet
    if(Kmat.eq.0) then
       call MatCreate(PETSC_COMM_WORLD,Kmat,ierr)
       call MatSetSizes(Kmat,numpeq,numpeq,PETSC_DETERMINE,PETSC_DETERMINE,ierr)
       call MatSetBlockSize(Kmat,nsbk,ierr)
       call MatSetFromOptions(Kmat, ierr)
       call MatSetType(Kmat,MATAIJ,ierr)
       call MatMPIAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),PETSC_NULL_INTEGER,mr(np(247)),ierr)
       call MatSeqAIJSetPreallocation(Kmat,PETSC_NULL_INTEGER,mr(np(246)),ierr)
    endif

    call MatZeroEntries(Kmat,ierr)

    ! Code to set values in matrix

    call MatAssemblyBegin(Kmat, MAT_FINAL_ASSEMBLY, ierr)
    call MatAssemblyEnd(Kmat, MAT_FINAL_ASSEMBLY, ierr)
    call MatSetOption(Kmat,MAT_NEW_NONZERO_LOCATIONS,PETSC_TRUE,ierr)

    ! If no rhs allocation yet
    if(rhs.eq.0) then
       call VecCreate        (PETSC_COMM_WORLD, rhs, ierr)
       call VecSetSizes      (rhs, numpeq, PETSC_DECIDE, ierr)
       call VecSetFromOptions(rhs, ierr)
    endif

    ! Code to set values in RHS

    call VecAssemblyBegin(rhs, ierr)
    call VecAssemblyEnd(rhs, ierr)

    if(kspsol_exists) then
       call KSPDestroy(kspsol,ierr)
    endif

    call KSPCreate(PETSC_COMM_WORLD, kspsol   ,ierr)
    call KSPSetOperators(kspsol, Kmat, Kmat, ierr)
    call KSPSetFromOptions(kspsol,ierr)
    call KSPGetPC(kspsol, pc ,      ierr)

    call PCSetCoordinates(pc,ndm,numpn,hr(np(43)),ierr)

    call KSPSolve(kspsol, rhs, sol, ierr)
    call KSPGetConvergedReason(kspsol,reason,ierr)

    ! update solution, go back to the top


With GAMG, reason comes back as -8 on the first Ax=b solve and as 2 or 3
after that.  With the other solvers it comes back as 2 or 3 for the
iterative options and 4 when I use one of the direct solvers.
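
In case it helps to see how I am reading those numbers, here is a minimal
sketch of the check (not part of the code above; it assumes the usual
petsc/finclude headers so the KSPConvergedReason names are available, and
reuses the kspsol, reason, ierr variables):

    call KSPGetConvergedReason(kspsol, reason, ierr)
    if(reason .eq. KSP_DIVERGED_INDEFINITE_PC) then   ! -8
       write(*,*) 'diverged: indefinite preconditioner'
    elseif(reason .eq. KSP_CONVERGED_RTOL) then       !  2
       write(*,*) 'converged on relative tolerance'
    elseif(reason .eq. KSP_CONVERGED_ATOL) then       !  3
       write(*,*) 'converged on absolute tolerance'
    elseif(reason .eq. KSP_CONVERGED_ITS) then        !  4
       write(*,*) 'reached requested iterations (e.g. preonly direct solve)'
    endif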

Any ideas on what is causing the Indefinite PC on the first iteration 
with GAMG?

Thanks in advance,
-sanjay

-- 
-----------------------------------------------
Sanjay Govindjee, PhD, PE
Professor of Civil Engineering

779 Davis Hall
University of California
Berkeley, CA 94720-1710

Voice:  +1 510 642 6060
FAX:    +1 510 643 5264
s_g at berkeley.edu
http://www.ce.berkeley.edu/~sanjay
-----------------------------------------------

Books:

Engineering Mechanics of Deformable
Solids: A Presentation with Exercises
http://www.oup.com/us/catalog/general/subject/Physics/MaterialsScience/?view=usa&ci=9780199651641
http://ukcatalogue.oup.com/product/9780199651641.do
http://amzn.com/0199651647

Engineering Mechanics 3 (Dynamics) 2nd Edition
http://www.springer.com/978-3-642-53711-0
http://amzn.com/3642537111

Engineering Mechanics 3, Supplementary Problems: Dynamics
http://www.amzn.com/B00SOXN8JU

-----------------------------------------------

-------------- next part (log_view.txt) --------------
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

/Users/sg/Feap/ver85/parfeap/feap
on a intel named localhost with 2 processors, by sg Thu May 19 10:56:48 2016
Using Petsc Release Version 3.7.0, Apr, 25, 2016 

                         Max       Max/Min        Avg      Total 
Time (sec):           1.470e+01      1.00000   1.470e+01
Objects:              1.436e+03      1.00701   1.431e+03
Flops:                2.443e+07      1.12507   2.307e+07  4.615e+07
Flops/sec:            1.662e+06      1.12507   1.570e+06  3.140e+06
MPI Messages:         6.865e+02      1.00000   6.865e+02  1.373e+03
MPI Message Lengths:  4.680e+05      1.00000   6.817e+02  9.360e+05
MPI Reductions:       2.026e+03      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 1.4698e+01 100.0%  4.6146e+07 100.0%  1.373e+03 100.0%  6.817e+02      100.0%  2.025e+03 100.0% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult              702 1.0 5.1281e-03 1.1 1.06e+07 1.1 7.4e+02 5.5e+02 0.0e+00  0 44 54 43  0   0 44 54 43  0  3930
MatMultAdd            78 1.0 1.1404e-03 5.2 2.96e+05 1.2 3.9e+01 3.1e+02 0.0e+00  0  1  3  1  0   0  1  3  1  0   469
MatMultTranspose      78 1.0 3.2711e-04 1.3 2.96e+05 1.2 3.9e+01 3.1e+02 0.0e+00  0  1  3  1  0   0  1  3  1  0  1635
MatSolve              39 0.0 3.0041e-05 0.0 5.97e+03 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   199
MatSOR               578 1.0 3.9439e-03 1.1 8.20e+06 1.1 0.0e+00 0.0e+00 0.0e+00  0 33  0  0  0   0 33  0  0  0  3912
MatLUFactorSym         5 1.0 4.8161e-05 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         5 1.0 4.6730e-05 7.3 2.24e+03 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0    48
MatScale              30 1.0 1.2112e-04 1.1 7.26e+04 1.1 1.0e+01 2.8e+02 0.0e+00  0  0  1  0  0   0  0  1  0  0  1124
MatResidual           78 1.0 5.8436e-04 1.1 1.16e+06 1.1 7.8e+01 5.5e+02 0.0e+00  0  5  6  5  0   0  5  6  5  0  3738
MatAssemblyBegin     230 1.0 2.0638e-03 3.1 0.00e+00 0.0 6.0e+01 1.1e+03 1.7e+02  0  0  4  7  8   0  0  4  7  8     0
MatAssemblyEnd       230 1.0 4.3290e-03 1.0 0.00e+00 0.0 1.3e+02 6.9e+01 5.4e+02  0  0 10  1 26   0  0 10  1 26     0
MatGetRow          12260 1.1 1.3771e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            5 0.0 1.1921e-05 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetSubMatrix       10 1.0 2.0628e-03 1.0 0.00e+00 0.0 3.5e+01 1.4e+03 1.7e+02  0  0  3  5  8   0  0  3  5  8     0
MatGetOrdering         5 0.0 1.0109e-04 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen            10 1.0 4.8208e-04 1.0 0.00e+00 0.0 5.5e+01 2.9e+02 1.5e+01  0  0  4  2  1   0  0  4  2  1     0
MatZeroEntries        15 1.0 1.3089e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAXPY               10 1.0 8.3613e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+01  0  0  0  0  1   0  0  0  0  1     0
MatMatMult            10 1.0 3.6321e-03 1.0 4.16e+05 1.1 6.0e+01 7.4e+02 1.6e+02  0  2  4  5  8   0  2  4  5  8   223
MatMatMultSym         10 1.0 2.8400e-03 1.0 0.00e+00 0.0 5.0e+01 5.6e+02 1.4e+02  0  0  4  3  7   0  0  4  3  7     0
MatMatMultNum         10 1.0 7.6842e-04 1.0 4.16e+05 1.1 1.0e+01 1.7e+03 2.0e+01  0  2  1  2  1   0  2  1  2  1  1056
MatPtAP               10 1.0 7.5903e-03 1.0 1.89e+06 1.3 1.1e+02 1.4e+03 1.7e+02  0  7  8 16  8   0  7  8 16  8   447
MatPtAPSymbolic       10 1.0 5.4739e-03 1.0 0.00e+00 0.0 6.0e+01 1.6e+03 7.0e+01  0  0  4 10  3   0  0  4 10  3     0
MatPtAPNumeric        10 1.0 2.1083e-03 1.0 1.89e+06 1.3 5.0e+01 1.1e+03 1.0e+02  0  7  4  6  5   0  7  4  6  5  1609
MatTrnMatMult          5 1.0 7.2398e-03 1.0 5.69e+05 1.2 6.0e+01 2.5e+03 9.5e+01  0  2  4 16  5   0  2  4 16  5   146
MatTrnMatMultSym       5 1.0 4.4360e-03 1.0 0.00e+00 0.0 5.0e+01 1.1e+03 8.5e+01  0  0  4  6  4   0  0  4  6  4     0
MatTrnMatMultNum       5 1.0 2.7852e-03 1.0 5.69e+05 1.2 1.0e+01 9.7e+03 1.0e+01  0  2  1 10  0   0  2  1 10  0   379
MatGetLocalMat        40 1.0 5.4884e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         30 1.0 4.6182e-04 1.0 0.00e+00 0.0 7.0e+01 1.8e+03 0.0e+00  0  0  5 13  0   0  0  5 13  0     0
VecDot                 5 1.0 2.1219e-05 1.7 4.42e+03 1.0 0.0e+00 0.0e+00 5.0e+00  0  0  0  0  0   0  0  0  0  0   415
VecMDot              200 1.0 6.1178e-04 1.6 5.42e+05 1.1 0.0e+00 0.0e+00 2.0e+02  0  2  0  0 10   0  2  0  0 10  1680
VecTDot               69 1.0 2.8133e-04 4.3 6.09e+04 1.0 0.0e+00 0.0e+00 6.9e+01  0  0  0  0  3   0  0  0  0  3   432
VecNorm              259 1.0 3.7003e-04 1.2 1.43e+05 1.1 0.0e+00 0.0e+00 2.6e+02  0  1  0  0 13   0  1  0  0 13   742
VecScale             220 1.0 5.5313e-05 1.2 5.43e+04 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  1861
VecCopy              108 1.0 3.4094e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet               512 1.0 8.6308e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY               88 1.0 2.4319e-05 1.0 6.97e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  5702
VecAYPX              653 1.0 2.3270e-04 1.1 2.18e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  1789
VecAXPBYCZ           312 1.0 1.6689e-04 1.3 3.85e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  2  0  0  0   0  2  0  0  0  4375
VecMAXPY             220 1.0 2.4390e-04 1.1 6.42e+05 1.1 0.0e+00 0.0e+00 0.0e+00  0  3  0  0  0   0  3  0  0  0  4989
VecAssemblyBegin     115 1.0 9.1553e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd       115 1.0 1.0848e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult     110 1.0 5.8174e-05 1.3 2.72e+04 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0   885
VecScatterBegin      983 1.0 3.4928e-04 1.1 0.00e+00 0.0 9.5e+02 5.4e+02 0.0e+00  0  0 69 54  0   0  0 69 54  0     0
VecScatterEnd        983 1.0 1.2131e-03 3.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSetRandom          10 1.0 7.6532e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         220 1.0 4.5967e-04 1.2 1.63e+05 1.1 0.0e+00 0.0e+00 2.2e+02  0  1  0  0 11   0  1  0  0 11   672
BuildTwoSided         10 1.0 9.7036e-05 1.1 0.00e+00 0.0 5.0e+00 4.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
BuildTwoSidedF       110 1.0 6.7568e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPGMRESOrthog       200 1.0 9.0504e-04 1.3 1.09e+06 1.1 0.0e+00 0.0e+00 2.0e+02  0  4  0  0 10   0  4  0  0 10  2273
KSPSetUp              45 1.0 4.7064e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+01  0  0  0  0  1   0  0  0  0  1     0
KSPSolve               5 1.0 5.5117e-02 1.0 2.44e+07 1.1 1.4e+03 6.8e+02 2.0e+03  0100100100 98   0100100100 98   837
PCGAMGGraph_AGG       10 1.0 9.7408e-03 1.0 3.46e+04 1.1 5.0e+01 1.1e+02 2.6e+02  0  0  4  1 13   0  0  4  1 13     7
PCGAMGCoarse_AGG      10 1.0 8.4352e-03 1.0 5.69e+05 1.2 1.6e+02 1.2e+03 1.3e+02  0  2 12 21  6   0  2 12 21  6   125
PCGAMGProl_AGG        10 1.0 2.5477e-03 1.0 0.00e+00 0.0 1.2e+02 4.5e+02 3.1e+02  0  0  9  6 15   0  0  9  6 15     0
PCGAMGPOpt_AGG        10 1.0 7.8416e-03 1.0 2.62e+06 1.1 1.6e+02 6.2e+02 4.7e+02  0 11 12 11 23   0 11 12 11 23   634
GAMG: createProl      10 1.0 2.8838e-02 1.0 3.14e+06 1.1 5.0e+02 7.2e+02 1.2e+03  0 13 36 38 58   0 13 36 38 58   211
  Graph               20 1.0 9.5172e-03 1.0 3.46e+04 1.1 5.0e+01 1.1e+02 2.6e+02  0  0  4  1 13   0  0  4  1 13     7
  MIS/Agg             10 1.0 5.4216e-04 1.0 0.00e+00 0.0 5.5e+01 2.9e+02 1.5e+01  0  0  4  2  1   0  0  4  2  1     0
  SA: col data        10 1.0 1.2393e-03 1.0 0.00e+00 0.0 7.0e+01 6.4e+02 1.7e+02  0  0  5  5  8   0  0  5  5  8     0
  SA: frmProl0        10 1.0 1.0064e-03 1.0 0.00e+00 0.0 5.0e+01 1.8e+02 1.0e+02  0  0  4  1  5   0  0  4  1  5     0
  SA: smooth          10 1.0 7.8394e-03 1.0 2.62e+06 1.1 1.6e+02 6.2e+02 4.7e+02  0 11 12 11 23   0 11 12 11 23   634
GAMG: partLevel       10 1.0 1.0525e-02 1.0 1.89e+06 1.3 1.6e+02 1.3e+03 4.4e+02  0  7 12 22 21   0  7 12 22 21   322
  repartition          5 1.0 5.2214e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+01  0  0  0  0  1   0  0  0  0  1     0
  Invert-Sort          5 1.0 1.1015e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+01  0  0  0  0  1   0  0  0  0  1     0
  Move A               5 1.0 9.7775e-04 1.0 0.00e+00 0.0 2.5e+01 1.9e+03 9.0e+01  0  0  2  5  4   0  0  2  5  4     0
  Move P               5 1.0 1.4322e-03 1.0 0.00e+00 0.0 1.0e+01 8.0e+01 9.0e+01  0  0  1  0  4   0  0  1  0  4     0
PCSetUp               10 1.0 4.1227e-02 1.0 5.03e+06 1.1 6.6e+02 8.5e+02 1.7e+03  0 21 48 60 82   0 21 48 60 82   230
PCSetUpOnBlocks       39 1.0 2.8324e-04 2.0 2.24e+03 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     8
PCApply               39 1.0 1.3067e-02 1.0 1.83e+07 1.1 6.5e+02 5.2e+02 2.1e+02  0 75 47 36 10   0 75 47 36 10  2643
SFSetGraph            10 1.0 2.5988e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          35 1.0 2.0766e-04 1.0 0.00e+00 0.0 5.5e+01 2.9e+02 0.0e+00  0  0  4  2  0   0  0  4  2  0     0
SFBcastEnd            35 1.0 3.2187e-05 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Matrix   318            318      4176740     0.
      Matrix Coarsen    10             10         6360     0.
              Vector   714            714      2149632     0.
      Vector Scatter    71             71        81416     0.
           Index Set   217            217       177272     0.
       Krylov Solver    45             45       635400     0.
      Preconditioner    45             45        43520     0.
         PetscRandom     5              5         3230     0.
              Viewer     1              0            0     0.
Star Forest Bipartite Graph    10             10         8640     0.
========================================================================================================================
Average time to get PetscTime(): 0.
Average time for MPI_Barrier(): 1.90735e-07
Average time for zero size MPI_Send(): 5.00679e-06
#PETSc Option Table entries:
-ksp_monitor
-ksp_type cg
-log_view
-options_left
-pc_gamg_agg_nsmooths 1
-pc_gamg_type agg
-pc_type gamg
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: --with-cc=icc --with-cxx=icpc --with-fc=ifort --download-parmetis --download-superlu_dist --download-openmpi --download-hypre --download-metis --download-mumps --download-scalapack --download-blacs
--with-debugging=0
-----------------------------------------
Libraries compiled on Thu May 12 11:38:33 2016 on ucbvpn-208-160.vpn.berkeley.edu 
Machine characteristics: Darwin-13.4.0-x86_64-i386-64bit
Using PETSc directory: /Users/sg/petsc-3.7.0/
Using PETSc arch: intel
-----------------------------------------

Using C compiler: /Users/sg/petsc-3.7.0/intel/bin/mpicc    -wd1572 -g -O3  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /Users/sg/petsc-3.7.0/intel/bin/mpif90    -g -O3  ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/Users/sg/petsc-3.7.0/intel/include -I/Users/sg/petsc-3.7.0/include -I/Users/sg/petsc-3.7.0/include -I/Users/sg/petsc-3.7.0/intel/include -I/opt/X11/include
-----------------------------------------

Using C linker: /Users/sg/petsc-3.7.0/intel/bin/mpicc
Using Fortran linker: /Users/sg/petsc-3.7.0/intel/bin/mpif90
Using libraries: -Wl,-rpath,/Users/sg/petsc-3.7.0/intel/lib -L/Users/sg/petsc-3.7.0/intel/lib -lpetsc -Wl,-rpath,/Users/sg/petsc-3.7.0/intel/lib -L/Users/sg/petsc-3.7.0/intel/lib -lsuperlu_dist -lcmumps -ldmumps -lsmumps -lzmumps
-lmumps_common -lpord -lparmetis -lmetis -lHYPRE -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib
-L/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -L/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib
-L/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib -limf -lsvml -lirng -lipgo -ldecimal -lirc -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/6.0/lib/darwin
-L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/clang/6.0/lib/darwin -lclang_rt.osx -limf -lsvml -lirng -lipgo -ldecimal -lirc
-Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.0/lib/darwin
-L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.0/lib/darwin -lclang_rt.osx -lscalapack -llapack -lblas -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -lX11 -lssl -lcrypto
-lmpi_usempif08 -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifport -lifcore -limf -lsvml -lipgo -lirc -lpthread -lclang_rt.osx -limf -lsvml -lirng -lipgo -ldecimal -lirc -lclang_rt.osx -limf -lsvml -lirng -lipgo -ldecimal -lirc
-lclang_rt.osx -ldl -Wl,-rpath,/Users/sg/petsc-3.7.0/intel/lib -L/Users/sg/petsc-3.7.0/intel/lib -lmpi -Wl,-rpath,/Users/sg/petsc-3.7.0/intel/lib -L/Users/sg/petsc-3.7.0/intel/lib
-Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib
-Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib -L/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib
-Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -L/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib -L/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib
-Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib
-limf -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -lsvml -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib
-L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -lirng -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -lipgo
-Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -ldecimal -lc++ -lSystem -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib
-L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -lirc -Wl,-rpath,/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.0/lib/darwin
-L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/6.0/lib/darwin -lclang_rt.osx -Wl,-rpath,/Users/sg/petsc-3.7.0/intel/lib -L/Users/sg/petsc-3.7.0/intel/lib
-Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib
-Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib -L/opt/intel/composer_xe_2013_sp1.3.166/ipp/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib
-Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -L/opt/intel/composer_xe_2013_sp1.3.166/mkl/lib -Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib -L/opt/intel/composer_xe_2013_sp1.3.166/tbb/lib
-Wl,-rpath,/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -L/opt/intel/composer_xe_2013_sp1.3.166/compiler/lib -ldl 
-----------------------------------------

#PETSc Option Table entries:
-ksp_monitor
-ksp_type cg
-log_view
-options_left
-pc_gamg_agg_nsmooths 1
-pc_gamg_type agg
-pc_type gamg
#End of PETSc Option Table entries
There are no unused options.

