[petsc-users] Multigrid preconditioning of entire linear systems for discretized coupled multiphysics problems

Fabian Gabel gabel.fabian at gmail.com
Mon Mar 2 18:39:34 CST 2015


On Mon, 2015-03-02 at 16:29 -0700, Jed Brown wrote:
> Fabian Gabel <gabel.fabian at gmail.com> writes:
> 
> > Dear PETSc Team,
> >
> > I came across the following paragraph in your publication "Composable
> > Linear Solvers for Multiphysics" (2012):
> >
> > "Rather than splitting the matrix into large blocks and
> > forming a preconditioner from solvers (for example, multi-
> > grid) on each block, one can perform multigrid on the entire
> > system, basing the smoother on solves coming from the tiny
> > blocks coupling the degrees of freedom at a single point (or
> > small number of points). This approach is also handled in
> > PETSc, but we will not elaborate on it here."
> >
> > How would I use a multigrid preconditioner (GAMG) 
> 
> The heuristics in GAMG are not appropriate for indefinite/saddle-point
> systems such as arise from Navier-Stokes.  You can use geometric
> multigrid and use the fieldsplit techniques described in the paper as a
> smoother, for example.
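
Just to make sure I understand the suggestion: would that amount to
something along the lines of the sketch below? (Rough and untested on my
part, with error checking omitted; nx, ny, nz are placeholders for my
grid dimensions, and it assumes a DMDA can be attached to the KSP so
that PCMG gets the grid hierarchy and PCFIELDSPLIT the field
information.)

  DM  da;
  KSP ksp;
  Mat A;   /* the assembled coupled matrix from my FVM code */
  /* 5 dof per cell: u, v, w, p, T */
  DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,
               DM_BOUNDARY_NONE, DMDA_STENCIL_STAR, nx, ny, nz,
               PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,
               5, 1, NULL, NULL, NULL, &da);
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOptionsPrefix(ksp, "coupledsolve_");
  KSPSetDM(ksp, da);
  KSPSetDMActive(ksp, PETSC_FALSE);   /* I assemble the operator myself ... */
  KSPSetOperators(ksp, A, A);         /* ... from the FVM discretization    */
  KSPSetFromOptions(ksp);
  /* and then, on the command line, something like:
     -coupledsolve_pc_type mg -coupledsolve_pc_mg_levels 4
     -coupledsolve_pc_mg_galerkin
     -coupledsolve_mg_levels_ksp_type richardson
     -coupledsolve_mg_levels_pc_type fieldsplit
     -coupledsolve_mg_levels_pc_fieldsplit_type multiplicative */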

Unfortunately, I don't have a solid background in multigrid methods, but
as mentioned in a previous thread,

http://lists.mcs.anl.gov/pipermail/petsc-users/2015-February/024219.html

AMG has apparently been used (successfully?) for fully coupled
finite-volume discretizations of the Navier-Stokes equations:

http://dx.doi.org/10.1080/10407790.2014.894448
http://dx.doi.org/10.1016/j.jcp.2008.08.027

I was hoping to achieve something similar with the right configuration
of the PETSc preconditioners. So far I have only used GAMG in a
straightforward manner, without providing any information about the
structure of the linear system. I have attached the output of a test
run with GAMG.
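
In case it makes the question more concrete, the only ways I can think
of to expose the structure would be (a) index sets describing the field
blocks, for PCFIELDSPLIT (on its own, or as a smoother inside multigrid
as you suggest), or (b) interlacing the unknowns point by point and
calling MatSetBlockSize(A, 5) before assembly, so that GAMG's
aggregation can keep the per-cell unknowns together (if I read the
manual correctly). A rough, untested sketch of variant (a), with error
checking omitted (n_uvw/n_p/n_T and off_uvw/off_p/off_T are placeholders
for the local sizes and offsets of my field blocks, and ksp is the
coupled solver):

  PC pc;
  IS is_uvw, is_p, is_T;
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCFIELDSPLIT);
  /* contiguous, field-by-field ordering: velocities, then pressure, then T */
  ISCreateStride(PETSC_COMM_WORLD, n_uvw, off_uvw, 1, &is_uvw);
  ISCreateStride(PETSC_COMM_WORLD, n_p,   off_p,   1, &is_p);
  ISCreateStride(PETSC_COMM_WORLD, n_T,   off_T,   1, &is_T);
  PCFieldSplitSetIS(pc, "uvw", is_uvw);
  PCFieldSplitSetIS(pc, "p",   is_p);
  PCFieldSplitSetIS(pc, "T",   is_T);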

> 
> > from PETSc on linear systems of the form (after reordering the
> > variables):
> >
> > [A_uu   0     0   A_up  A_uT]
> > [0    A_vv    0   A_vp  A_vT]
> > [0      0   A_ww  A_wp  A_wT]
> > [A_pu A_pv  A_pw  A_pp   0  ]
> > [A_Tu A_Tv  A_Tw  A_Tp  A_TT]
> >
> > where each of the block matrices A_ij, with i,j in {u,v,w,p,T}, results
> > directly from an FVM discretization of the incompressible Navier-Stokes
> > equations and the temperature equation. The fifth row and column are
> > optional, depending on the method I choose to couple the temperature.
> > The matrix is stored as a single AIJ matrix.
> >
> > Regards,
> > Fabian Gabel

-------------- next part --------------
Sender: LSF System <lsfadmin at hpa0678>
Subject: Job 578677: <coupling_cpld_scalability_openmpi> in cluster <lichtenberg> Done

Job <coupling_cpld_scalability_openmpi> was submitted from host <hla0003> by user <gu08vomo> in cluster <lichtenberg>.
Job was executed on host(s) <16*hpa0678>, in queue <short>, as user <gu08vomo> in cluster <lichtenberg>.
                            <16*hpa0611>
                            <16*hpa0665>
                            <16*hpa0649>
                            <16*hpa0616>
                            <16*hpa0559>
                            <16*hpa0577>
                            <16*hpa0618>
</home/gu08vomo> was used as the home directory.
</work/scratch/gu08vomo/thesis/coupling/128_1024_cpld> was used as the working directory.
Started at Tue Mar  3 00:06:32 2015
Results reported at Tue Mar  3 00:08:14 2015

Your job looked like:

------------------------------------------------------------
# LSBATCH: User input
#! /bin/sh

#BSUB -J coupling_cpld_scalability_openmpi

#BSUB -o /home/gu08vomo/thesis/coupling_128/scalability_openmpi_mpi1/cpld_0128.out.%J

#BSUB -n 0128
#BSUB -W 0:10
#BSUB -x

##BSUB -q test_mpi2

#BSUB -a openmpi

module load openmpi/intel/1.8.2
export PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.2/build/arch-openmpi-opt-intel-hlr
export MYWORKDIR=/work/scratch/gu08vomo/thesis/coupling/128_1024_cpld/128_1024_0128/
export OUTPUTDIR=/home/gu08vomo/thesis/coupling
export PETSC_OPS="-options_file ../ops.gamg"

cat ../ops.gamg

echo "SCALABILITY MEASUREMENT ON MPI1 WITH OPENMPI AND FIELDSPLIT PRECONDITIONING"

echo "PETSC_DIR="$PETSC_DIR
echo "MYWORKDIR="$MYWORKDIR
cd $MYWORKDIR
mpirun -report-bindings -map-by core -bind-to core -n 0128 ./openmpi.caffa3d.cpld.lnx ${PETSC_OPS}



------------------------------------------------------------

Successfully completed.

Resource usage summary:

    CPU time :               8314.24 sec.
    Max Memory :             9883 MB
    Average Memory :         3650.71 MB
    Total Requested Memory : -
    Delta Memory :           -
    (Delta: the difference between total requested memory and actual max usage.)
    Max Swap :               27578 MB

    Max Processes :          55
    Max Threads :            201

The output (if any) follows:

Modules: loading openmpi/intel/1.8.2
cat: ../ops.gamg: No such file or directory
SCALABILITY MEASUREMENT ON MPI1 WITH OPENMPI AND FIELDSPLIT PRECONDITIONING
PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.2/build/arch-openmpi-opt-intel-hlr
MYWORKDIR=/work/scratch/gu08vomo/thesis/coupling/128_1024_cpld/128_1024_0128/
[hpa0678:13880] MCW rank 3 bound to socket 0[core 3[hwt 0]]: [./././B/./././.][./././././././.]
[hpa0678:13880] MCW rank 4 bound to socket 0[core 4[hwt 0]]: [././././B/././.][./././././././.]
[hpa0678:13880] MCW rank 5 bound to socket 0[core 5[hwt 0]]: [./././././B/./.][./././././././.]
[hpa0678:13880] MCW rank 6 bound to socket 0[core 6[hwt 0]]: [././././././B/.][./././././././.]
[hpa0678:13880] MCW rank 7 bound to socket 0[core 7[hwt 0]]: [./././././././B][./././././././.]
[hpa0678:13880] MCW rank 8 bound to socket 1[core 8[hwt 0]]: [./././././././.][B/././././././.]
[hpa0678:13880] MCW rank 9 bound to socket 1[core 9[hwt 0]]: [./././././././.][./B/./././././.]
[hpa0678:13880] MCW rank 10 bound to socket 1[core 10[hwt 0]]: [./././././././.][././B/././././.]
[hpa0678:13880] MCW rank 11 bound to socket 1[core 11[hwt 0]]: [./././././././.][./././B/./././.]
[hpa0678:13880] MCW rank 12 bound to socket 1[core 12[hwt 0]]: [./././././././.][././././B/././.]
[hpa0678:13880] MCW rank 13 bound to socket 1[core 13[hwt 0]]: [./././././././.][./././././B/./.]
[hpa0678:13880] MCW rank 14 bound to socket 1[core 14[hwt 0]]: [./././././././.][././././././B/.]
[hpa0678:13880] MCW rank 15 bound to socket 1[core 15[hwt 0]]: [./././././././.][./././././././B]
[hpa0678:13880] MCW rank 0 bound to socket 0[core 0[hwt 0]]: [B/././././././.][./././././././.]
[hpa0678:13880] MCW rank 1 bound to socket 0[core 1[hwt 0]]: [./B/./././././.][./././././././.]
[hpa0678:13880] MCW rank 2 bound to socket 0[core 2[hwt 0]]: [././B/././././.][./././././././.]
[hpa0611:03178] MCW rank 23 bound to socket 0[core 7[hwt 0]]: [./././././././B][./././././././.]
[hpa0611:03178] MCW rank 24 bound to socket 1[core 8[hwt 0]]: [./././././././.][B/././././././.]
[hpa0611:03178] MCW rank 25 bound to socket 1[core 9[hwt 0]]: [./././././././.][./B/./././././.]
[hpa0611:03178] MCW rank 26 bound to socket 1[core 10[hwt 0]]: [./././././././.][././B/././././.]
[hpa0611:03178] MCW rank 27 bound to socket 1[core 11[hwt 0]]: [./././././././.][./././B/./././.]
[hpa0611:03178] MCW rank 28 bound to socket 1[core 12[hwt 0]]: [./././././././.][././././B/././.]
[hpa0611:03178] MCW rank 29 bound to socket 1[core 13[hwt 0]]: [./././././././.][./././././B/./.]
[hpa0611:03178] MCW rank 30 bound to socket 1[core 14[hwt 0]]: [./././././././.][././././././B/.]
[hpa0611:03178] MCW rank 31 bound to socket 1[core 15[hwt 0]]: [./././././././.][./././././././B]
[hpa0611:03178] MCW rank 16 bound to socket 0[core 0[hwt 0]]: [B/././././././.][./././././././.]
[hpa0611:03178] MCW rank 17 bound to socket 0[core 1[hwt 0]]: [./B/./././././.][./././././././.]
[hpa0611:03178] MCW rank 18 bound to socket 0[core 2[hwt 0]]: [././B/././././.][./././././././.]
[hpa0611:03178] MCW rank 19 bound to socket 0[core 3[hwt 0]]: [./././B/./././.][./././././././.]
[hpa0611:03178] MCW rank 20 bound to socket 0[core 4[hwt 0]]: [././././B/././.][./././././././.]
[hpa0611:03178] MCW rank 21 bound to socket 0[core 5[hwt 0]]: [./././././B/./.][./././././././.]
[hpa0611:03178] MCW rank 22 bound to socket 0[core 6[hwt 0]]: [././././././B/.][./././././././.]
[hpa0616:32359] MCW rank 72 bound to socket 1[core 8[hwt 0]]: [./././././././.][B/././././././.]
[hpa0616:32359] MCW rank 73 bound to socket 1[core 9[hwt 0]]: [./././././././.][./B/./././././.]
[hpa0616:32359] MCW rank 74 bound to socket 1[core 10[hwt 0]]: [./././././././.][././B/././././.]
[hpa0616:32359] MCW rank 75 bound to socket 1[core 11[hwt 0]]: [./././././././.][./././B/./././.]
[hpa0616:32359] MCW rank 76 bound to socket 1[core 12[hwt 0]]: [./././././././.][././././B/././.]
[hpa0616:32359] MCW rank 77 bound to socket 1[core 13[hwt 0]]: [./././././././.][./././././B/./.]
[hpa0616:32359] MCW rank 78 bound to socket 1[core 14[hwt 0]]: [./././././././.][././././././B/.]
[hpa0616:32359] MCW rank 79 bound to socket 1[core 15[hwt 0]]: [./././././././.][./././././././B]
[hpa0616:32359] MCW rank 64 bound to socket 0[core 0[hwt 0]]: [B/././././././.][./././././././.]
[hpa0616:32359] MCW rank 65 bound to socket 0[core 1[hwt 0]]: [./B/./././././.][./././././././.]
[hpa0616:32359] MCW rank 66 bound to socket 0[core 2[hwt 0]]: [././B/././././.][./././././././.]
[hpa0616:32359] MCW rank 67 bound to socket 0[core 3[hwt 0]]: [./././B/./././.][./././././././.]
[hpa0616:32359] MCW rank 68 bound to socket 0[core 4[hwt 0]]: [././././B/././.][./././././././.]
[hpa0616:32359] MCW rank 69 bound to socket 0[core 5[hwt 0]]: [./././././B/./.][./././././././.]
[hpa0616:32359] MCW rank 70 bound to socket 0[core 6[hwt 0]]: [././././././B/.][./././././././.]
[hpa0616:32359] MCW rank 71 bound to socket 0[core 7[hwt 0]]: [./././././././B][./././././././.]
[hpa0665:23912] MCW rank 46 bound to socket 1[core 14[hwt 0]]: [./././././././.][././././././B/.]
[hpa0665:23912] MCW rank 47 bound to socket 1[core 15[hwt 0]]: [./././././././.][./././././././B]
[hpa0665:23912] MCW rank 32 bound to socket 0[core 0[hwt 0]]: [B/././././././.][./././././././.]
[hpa0665:23912] MCW rank 33 bound to socket 0[core 1[hwt 0]]: [./B/./././././.][./././././././.]
[hpa0665:23912] MCW rank 34 bound to socket 0[core 2[hwt 0]]: [././B/././././.][./././././././.]
[hpa0665:23912] MCW rank 35 bound to socket 0[core 3[hwt 0]]: [./././B/./././.][./././././././.]
[hpa0665:23912] MCW rank 36 bound to socket 0[core 4[hwt 0]]: [././././B/././.][./././././././.]
[hpa0665:23912] MCW rank 37 bound to socket 0[core 5[hwt 0]]: [./././././B/./.][./././././././.]
[hpa0665:23912] MCW rank 38 bound to socket 0[core 6[hwt 0]]: [././././././B/.][./././././././.]
[hpa0665:23912] MCW rank 39 bound to socket 0[core 7[hwt 0]]: [./././././././B][./././././././.]
[hpa0665:23912] MCW rank 40 bound to socket 1[core 8[hwt 0]]: [./././././././.][B/././././././.]
[hpa0665:23912] MCW rank 41 bound to socket 1[core 9[hwt 0]]: [./././././././.][./B/./././././.]
[hpa0665:23912] MCW rank 42 bound to socket 1[core 10[hwt 0]]: [./././././././.][././B/././././.]
[hpa0665:23912] MCW rank 43 bound to socket 1[core 11[hwt 0]]: [./././././././.][./././B/./././.]
[hpa0665:23912] MCW rank 44 bound to socket 1[core 12[hwt 0]]: [./././././././.][././././B/././.]
[hpa0665:23912] MCW rank 45 bound to socket 1[core 13[hwt 0]]: [./././././././.][./././././B/./.]
[hpa0559:03858] MCW rank 87 bound to socket 0[core 7[hwt 0]]: [./././././././B][./././././././.]
[hpa0559:03858] MCW rank 88 bound to socket 1[core 8[hwt 0]]: [./././././././.][B/././././././.]
[hpa0559:03858] MCW rank 89 bound to socket 1[core 9[hwt 0]]: [./././././././.][./B/./././././.]
[hpa0559:03858] MCW rank 90 bound to socket 1[core 10[hwt 0]]: [./././././././.][././B/././././.]
[hpa0559:03858] MCW rank 91 bound to socket 1[core 11[hwt 0]]: [./././././././.][./././B/./././.]
[hpa0559:03858] MCW rank 92 bound to socket 1[core 12[hwt 0]]: [./././././././.][././././B/././.]
[hpa0559:03858] MCW rank 93 bound to socket 1[core 13[hwt 0]]: [./././././././.][./././././B/./.]
[hpa0559:03858] MCW rank 94 bound to socket 1[core 14[hwt 0]]: [./././././././.][././././././B/.]
[hpa0559:03858] MCW rank 95 bound to socket 1[core 15[hwt 0]]: [./././././././.][./././././././B]
[hpa0559:03858] MCW rank 80 bound to socket 0[core 0[hwt 0]]: [B/././././././.][./././././././.]
[hpa0559:03858] MCW rank 81 bound to socket 0[core 1[hwt 0]]: [./B/./././././.][./././././././.]
[hpa0559:03858] MCW rank 82 bound to socket 0[core 2[hwt 0]]: [././B/././././.][./././././././.]
[hpa0559:03858] MCW rank 83 bound to socket 0[core 3[hwt 0]]: [./././B/./././.][./././././././.]
[hpa0559:03858] MCW rank 84 bound to socket 0[core 4[hwt 0]]: [././././B/././.][./././././././.]
[hpa0559:03858] MCW rank 85 bound to socket 0[core 5[hwt 0]]: [./././././B/./.][./././././././.]
[hpa0559:03858] MCW rank 86 bound to socket 0[core 6[hwt 0]]: [././././././B/.][./././././././.]
[hpa0649:21015] MCW rank 63 bound to socket 1[core 15[hwt 0]]: [./././././././.][./././././././B]
[hpa0649:21015] MCW rank 48 bound to socket 0[core 0[hwt 0]]: [B/././././././.][./././././././.]
[hpa0649:21015] MCW rank 49 bound to socket 0[core 1[hwt 0]]: [./B/./././././.][./././././././.]
[hpa0649:21015] MCW rank 50 bound to socket 0[core 2[hwt 0]]: [././B/././././.][./././././././.]
[hpa0649:21015] MCW rank 51 bound to socket 0[core 3[hwt 0]]: [./././B/./././.][./././././././.]
[hpa0649:21015] MCW rank 52 bound to socket 0[core 4[hwt 0]]: [././././B/././.][./././././././.]
[hpa0649:21015] MCW rank 53 bound to socket 0[core 5[hwt 0]]: [./././././B/./.][./././././././.]
[hpa0649:21015] MCW rank 54 bound to socket 0[core 6[hwt 0]]: [././././././B/.][./././././././.]
[hpa0649:21015] MCW rank 55 bound to socket 0[core 7[hwt 0]]: [./././././././B][./././././././.]
[hpa0649:21015] MCW rank 56 bound to socket 1[core 8[hwt 0]]: [./././././././.][B/././././././.]
[hpa0649:21015] MCW rank 57 bound to socket 1[core 9[hwt 0]]: [./././././././.][./B/./././././.]
[hpa0649:21015] MCW rank 58 bound to socket 1[core 10[hwt 0]]: [./././././././.][././B/././././.]
[hpa0649:21015] MCW rank 59 bound to socket 1[core 11[hwt 0]]: [./././././././.][./././B/./././.]
[hpa0649:21015] MCW rank 60 bound to socket 1[core 12[hwt 0]]: [./././././././.][././././B/././.]
[hpa0649:21015] MCW rank 61 bound to socket 1[core 13[hwt 0]]: [./././././././.][./././././B/./.]
[hpa0649:21015] MCW rank 62 bound to socket 1[core 14[hwt 0]]: [./././././././.][././././././B/.]
[hpa0577:03197] MCW rank 103 bound to socket 0[core 7[hwt 0]]: [./././././././B][./././././././.]
[hpa0577:03197] MCW rank 104 bound to socket 1[core 8[hwt 0]]: [./././././././.][B/././././././.]
[hpa0577:03197] MCW rank 105 bound to socket 1[core 9[hwt 0]]: [./././././././.][./B/./././././.]
[hpa0577:03197] MCW rank 106 bound to socket 1[core 10[hwt 0]]: [./././././././.][././B/././././.]
[hpa0577:03197] MCW rank 107 bound to socket 1[core 11[hwt 0]]: [./././././././.][./././B/./././.]
[hpa0577:03197] MCW rank 108 bound to socket 1[core 12[hwt 0]]: [./././././././.][././././B/././.]
[hpa0577:03197] MCW rank 109 bound to socket 1[core 13[hwt 0]]: [./././././././.][./././././B/./.]
[hpa0577:03197] MCW rank 110 bound to socket 1[core 14[hwt 0]]: [./././././././.][././././././B/.]
[hpa0577:03197] MCW rank 111 bound to socket 1[core 15[hwt 0]]: [./././././././.][./././././././B]
[hpa0577:03197] MCW rank 96 bound to socket 0[core 0[hwt 0]]: [B/././././././.][./././././././.]
[hpa0577:03197] MCW rank 97 bound to socket 0[core 1[hwt 0]]: [./B/./././././.][./././././././.]
[hpa0577:03197] MCW rank 98 bound to socket 0[core 2[hwt 0]]: [././B/././././.][./././././././.]
[hpa0577:03197] MCW rank 99 bound to socket 0[core 3[hwt 0]]: [./././B/./././.][./././././././.]
[hpa0577:03197] MCW rank 100 bound to socket 0[core 4[hwt 0]]: [././././B/././.][./././././././.]
[hpa0577:03197] MCW rank 101 bound to socket 0[core 5[hwt 0]]: [./././././B/./.][./././././././.]
[hpa0577:03197] MCW rank 102 bound to socket 0[core 6[hwt 0]]: [././././././B/.][./././././././.]
[hpa0618:15697] MCW rank 119 bound to socket 0[core 7[hwt 0]]: [./././././././B][./././././././.]
[hpa0618:15697] MCW rank 120 bound to socket 1[core 8[hwt 0]]: [./././././././.][B/././././././.]
[hpa0618:15697] MCW rank 121 bound to socket 1[core 9[hwt 0]]: [./././././././.][./B/./././././.]
[hpa0618:15697] MCW rank 122 bound to socket 1[core 10[hwt 0]]: [./././././././.][././B/././././.]
[hpa0618:15697] MCW rank 123 bound to socket 1[core 11[hwt 0]]: [./././././././.][./././B/./././.]
[hpa0618:15697] MCW rank 124 bound to socket 1[core 12[hwt 0]]: [./././././././.][././././B/././.]
[hpa0618:15697] MCW rank 125 bound to socket 1[core 13[hwt 0]]: [./././././././.][./././././B/./.]
[hpa0618:15697] MCW rank 126 bound to socket 1[core 14[hwt 0]]: [./././././././.][././././././B/.]
[hpa0618:15697] MCW rank 127 bound to socket 1[core 15[hwt 0]]: [./././././././.][./././././././B]
[hpa0618:15697] MCW rank 112 bound to socket 0[core 0[hwt 0]]: [B/././././././.][./././././././.]
[hpa0618:15697] MCW rank 113 bound to socket 0[core 1[hwt 0]]: [./B/./././././.][./././././././.]
[hpa0618:15697] MCW rank 114 bound to socket 0[core 2[hwt 0]]: [././B/././././.][./././././././.]
[hpa0618:15697] MCW rank 115 bound to socket 0[core 3[hwt 0]]: [./././B/./././.][./././././././.]
[hpa0618:15697] MCW rank 116 bound to socket 0[core 4[hwt 0]]: [././././B/././.][./././././././.]
[hpa0618:15697] MCW rank 117 bound to socket 0[core 5[hwt 0]]: [./././././B/./.][./././././././.]
[hpa0618:15697] MCW rank 118 bound to socket 0[core 6[hwt 0]]: [././././././B/.][./././././././.]
  ENTER PROBLEM NAME (SIX CHARACTERS):  
 ****************************************************
 NAME OF PROBLEM SOLVED control
 
 ****************************************************
 ***************************************************
 CONTROL SETTINGS
 ***************************************************
 LREAD,LWRITE,LPOST,LTEST,LOUTS,LOUTE,LTIME,LGRAD
 F F T F F F F F
  IMON, JMON, KMON, MMON, RMON,  IPR,  JPR,  KPR,  MPR,NPCOR,NIGRAD 
     2     2     2     1     0     2     2     3     1     1     1
  SORMAX,     SLARGE,     ALFA
  0.1000E-07  0.1000E+31  0.9200E+00
 (URF(I),I=1,6)
  0.1000E+01  0.1000E+01  0.1000E+01  0.1000E+01  0.1000E+01  0.1000E+01
 (SOR(I),I=1,6)
  0.1000E+00  0.1000E+00  0.1000E+00  0.1000E+00  0.1000E+00  0.1000E+00
 (GDS(I),I=1,6) - BLENDING (CDS-UDS)
  0.1000E+01  0.1000E+01  0.1000E+01  0.1000E+01  0.7000E+00  0.0000E+00
 LSG
100000
 ***************************************************
 START COUPLED ALGORITHM
 ***************************************************
Linear solve converged due to CONVERGED_ATOL iterations 6
KSP Object:(coupledsolve_) 128 MPI processes
  type: gmres
    GMRES: restart=100, using Modified Gram-Schmidt Orthogonalization
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-90, absolute=1.10423, divergence=10000
  right preconditioning
  using UNPRECONDITIONED norm type for convergence test
PC Object:(coupledsolve_) 128 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=4 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (coupledsolve_mg_coarse_)     128 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (coupledsolve_mg_coarse_)     128 MPI processes
      type: bjacobi
        block Jacobi: number of blocks = 128
        Local solve is same for all blocks, in the following KSP and PC objects:
      KSP Object:      (coupledsolve_mg_coarse_sub_)       1 MPI processes
        type: preonly
        maximum iterations=1, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
        left preconditioning
        using NONE norm type for convergence test
      PC Object:      (coupledsolve_mg_coarse_sub_)       1 MPI processes
        type: sor
          SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
        linear system matrix = precond matrix:
        Mat Object:         1 MPI processes
          type: seqaij
          rows=27, cols=27
          total: nonzeros=653, allocated nonzeros=653
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
      linear system matrix = precond matrix:
      Mat Object:       128 MPI processes
        type: mpiaij
        rows=27, cols=27
        total: nonzeros=653, allocated nonzeros=653
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (coupledsolve_mg_levels_1_)     128 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (coupledsolve_mg_levels_1_)     128 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       128 MPI processes
        type: mpiaij
        rows=2918, cols=2918
        total: nonzeros=221560, allocated nonzeros=221560
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (coupledsolve_mg_levels_2_)     128 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (coupledsolve_mg_levels_2_)     128 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       128 MPI processes
        type: mpiaij
        rows=488741, cols=488741
        total: nonzeros=4.42626e+07, allocated nonzeros=4.42626e+07
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object:    (coupledsolve_mg_levels_3_)     128 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=2
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      has attached null space
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (coupledsolve_mg_levels_3_)     128 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Mat Object:       128 MPI processes
        type: mpiaij
        rows=13271040, cols=13271040
        total: nonzeros=1.507e+08, allocated nonzeros=1.507e+08
        total number of mallocs used during MatSetValues calls =0
          has attached null space
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Mat Object:   128 MPI processes
    type: mpiaij
    rows=13271040, cols=13271040
    total: nonzeros=1.507e+08, allocated nonzeros=1.507e+08
    total number of mallocs used during MatSetValues calls =0
      has attached null space
      not using I-node (on process 0) routines
 0000001  0.1000E+01  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 9
 0000002  0.2440E+00  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 14
 0000003  0.4575E-01  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 19
 0000004  0.1901E-01  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 41
 0000005  0.4321E-02  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 30
 0000006  0.1885E-02  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 41
 0000007  0.4674E-03  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 20
 0000008  0.2057E-03  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 43
 0000009  0.5536E-04  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 37
 0000010  0.2371E-04  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 43
 0000011  0.7186E-05  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 36
 0000012  0.2985E-05  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 44
 0000013  0.1013E-05  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 31
 0000014  0.4192E-06  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 38
 0000015  0.1592E-06  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 25
 0000016  0.6564E-07  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 39
 0000017  0.2712E-07  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 36
 0000018  0.1140E-07  0.0000E+00
Linear solve converged due to CONVERGED_ATOL iterations 39
 0000019  0.4980E-08  0.0000E+00
TIME FOR CALCULATION:  0.5863E+02
 L2-NORM ERROR U    VELOCITY  2.803885678347621E-005
 L2-NORM ERROR V    VELOCITY  2.790913623092557E-005
 L2-NORM ERROR W    VELOCITY  2.917203293110774E-005
 L2-NORM ERROR ABS. VELOCITY  3.168713181612872E-005
 L2-NORM ERROR      PRESSURE  1.392940412005762E-003
      *** CALCULATION FINISHED - SEE RESULTS ***
************************************************************************************************************************
***             WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document            ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./openmpi.caffa3d.cpld.lnx on a arch-openmpi-opt-intel-hlr-ext named hpa0678 with 128 processors, by gu08vomo Tue Mar  3 00:08:13 2015
Using Petsc Release Version 3.5.3, Jan, 31, 2015 

                         Max       Max/Min        Avg      Total 
Time (sec):           9.008e+01      1.00025   9.007e+01
Objects:              3.215e+03      1.00000   3.215e+03
Flops:                2.373e+10      1.02696   2.361e+10  3.022e+12
Flops/sec:            2.634e+08      1.02671   2.621e+08  3.355e+10
MPI Messages:         1.232e+05      6.18545   6.987e+04  8.943e+06
MPI Message Lengths:  9.556e+08      2.06082   1.237e+04  1.107e+11
MPI Reductions:       1.491e+04      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 3.3561e+01  37.3%  3.9813e+07   0.0%  2.149e+05   2.4%  5.702e+02        4.6%  2.590e+02   1.7% 
 1:        CPLD_SOL: 5.6514e+01  62.7%  3.0223e+12 100.0%  8.728e+06  97.6%  1.180e+04       95.4%  1.465e+04  98.3% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

ThreadCommRunKer      81 1.0 2.2925e+00221.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   1  0  0  0  0     0
VecNorm                1 1.0 4.6450e-01 1.0 2.07e+05 1.0 0.0e+00 0.0e+00 1.0e+00  1  0  0  0  0   1 67  0  0  0    57
VecScale               1 1.0 9.2983e-05 1.5 1.04e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0 33  0  0  0 142725
VecSet               676 1.0 3.8981e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecScatterBegin      736 1.0 2.0966e-02 3.6 0.00e+00 0.0 1.8e+05 1.8e+04 0.0e+00  0  0  2  3  0   0  0 84 63  0     0
VecScatterEnd        736 1.0 1.3496e+01105.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  4  0  0  0  0  10  0  0  0  0     0
VecNormalize           1 1.0 9.4175e-05 1.5 1.04e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0 33  0  0  0 140918
MatAssemblyBegin      38 1.0 1.6660e-01 2.5 0.00e+00 0.0 2.6e+04 4.1e+04 7.6e+01  0  0  0  1  1   0  0 12 21 29     0
MatAssemblyEnd        38 1.0 1.4369e-01 1.1 0.00e+00 0.0 9.3e+02 1.2e+04 8.3e+01  0  0  0  0  1   0  0  0  0 32     0
MatZeroEntries        19 1.0 8.4610e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph            19 1.0 1.5828e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFReduceBegin         19 1.0 1.5753e-01 1.7 0.00e+00 0.0 6.1e+03 1.2e+05 0.0e+00  0  0  0  1  0   0  0  3 15  0     0
SFReduceEnd           19 1.0 1.3356e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0

--- Event Stage 1: CPLD_SOL

VecDot             10797 1.0 3.5414e+00 1.1 2.24e+09 1.0 0.0e+00 0.0e+00 1.1e+04  4  9  0  0 72   6  9  0  0 74 80921
VecMDot             2481 1.0 4.0228e+00 1.8 5.20e+08 1.0 0.0e+00 0.0e+00 2.5e+03  3  2  0  0 17   5  2  0  0 17 16548
VecNorm              680 1.0 1.2228e-01 1.4 1.37e+08 1.0 0.0e+00 0.0e+00 6.8e+02  0  1  0  0  5   0  1  0  0  5 142909
VecScale            4303 1.0 4.1210e-02 1.1 8.04e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 243285
VecCopy              689 1.0 2.5491e-01 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet              7391 1.0 1.2242e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY            13296 1.0 1.3723e+00 1.1 2.76e+09 1.0 0.0e+00 0.0e+00 0.0e+00  1 12  0  0  0   2 12  0  0  0 257131
VecAYPX             3660 1.0 8.1738e-01 1.2 2.55e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1  1  0  0  0   1  1  0  0  0 39983
VecMAXPY            2503 1.0 4.0565e-01 1.2 6.45e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  3  0  0  0   1  3  0  0  0 203451
VecAssemblyBegin      23 1.0 1.4377e-02 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 6.3e+01  0  0  0  0  0   0  0  0  0  0     0
VecAssemblyEnd        23 1.0 3.2425e-05 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecPointwiseMult      33 1.0 5.8870e-03 1.2 1.18e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 25716
VecScatterBegin    11664 1.0 5.3135e-01 2.3 0.00e+00 0.0 8.5e+06 1.1e+04 0.0e+00  0  0 95 86  0   1  0 97 90  0     0
VecScatterEnd      11664 1.0 2.9502e+00 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  3  0  0  0  0   4  0  0  0  0     0
VecSetRandom           3 1.0 2.2552e-03 1.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecNormalize         643 1.0 1.4068e-01 1.3 1.93e+08 1.0 0.0e+00 0.0e+00 6.4e+02  0  1  0  0  4   0  1  0  0  4 175859
MatMult             4318 1.0 1.3883e+01 1.1 7.40e+09 1.0 2.6e+06 2.9e+04 0.0e+00 15 31 29 69  0  24 31 30 72  0 67832
MatMultAdd          1830 1.0 2.2636e+00 1.7 6.18e+08 1.1 1.1e+06 2.0e+03 0.0e+00  2  3 13  2  0   3  3 13  2  0 34556
MatMultTranspose    1830 1.0 1.7695e+00 1.5 6.18e+08 1.1 1.1e+06 2.0e+03 0.0e+00  2  3 13  2  0   2  3 13  2  0 44205
MatSOR              5490 1.0 2.0926e+01 1.1 7.55e+09 1.0 3.6e+06 4.0e+03 0.0e+00 23 32 40 13  0  36 32 41 13  0 45947
MatConvert             3 1.0 1.4187e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatScale               9 1.0 1.7555e-02 1.2 4.09e+06 1.1 2.4e+03 1.3e+04 0.0e+00  0  0  0  0  0   0  0  0  0  0 29542
MatResidual         1830 1.0 3.5397e+00 1.2 1.88e+09 1.1 1.5e+06 1.3e+04 0.0e+00  4  8 16 17  0   6  8 17 18  0 67274
MatAssemblyBegin     113 1.0 5.5142e-01 4.3 0.00e+00 0.0 5.5e+03 2.8e+03 1.6e+02  0  0  0  0  1   0  0  0  0  1     0
MatAssemblyEnd       113 1.0 1.4980e+00 1.0 0.00e+00 0.0 4.0e+04 1.5e+03 1.8e+02  2  0  0  0  1   3  0  0  0  1     0
MatGetRow         430720 1.0 4.4299e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatCoarsen             3 1.0 1.1389e-01 1.0 0.00e+00 0.0 9.1e+04 4.4e+03 4.3e+01  0  0  1  0  0   0  0  1  0  0     0
MatView                6 1.2 1.6303e+001777.9 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00  1  0  0  0  0   2  0  0  0  0     0
MatAXPY                3 1.0 3.0613e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatMatMult             3 1.0 1.7952e-01 1.0 3.00e+06 1.1 1.5e+04 6.2e+03 4.8e+01  0  0  0  0  0   0  0  0  0  0  2120
MatMatMultSym          3 1.0 1.4599e-01 1.0 0.00e+00 0.0 1.3e+04 4.9e+03 4.2e+01  0  0  0  0  0   0  0  0  0  0     0
MatMatMultNum          3 1.0 3.3665e-02 1.0 3.00e+06 1.1 2.4e+03 1.3e+04 6.0e+00  0  0  0  0  0   0  0  0  0  0 11306
MatPtAP               57 1.0 5.3013e+00 1.0 8.50e+08 1.2 1.2e+05 6.1e+04 2.0e+02  6  4  1  7  1   9  4  1  7  1 20055
MatPtAPSymbolic        6 1.0 3.7654e-01 1.0 0.00e+00 0.0 2.8e+04 2.9e+04 4.2e+01  0  0  0  1  0   1  0  0  1  0     0
MatPtAPNumeric        57 1.0 4.9250e+00 1.0 8.50e+08 1.2 9.0e+04 7.1e+04 1.6e+02  5  4  1  6  1   9  4  1  6  1 21587
MatTrnMatMult          3 1.0 1.8678e+00 1.0 6.86e+07 1.3 1.9e+04 1.5e+05 5.7e+01  2  0  0  3  0   3  0  0  3  0  4509
MatTrnMatMultSym       3 1.0 1.1316e+00 1.0 0.00e+00 0.0 1.7e+04 6.0e+04 5.1e+01  1  0  0  1  0   2  0  0  1  0     0
MatTrnMatMultNum       3 1.0 7.3659e-01 1.0 6.86e+07 1.3 2.4e+03 7.7e+05 6.0e+00  1  0  0  2  0   1  0  0  2  0 11432
MatGetLocalMat        69 1.0 8.1363e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetBrAoCol         63 1.0 1.6193e-01 2.3 0.00e+00 0.0 6.5e+04 6.9e+04 0.0e+00  0  0  1  4  0   0  0  1  4  0     0
MatGetSymTrans        12 1.0 6.9423e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFSetGraph             3 1.0 3.2842e-03 4.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
SFBcastBegin          49 1.0 2.1371e-02 1.5 0.00e+00 0.0 9.1e+04 4.4e+03 0.0e+00  0  0  1  0  0   0  0  1  0  0     0
SFBcastEnd            49 1.0 3.3243e-02 3.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPGMRESOrthog       621 1.0 4.4305e+00 1.1 4.50e+09 1.0 0.0e+00 0.0e+00 1.1e+04  5 19  0  0 73   7 19  0  0 74 130047
KSPSetUp             136 1.0 6.1779e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 8.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve              19 1.0 5.4701e+01 1.0 2.37e+10 1.0 8.7e+06 1.2e+04 1.5e+04 61100 97 95 98  97100100100100 55133
PCGAMGgraph_AGG        3 1.0 1.8440e-01 1.0 3.08e+06 1.1 7.2e+03 6.4e+03 4.2e+01  0  0  0  0  0   0  0  0  0  0  2117
PCGAMGcoarse_AGG       3 1.0 2.0266e+00 1.0 6.86e+07 1.3 1.3e+05 2.7e+04 1.5e+02  2  0  1  3  1   4  0  1  3  1  4155
PCGAMGProl_AGG         3 1.0 1.1422e-01 1.0 0.00e+00 0.0 1.9e+04 8.0e+03 7.2e+01  0  0  0  0  0   0  0  0  0  0     0
PCGAMGPOpt_AGG         3 1.0 3.6229e-01 1.0 6.91e+07 1.0 3.9e+04 1.0e+04 1.6e+02  0  0  0  0  1   1  0  0  0  1 24276
PCSetUp               38 1.0 8.7040e+00 1.0 9.91e+08 1.1 3.2e+05 3.6e+04 7.4e+02 10  4  4 10  5  15  4  4 11  5 14238
PCSetUpOnBlocks      610 1.0 4.7803e-04 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
PCApply              610 1.0 3.8941e+01 1.0 1.65e+10 1.0 8.1e+06 9.8e+03 2.4e+03 43 70 91 72 16  69 70 93 76 17 53976

--- Event Stage 2: Unknown

------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Vector   111            245     83646984     0
      Vector Scatter     5             10        10840     0
           Index Set    14             12       181536     0
   IS L to G Mapping     4              3      1045500     0
              Matrix     6             34     36248404     0
Star Forest Bipartite Graph    19             19        16568     0
       Krylov Solver     0              6       503176     0
      Preconditioner     0              6         6276     0

--- Event Stage 1: CPLD_SOL

              Vector  2806           2669     65491832     0
      Vector Scatter    27             21        22940     0
           Index Set    67             65       295500     0
              Matrix   106             78    133576640     0
      Matrix Coarsen     3              3         1932     0
   Matrix Null Space    19              0            0     0
Star Forest Bipartite Graph     3              3         2616     0
       Krylov Solver    10              4        91816     0
      Preconditioner    10              4         3792     0
         PetscRandom     3              3         1920     0
              Viewer     2              1          752     0

--- Event Stage 2: Unknown

========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 6.51836e-05
Average time for zero size MPI_Send(): 7.94418e-06
#PETSc Option Table entries:
-coupledsolve_ksp_converged_reason
-coupledsolve_ksp_gmres_modifiedgramschmidt
-coupledsolve_ksp_gmres_restart 100
-coupledsolve_ksp_norm_type unpreconditioned
-coupledsolve_ksp_type gmres
-coupledsolve_mg_coarse_sub_pc_type sor
-coupledsolve_mg_levels_ksp_rtol 1e-5
-coupledsolve_mg_levels_ksp_type richardson
-coupledsolve_pc_gamg_reuse_interpolation true
-coupledsolve_pc_type gamg
-log_summary
-on_error_abort
-options_left
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8 sizeof(PetscInt) 4
Configure options: PETSC_ARCH=arch-openmpi-opt-intel-hlr-ext PETSC_DIR=/home/gu08vomo/soft/petsc/3.5.3 -prefix=/home/gu08vomo/soft/petsc/3.5.3/build/arch-openmpi-opt-intel-hlr-ext --with-blas-lapack-dir=/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64/ --with-mpi-dir=/shared/apps/openmpi/1.8.2_intel COPTFLAGS="-O3 -xHost" FOPTFLAGS="-O3 -xHost" CXXOPTFLAGS="-O3 -xHost" --with-debugging=0 --download-hypre --download-ml
-----------------------------------------
Libraries compiled on Sun Feb  1 16:09:22 2015 on hla0003 
Machine characteristics: Linux-3.0.101-0.40-default-x86_64-with-SuSE-11-x86_64
Using PETSc directory: /home/gu08vomo/soft/petsc/3.5.3
Using PETSc arch: arch-openmpi-opt-intel-hlr-ext
-----------------------------------------

Using C compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpicc  -fPIC -wd1572 -O3 -xHost  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /shared/apps/openmpi/1.8.2_intel/bin/mpif90  -fPIC -O3 -xHost   ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/include -I/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/include -I/shared/apps/openmpi/1.8.2_intel/include
-----------------------------------------

Using C linker: /shared/apps/openmpi/1.8.2_intel/bin/mpicc
Using Fortran linker: /shared/apps/openmpi/1.8.2_intel/bin/mpif90
Using libraries: -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lpetsc -Wl,-rpath,/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -L/home/gu08vomo/soft/petsc/3.5.3/arch-openmpi-opt-intel-hlr-ext/lib -lHYPRE -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -lmpi_cxx -lml -lmpi_cxx -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm -lX11 -lpthread -lssl -lcrypto -lmpi_usempi_ignore_tkr -lmpi_mpifh -lifport -lifcore -lm -lmpi_cxx -ldl -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -lmpi -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -limf -lsvml -lirng -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lpthread -lirc_s 
-Wl,-rpath,/shared/apps/openmpi/1.8.2_intel/lib -L/shared/apps/openmpi/1.8.2_intel/lib -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -L/shared/apps/gcc/4.8.3/lib/gcc/x86_64-unknown-linux-gnu/4.8.3 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib64 -L/shared/apps/gcc/4.8.3/lib64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/compiler/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/ipp/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -L/shared/apps/intel/2015/composer_xe_2015.0.090/mkl/lib/intel64 -Wl,-rpath,/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -L/shared/apps/intel/2015/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.4 -Wl,-rpath,/shared/apps/gcc/4.8.3/lib -L/shared/apps/gcc/4.8.3/lib -ldl  
-----------------------------------------

#PETSc Option Table entries:
-coupledsolve_ksp_converged_reason
-coupledsolve_ksp_gmres_modifiedgramschmidt
-coupledsolve_ksp_gmres_restart 100
-coupledsolve_ksp_norm_type unpreconditioned
-coupledsolve_ksp_type gmres
-coupledsolve_mg_coarse_sub_pc_type sor
-coupledsolve_mg_levels_ksp_rtol 1e-5
-coupledsolve_mg_levels_ksp_type richardson
-coupledsolve_pc_gamg_reuse_interpolation true
-coupledsolve_pc_type gamg
-log_summary
-on_error_abort
-options_left
#End of PETSc Option Table entries
There are no unused options.

