[petsc-users] GAMG issue

John Mousel john.mousel at gmail.com
Thu Mar 15 10:13:10 CDT 2012


Mark,

The changes pulled through this morning. I've run it with the options

-ksp_type bcgsl -pc_type gamg -pc_gamg_sym_graph -ksp_diagonal_scale
-ksp_diagonal_scale_fix -pc_mg_levels 4 -mg_levels_ksp_type richardson
-mg_levels_pc_type sor -mg_coarse_ksp_type preonly -mg_coarse_pc_type sor
-mg_coarse_pc_sor_its 8

and it converges in the true residual norm, but not as fast as anticipated.
The matrix arises from a non-symmetric discretization of the Poisson
equation. GAMG takes 114 iterations, whereas ML takes 24, BoomerAMG takes 22,
and -ksp_type bcgsl -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 4
takes around 170. I've attached the -ksp_view results for ML, GAMG, and
HYPRE. I've attempted to make all the options the same on all levels for ML
and GAMG.
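
For anyone wanting to reproduce this outside my code, a minimal driver that
just picks these options up from the command line would look something like
the sketch below (SolvePressure, A, b, and x are placeholder names; the pres_
prefix matches the attached -ksp_view output, so the options above are then
passed as -pres_ksp_type bcgsl and so on):

#include <petscksp.h>

/* Minimal sketch of a driver for the runs above: everything about the solver
   (bcgsl + gamg/ml/hypre, smoothers, levels) comes from the command line. */
PetscErrorCode SolvePressure(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  MatNullSpace   nsp;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOptionsPrefix(ksp,"pres_");CHKERRQ(ierr);    /* options read as -pres_... */
  ierr = KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  /* the attached -ksp_view shows a constant null space and a nonzero initial guess */
  ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,PETSC_NULL,&nsp);CHKERRQ(ierr);
  ierr = KSPSetNullSpace(ksp,nsp);CHKERRQ(ierr);
  ierr = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);              /* -ksp_type bcgsl -pc_type gamg ... */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}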

Any thoughts?

John


On Wed, Mar 14, 2012 at 6:04 PM, Mark F. Adams <mark.adams at columbia.edu>wrote:

> Humm, I see it with hg view (appended).
>
> Satish, my main repo looks hosed.  I see this:
>
> ~/Codes/petsc-dev>hg update
> abort: crosses branches (merge branches or use --clean to discard changes)
> ~/Codes/petsc-dev>hg merge
> abort: branch 'default' has 3 heads - please merge with an explicit rev
> (run 'hg heads .' to see heads)
> ~/Codes/petsc-dev>hg heads
> changeset:   22496:8e2a98268179
> tag:         tip
> user:        Barry Smith <bsmith at mcs.anl.gov>
> date:        Wed Mar 14 16:42:25 2012 -0500
> files:       src/vec/is/interface/f90-custom/zindexf90.c
> src/vec/vec/interface/f90-custom/zvectorf90.c
> description:
> undoing manually changes I put in because Satish had a better fix
>
>
> changeset:   22492:bda4df63072d
> user:        Mark F. Adams <mark.adams at columbia.edu>
> date:        Wed Mar 14 17:39:52 2012 -0400
> files:       src/ksp/pc/impls/gamg/tools.c
> description:
> fix for unsymmetric matrices.
>
>
> changeset:   22469:b063baf366e4
> user:        Mark F. Adams <mark.adams at columbia.edu>
> date:        Wed Mar 14 14:22:28 2012 -0400
> files:       src/ksp/pc/impls/gamg/tools.c
> description:
> added fix for preallocation for unsymetric matrices.
>
> Mark
>
> my 'hg view' on my merge repo:
>
> Revision: 22492
> Branch: default
> Author: Mark F. Adams <mark.adams at columbia.edu>  2012-03-14 17:39:52
> Committer: Mark F. Adams <mark.adams at columbia.edu>  2012-03-14 17:39:52
> Tags: tip
> Parent: 22491:451bbbd291c2 (Small fixes to the BT linesearch)
>
>     fix for unsymmetric matrices.
>
>
> ------------------------ src/ksp/pc/impls/gamg/tools.c
> ------------------------
> @@ -103,7 +103,7 @@
>    PetscErrorCode ierr;
>    PetscInt       Istart,Iend,Ii,jj,ncols,nnz0,nnz1, NN, MM, nloc;
>    PetscMPIInt    mype, npe;
> -  Mat            Gmat = *a_Gmat, tGmat;
> +  Mat            Gmat = *a_Gmat, tGmat, matTrans;
>    MPI_Comm       wcomm = ((PetscObject)Gmat)->comm;
>    const PetscScalar *vals;
>    const PetscInt *idx;
> @@ -127,6 +127,10 @@
>    ierr = MatDiagonalScale( Gmat, diag, diag ); CHKERRQ(ierr);
>    ierr = VecDestroy( &diag );           CHKERRQ(ierr);
>
> +  if( symm ) {
> +    ierr = MatTranspose( Gmat, MAT_INITIAL_MATRIX, &matTrans ); CHKERRQ(ierr);
> +  }
> +
>    /* filter - dup zeros out matrix */
>    ierr = PetscMalloc( nloc*sizeof(PetscInt), &d_nnz ); CHKERRQ(ierr);
>    ierr = PetscMalloc( nloc*sizeof(PetscInt), &o_nnz ); CHKERRQ(ierr);
> @@ -135,6 +139,12 @@
>      d_nnz[jj] = ncols;
>      o_nnz[jj] = ncols;
>      ierr = MatRestoreRow(Gmat,Ii,&ncols,PETSC_NULL,PETSC_NULL); CHKERRQ(ierr);
> +    if( symm ) {
> +      ierr = MatGetRow(matTrans,Ii,&ncols,PETSC_NULL,PETSC_NULL); CHKERRQ(ierr);
> +      d_nnz[jj] += ncols;
> +      o_nnz[jj] += ncols;
> +      ierr = MatRestoreRow(matTrans,Ii,&ncols,PETSC_NULL,PETSC_NULL); CHKERRQ(ierr);
> +    }
>      if( d_nnz[jj] > nloc ) d_nnz[jj] = nloc;
>      if( o_nnz[jj] > (MM-nloc) ) o_nnz[jj] = MM - nloc;
>    }
> @@ -142,6 +152,9 @@
>    CHKERRQ(ierr);
>    ierr = PetscFree( d_nnz ); CHKERRQ(ierr);
>    ierr = PetscFree( o_nnz ); CHKERRQ(ierr);
> +  if( symm ) {
> +    ierr = MatDestroy( &matTrans );  CHKERRQ(ierr);
> +  }
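>
> (For context, the d_nnz/o_nnz counts computed above feed the usual MPIAIJ
> preallocation of the filtered graph, just above the last hunk; schematically
> it is something like this sketch, not the literal code in tools.c:)
>
>   ierr = MatCreate( wcomm, &tGmat );                             CHKERRQ(ierr);
>   ierr = MatSetSizes( tGmat, nloc, nloc, MM, MM );               CHKERRQ(ierr);
>   ierr = MatSetType( tGmat, MATMPIAIJ );                         CHKERRQ(ierr);
>   ierr = MatMPIAIJSetPreallocation( tGmat, 0, d_nnz, 0, o_nnz ); CHKERRQ(ierr);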
>
>
>
>
> On Mar 14, 2012, at 5:53 PM, John Mousel wrote:
>
> Mark,
>
> No change. Can you give me the location that you patched so I can check to
> make sure it pulled?
> I don't see it on the petsc-dev change log.
>
> John
>
> On Wed, Mar 14, 2012 at 4:40 PM, Mark F. Adams <mark.adams at columbia.edu>wrote:
>
>> John, I've committed these changes, give a try.
>>
>> Mark
>>
>> On Mar 14, 2012, at 3:46 PM, Satish Balay wrote:
>>
>> > This is the usual merge [with uncommitted changes] issue.
>> >
>> > You could use 'hg shelf' extension to shelve your local changes and
>> > then do a merge [as Sean would suggest] - or do the merge in a
>> > separate/clean clone [I normally do this..]
>> >
>> > i.e
>> > cd ~/Codes
>> > hg clone petsc-dev petsc-dev-merge
>> > cd petsc-dev-merge
>> > hg pull ssh://petsc@petsc.cs.iit.edu//hg/petsc/petsc-dev   #just to be
>> sure, look for latest changes before merge..
>> > hg merge
>> > hg commit
>> > hg push ssh://petsc@petsc.cs.iit.edu//hg/petsc/petsc-dev
>> >
>> > [now update your petsc-dev to latest]
>> > cd ~/Codes/petsc-dev
>> > hg pull
>> > hg update
>> >
>> > Satish
>> >
>> > On Wed, 14 Mar 2012, Mark F. Adams wrote:
>> >
>> >> Great, that seems to work.
>> >>
>> >> I did a 'hg commit tools.c'
>> >>
>> >> and I want to push this file only.  I guess it's the only thing in the
>> change set so 'hg push' should be fine.  But I see this:
>> >>
>> >> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg update
>> >> abort: crosses branches (merge branches or use --clean to discard
>> changes)
>> >> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg merge
>> >> abort: outstanding uncommitted changes (use 'hg status' to list
>> changes)
>> >> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg status
>> >> M include/petscmat.h
>> >> M include/private/matimpl.h
>> >> M src/ksp/pc/impls/gamg/agg.c
>> >> M src/ksp/pc/impls/gamg/gamg.c
>> >> M src/ksp/pc/impls/gamg/gamg.h
>> >> M src/ksp/pc/impls/gamg/geo.c
>> >> M src/mat/coarsen/coarsen.c
>> >> M src/mat/coarsen/impls/hem/hem.c
>> >> M src/mat/coarsen/impls/mis/mis.c
>> >>
>> >> Am I ready to do a push?
>> >>
>> >> Thanks,
>> >> Mark
>> >>
>> >> On Mar 14, 2012, at 2:44 PM, Satish Balay wrote:
>> >>
>> >>> If commit is the last hg operation that you've done - then 'hg
>> rollback' would undo this commit.
>> >>>
>> >>> Satish
>> >>>
>> >>> On Wed, 14 Mar 2012, Mark F. Adams wrote:
>> >>>
>> >>>> Damn, I'm not preallocating the graph perfectly for unsymmetric
>> matrices and PETSc now dies on this.
>> >>>>
>> >>>> I have a fix but I committed it with other changes that I do not
>> want to commit.  The changes are all in one file so I should be able to
>> just commit this file.
>> >>>>
>> >>>> Anyone know how to delete a commit?
>> >>>>
>> >>>> I've tried:
>> >>>>
>> >>>> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg strip 22487:26ffb9eef17f
>> >>>> hg: unknown command 'strip'
>> >>>> 'strip' is provided by the following extension:
>> >>>>
>> >>>>   mq  manage a stack of patches
>> >>>>
>> >>>> use "hg help extensions" for information on enabling extensions
>> >>>>
>> >>>> But have not figured out how to load extensions.
>> >>>>
>> >>>> Mark
>> >>>>
>> >>>> On Mar 14, 2012, at 12:54 PM, John Mousel wrote:
>> >>>>
>> >>>>> Mark,
>> >>>>>
>> >>>>> I have a non-symmetric matrix. I am running with the following
>> options.
>> >>>>>
>> >>>>> -pc_type gamg -pc_gamg_sym_graph -ksp_monitor_true_residual
>> >>>>>
>> >>>>> and with the inclusion of -pc_gamg_sym_graph, I get a new malloc
>> error:
>> >>>>>
>> >>>>>
>> >>>>> [0]PETSC ERROR: --------------------- Error Message
>> ------------------------------------
>> >>>>> [0]PETSC ERROR: Argument out of range!
>> >>>>> [0]PETSC ERROR: New nonzero at (5150,9319) caused a malloc!
>> >>>>> [0]PETSC ERROR:
>> ------------------------------------------------------------------------
>> >>>>> [0]PETSC ERROR: Petsc Development HG revision:
>> 587b25035091aaa309c87c90ac64c13408ecf34e  HG Date: Wed Mar 14 09:22:54 2012
>> -0500
>> >>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
>> >>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>> >>>>> [0]PETSC ERROR: See docs/index.html for manual pages.
>> >>>>> [0]PETSC ERROR:
>> ------------------------------------------------------------------------
>> >>>>> [0]PETSC ERROR: ../JohnRepo/VFOLD_exe on a linux-deb named
>> wv.iihr.uiowa.edu by jmousel Wed Mar 14 11:51:35 2012
>> >>>>> [0]PETSC ERROR: Libraries linked from
>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/linux-debug/lib
>> >>>>> [0]PETSC ERROR: Configure run at Wed Mar 14 09:46:39 2012
>> >>>>> [0]PETSC ERROR: Configure options --download-blacs=1
>> --download-hypre=1 --download-metis=1 --download-ml=1 --download-mpich=1
>> --download-parmetis=1 --download-scalapack=1
>> --with-blas-lapack-dir=/opt/intel11/mkl/lib/em64t --with-cc=gcc
>> --with-cmake=/usr/local/bin/cmake --with-cxx=g++ --with-fc=ifort
>> PETSC_ARCH=linux-debug
>> >>>>> [0]PETSC ERROR:
>> ------------------------------------------------------------------------
>> >>>>> [0]PETSC ERROR: MatSetValues_MPIAIJ() line 506 in
>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/mat/impls/aij/mpi/mpiaij.c
>> >>>>> [0]PETSC ERROR: MatSetValues() line 1141 in
>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/mat/interface/matrix.c
>> >>>>> [0]PETSC ERROR: scaleFilterGraph() line 155 in
>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/impls/gamg/tools.c
>> >>>>> [0]PETSC ERROR: PCGAMGgraph_AGG() line 865 in
>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/impls/gamg/agg.c
>> >>>>> [0]PETSC ERROR: PCSetUp_GAMG() line 516 in
>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/impls/gamg/gamg.c
>> >>>>> [0]PETSC ERROR: PCSetUp() line 832 in
>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/interface/precon.c
>> >>>>> [0]PETSC ERROR: KSPSetUp() line 261 in
>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/ksp/interface/itfunc.c
>> >>>>> [0]PETSC ERROR: KSPSolve() line 385 in
>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/ksp/interface/itfunc.c
>> >>>>>
>> >>>>>
>> >>>>> John
>> >>>>>
>> >>>>>
>> >>>>> On Wed, Mar 14, 2012 at 11:27 AM, Mark F. Adams <
>> mark.adams at columbia.edu> wrote:
>> >>>>>
>> >>>>> On Mar 14, 2012, at 11:56 AM, John Mousel wrote:
>> >>>>>
>> >>>>>> Mark,
>> >>>>>>
>> >>>>>> The matrix is asymmetric. Does this require the setting of an
>> option?
>> >>>>>
>> >>>>> Yes:  -pc_gamg_sym_graph
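>> >>>>>
>> >>>>> If you want it hard-wired rather than on the command line, a sketch of
>> >>>>> the same thing from code, set before KSPSetFromOptions is called:
>> >>>>>
>> >>>>>   ierr = PetscOptionsSetValue("-pc_gamg_sym_graph","true");CHKERRQ(ierr);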
>> >>>>>
>> >>>>> Mark
>> >>>>>
>> >>>>>> I pulled petsc-dev this morning, so I should have (at least close
>> to) the latest code.
>> >>>>>>
>> >>>>>> John
>> >>>>>>
>> >>>>>> On Wed, Mar 14, 2012 at 10:54 AM, Mark F. Adams <
>> mark.adams at columbia.edu> wrote:
>> >>>>>>
>> >>>>>> On Mar 14, 2012, at 11:08 AM, John Mousel wrote:
>> >>>>>>
>> >>>>>>> I'm getting the following error when using GAMG.
>> >>>>>>>
>> >>>>>>> petsc-dev/src/ksp/pc/impls/gamg/agg.c:508: smoothAggs: Assertion
>> `sgid==-1' failed.
>> >>>>>>
>> >>>>>> Is it possible that your matrix is structurally asymmetric?
>> >>>>>>
>> >>>>>> This code is evolving fast and so you will need to move to the dev
>> version if you are not already using it. (I think I fixed a bug that hit
>> this assert).
>> >>>>>>
>> >>>>>>>
>> >>>>>>> When I try to alter the type of aggregation at the command line
>> using -pc_gamg_type pa, I'm getting
>> >>>>>>>
>> >>>>>>> [0]PETSC ERROR: [1]PETSC ERROR: --------------------- Error
>> Message ------------------------------------
>> >>>>>>> [1]PETSC ERROR: Unknown type. Check for miss-spelling or missing
>> external package needed for type:
>> >>>>>>> see
>> http://www.mcs.anl.gov/petsc/documentation/installation.html#external!
>> >>>>>>> [1]PETSC ERROR: Unknown GAMG type pa given!
>> >>>>>>>
>> >>>>>>> Has there been a change in the aggregation options? I just pulled
>> petsc-dev this morning.
>> >>>>>>>
>> >>>>>>
>> >>>>>> Yes, this option is gone now.  You can use -pc_gamg_type agg for
>> now.
>> >>>>>>
>> >>>>>> Mark
>> >>>>>>
>> >>>>>>> John
>> >>>>>>
>> >>>>>>
>> >>>>>
>> >>>>>
>> >>>>
>> >>>>
>> >>>
>> >>>
>> >>
>> >>
>> >
>> >
>>
>>
>
>
-------------- next part --------------
  Residual norms for pres_ solve.
  0 KSP Residual norm 4.059775183487e+03 
  2 KSP Residual norm 5.981316947830e+02 
  4 KSP Residual norm 4.442881917561e+02 
  6 KSP Residual norm 3.949125263925e+02 
  8 KSP Residual norm 7.982748331499e+03 
 10 KSP Residual norm 2.543280382461e+02 
 12 KSP Residual norm 2.185515850048e+02 
 14 KSP Residual norm 1.802232035923e+02 
 16 KSP Residual norm 1.583506303369e+02 
 18 KSP Residual norm 1.599584824650e+02 
 20 KSP Residual norm 2.888585316347e+02 
 22 KSP Residual norm 1.561026662942e+02 
 24 KSP Residual norm 1.771271404110e+02 
 26 KSP Residual norm 1.988800994293e+02 
 28 KSP Residual norm 2.199783177060e+02 
 30 KSP Residual norm 2.216968849511e+02 
 32 KSP Residual norm 2.072764401471e+02 
 34 KSP Residual norm 1.860542194649e+02 
 36 KSP Residual norm 1.659181700778e+02 
 38 KSP Residual norm 1.407464754713e+02 
 40 KSP Residual norm 9.615564181863e+01 
 42 KSP Residual norm 6.703321281230e+01 
 44 KSP Residual norm 3.561383031838e+01 
 46 KSP Residual norm 1.856503785272e+01 
 48 KSP Residual norm 1.055079242233e+01 
 50 KSP Residual norm 6.356112490758e+00 
 52 KSP Residual norm 3.425262242318e+00 
 54 KSP Residual norm 3.077183029782e+00 
 56 KSP Residual norm 2.176818497656e+00 
 58 KSP Residual norm 1.078873696603e+00 
 60 KSP Residual norm 5.918603863161e-01 
 62 KSP Residual norm 3.752380791312e-01 
 64 KSP Residual norm 1.412075511011e-01 
 66 KSP Residual norm 2.671083368701e-02 
 68 KSP Residual norm 4.773262749061e-03 
 70 KSP Residual norm 7.963615707246e-03 
 72 KSP Residual norm 9.995350897604e-03 
 74 KSP Residual norm 9.622695628075e-03 
 76 KSP Residual norm 2.242543288576e-03 
 78 KSP Residual norm 3.841006934453e-03 
 80 KSP Residual norm 8.987952547562e-03 
 82 KSP Residual norm 9.558153629986e-04 
 84 KSP Residual norm 2.646555390162e-04 
 86 KSP Residual norm 1.186132284573e-04 
 88 KSP Residual norm 3.628058439929e-05 
 90 KSP Residual norm 1.793225729106e-05 
 92 KSP Residual norm 4.744894233256e-06 
 94 KSP Residual norm 2.508223070292e-06 
 96 KSP Residual norm 7.000610692450e-07 
 98 KSP Residual norm 3.780248712706e-07 
100 KSP Residual norm 2.502747308880e-07 
102 KSP Residual norm 1.378104516997e-07 
104 KSP Residual norm 1.109001062835e-07 
106 KSP Residual norm 6.202415071048e-08 
108 KSP Residual norm 2.871737213539e-08 
110 KSP Residual norm 1.871854343209e-08 
112 KSP Residual norm 1.163299109304e-08 
114 KSP Residual norm 9.392404490548e-09 
KSP Object:(pres_) 4 MPI processes
  type: bcgsl
    BCGSL: Ell = 2
    BCGSL: Delta = 0
  maximum iterations=5000
  tolerances:  relative=1e-12, absolute=1e-50, divergence=10000
  left preconditioning
  diagonally scaled system
  has attached null space
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(pres_) 4 MPI processes
  type: gamg
    MG: type is MULTIPLICATIVE, levels=4 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (pres_mg_coarse_)     4 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (pres_mg_coarse_)     4 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 8, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=139, cols=139
        total: nonzeros=679, allocated nonzeros=679
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (pres_mg_levels_1_)     4 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (pres_mg_levels_1_)     4 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=895, cols=895
        total: nonzeros=4941, allocated nonzeros=4941
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (pres_mg_levels_2_)     4 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (pres_mg_levels_2_)     4 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=7604, cols=7604
        total: nonzeros=49366, allocated nonzeros=49366
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object:    (pres_mg_levels_3_)     4 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      has attached null space
      using nonzero initial guess
      using NONE norm type for convergence test
    PC Object:    (pres_mg_levels_3_)     4 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=58507, cols=58507
        total: nonzeros=383336, allocated nonzeros=675924
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object:   4 MPI processes
    type: mpiaij
    rows=58507, cols=58507
    total: nonzeros=383336, allocated nonzeros=675924
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
-------------- next part --------------
  Residual norms for pres_ solve.
  0 KSP Residual norm 1.083611831517e+04 
  2 KSP Residual norm 3.063605374040e+03 
  4 KSP Residual norm 1.926278721105e+03 
  6 KSP Residual norm 2.562396342913e+02 
  8 KSP Residual norm 1.472308108162e+01 
 10 KSP Residual norm 2.062803551544e+00 
 12 KSP Residual norm 1.101533503605e-01 
 14 KSP Residual norm 1.007101075709e-02 
 16 KSP Residual norm 9.417367089322e-04 
 18 KSP Residual norm 1.016475488904e-04 
 20 KSP Residual norm 7.187713879600e-06 
 22 KSP Residual norm 1.458531748697e-07 
 24 KSP Residual norm 6.829597900465e-09 
KSP Object:(pres_) 4 MPI processes
  type: bcgsl
    BCGSL: Ell = 2
    BCGSL: Delta = 0
  maximum iterations=5000
  tolerances:  relative=1e-12, absolute=1e-50, divergence=10000
  left preconditioning
  has attached null space
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(pres_) 4 MPI processes
  type: ml
    MG: type is MULTIPLICATIVE, levels=4 cycles=v
      Cycles per PCApply=1
      Using Galerkin computed coarse grid matrices
  Coarse grid solver -- level -------------------------------
    KSP Object:    (pres_mg_coarse_)     4 MPI processes
      type: preonly
      maximum iterations=1, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using NONE norm type for convergence test
    PC Object:    (pres_mg_coarse_)     4 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 8, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=35, cols=35
        total: nonzeros=479, allocated nonzeros=479
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Down solver (pre-smoother) on level 1 -------------------------------
    KSP Object:    (pres_mg_levels_1_)     4 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using PRECONDITIONED norm type for convergence test
    PC Object:    (pres_mg_levels_1_)     4 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=532, cols=532
        total: nonzeros=8539, allocated nonzeros=8539
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 2 -------------------------------
    KSP Object:    (pres_mg_levels_2_)     4 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using nonzero initial guess
      using PRECONDITIONED norm type for convergence test
    PC Object:    (pres_mg_levels_2_)     4 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=8439, cols=8439
        total: nonzeros=110975, allocated nonzeros=110975
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  Down solver (pre-smoother) on level 3 -------------------------------
    KSP Object:    (pres_mg_levels_3_)     4 MPI processes
      type: richardson
        Richardson: damping factor=1
      maximum iterations=1
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      has attached null space
      using nonzero initial guess
      using PRECONDITIONED norm type for convergence test
    PC Object:    (pres_mg_levels_3_)     4 MPI processes
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:       4 MPI processes
        type: mpiaij
        rows=58507, cols=58507
        total: nonzeros=383336, allocated nonzeros=675924
        total number of mallocs used during MatSetValues calls =0
          not using I-node (on process 0) routines
  Up solver (post-smoother) same as down solver (pre-smoother)
  linear system matrix = precond matrix:
  Matrix Object:   4 MPI processes
    type: mpiaij
    rows=58507, cols=58507
    total: nonzeros=383336, allocated nonzeros=675924
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines
-------------- next part --------------
    Residual norms for pres_ solve.
  0 KSP preconditioned resid norm 1.767885052835e+04 true resid norm 5.490506953240e+02 ||r(i)||/||b|| 1.173737492770e-01
  2 KSP preconditioned resid norm 2.066096994623e+03 true resid norm 2.745377688272e+03 ||r(i)||/||b|| 5.868952998296e-01
  4 KSP preconditioned resid norm 1.106321516621e+02 true resid norm 1.033500617653e+02 ||r(i)||/||b|| 2.209374169035e-02
  6 KSP preconditioned resid norm 2.570916205201e+01 true resid norm 2.235586678502e+01 ||r(i)||/||b|| 4.779143210710e-03
  8 KSP preconditioned resid norm 1.619194902332e+00 true resid norm 3.773126954763e-01 ||r(i)||/||b|| 8.066032170621e-05
 10 KSP preconditioned resid norm 6.319088912041e-01 true resid norm 1.630872902347e-01 ||r(i)||/||b|| 3.486411523980e-05
 12 KSP preconditioned resid norm 3.808406632980e-02 true resid norm 8.627569048343e-03 ||r(i)||/||b|| 1.844365438336e-06
 14 KSP preconditioned resid norm 2.683282198467e-03 true resid norm 5.026895110387e-04 ||r(i)||/||b|| 1.074628502164e-07
 16 KSP preconditioned resid norm 1.517361222053e-05 true resid norm 9.443042302108e-06 ||r(i)||/||b|| 2.018693882038e-09
 18 KSP preconditioned resid norm 3.351216887604e-06 true resid norm 1.563343028855e-06 ||r(i)||/||b|| 3.342048999581e-10
 20 KSP preconditioned resid norm 1.331870500595e-07 true resid norm 3.754585860708e-08 ||r(i)||/||b|| 8.026395799272e-12
 22 KSP preconditioned resid norm 5.782696971967e-09 true resid norm 3.780795199311e-09 ||r(i)||/||b|| 8.082425021418e-13
KSP Object:(pres_) 4 MPI processes
  type: bcgsl
    BCGSL: Ell = 2
    BCGSL: Delta = 0
  maximum iterations=5000
  tolerances:  relative=1e-12, absolute=1e-50, divergence=10000
  left preconditioning
  has attached null space
  using nonzero initial guess
  using PRECONDITIONED norm type for convergence test
PC Object:(pres_) 4 MPI processes
  type: hypre
    HYPRE BoomerAMG preconditioning
    HYPRE BoomerAMG: Cycle type V
    HYPRE BoomerAMG: Maximum number of levels 25
    HYPRE BoomerAMG: Maximum number of iterations PER hypre call 1
    HYPRE BoomerAMG: Convergence tolerance PER hypre call 0
    HYPRE BoomerAMG: Threshold for strong coupling 0.25
    HYPRE BoomerAMG: Interpolation truncation factor 0
    HYPRE BoomerAMG: Interpolation: max elements per row 0
    HYPRE BoomerAMG: Number of levels of aggressive coarsening 0
    HYPRE BoomerAMG: Number of paths for aggressive coarsening 1
    HYPRE BoomerAMG: Maximum row sums 0.9
    HYPRE BoomerAMG: Sweeps down         1
    HYPRE BoomerAMG: Sweeps up           1
    HYPRE BoomerAMG: Sweeps on coarse    4
    HYPRE BoomerAMG: Relax down          symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax up            symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax on coarse     symmetric-SOR/Jacobi
    HYPRE BoomerAMG: Relax weight  (all)      1
    HYPRE BoomerAMG: Outer relax weight (all) 1
    HYPRE BoomerAMG: Using CF-relaxation
    HYPRE BoomerAMG: Measure type        local
    HYPRE BoomerAMG: Coarsen type        PMIS
    HYPRE BoomerAMG: Interpolation type  classical
  linear system matrix = precond matrix:
  Matrix Object:   4 MPI processes
    type: mpiaij
    rows=58507, cols=58507
    total: nonzeros=383336, allocated nonzeros=675924
    total number of mallocs used during MatSetValues calls =0
      not using I-node (on process 0) routines

