[petsc-users] GAMG issue
John Mousel
john.mousel at gmail.com
Thu Mar 15 11:21:20 CDT 2012
Mark,
I ran with those options removed (see the run options listed below). Things
actually got slightly worse. Now it's up to 142 iterations. I have attached
the ksp_view output.
-ksp_type bcgsl -pc_type gamg -pc_gamg_sym_graph -ksp_diagonal_scale
-ksp_diagonal_scale_fix -mg_levels_ksp_type richardson -mg_levels_pc_type
sor -pc_gamg_verbose 1
John
On Thu, Mar 15, 2012 at 10:55 AM, Mark F. Adams <mark.adams at columbia.edu> wrote:
> John, can you run again with: -pc_gamg_verbose 1
>
> And I would not use: -pc_mg_levels 4 -mg_coarse_ksp_type preonly
> -mg_coarse_pc_type sor -mg_coarse_pc_sor_its 8
>
> 1) I think -mg_coarse_ksp_type preonly and -mg_coarse_pc_sor_its 8 do not
> do what you think. I think this is the same as 1 iteration. I think you
> want 'richardson' not 'preonly' (see the sketch just after this list).
>
> 2) Why are you using sor as the coarse solver? If your problem is
> singular then you want to use as many levels as possible to get the coarse
> grid to be tiny. I'm pretty sure HYPRE ignores the coarse solver
> parameters. But ML uses them and it is converging well.
>
> 3) I would not specify the number of levels. GAMG, and I think the rest,
> have internal logic for stopping at the right level. If the coarse level is
> large and you use just 8 iterations of sor, then convergence will suffer.
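For reference, a rough sketch of the coarse-level options point 1) is
suggesting, if sor is kept as the coarse solver (the option names assume the
standard mg_coarse_ prefix; prepend your solver prefix, e.g. pres_, as needed):

  -mg_coarse_ksp_type richardson -mg_coarse_ksp_max_it 8 -mg_coarse_pc_type sor

With preonly the KSP applies the preconditioner a single time per solve;
richardson with -mg_coarse_ksp_max_it 8 runs eight preconditioned iterations.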
>
> Mark
>
> On Mar 15, 2012, at 11:13 AM, John Mousel wrote:
>
> Mark,
>
> The changes pulled through this morning. I've run it with the options
>
> -ksp_type bcgsl -pc_type gamg -pc_gamg_sym_graph -ksp_diagonal_scale
> -ksp_diagonal_scale_fix -pc_mg_levels 4 -mg_levels_ksp_type richardson
> -mg_levels_pc_type sor -mg_coarse_ksp_type preonly -mg_coarse_pc_type sor
> -mg_coarse_pc_sor_its 8
>
> and it converges in the true residual, but it's not converging as fast as
> anticipated. The matrix arises from a non-symmetric discretization of the
> Poisson equation. The solve takes GAMG 114 iterations, whereas ML takes 24
> iterations, BoomerAMG takes 22 iterations, and -ksp_type bcgsl -pc_type
> bjacobi -sub_pc_type ilu -sub_pc_factor_levels 4 takes around 170. I've
> attached the -ksp_view results for ML, GAMG, and HYPRE. I've attempted to
> make all the options the same on all levels for ML and GAMG.
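For comparison, the ML and BoomerAMG runs would have looked roughly like the
following (a sketch only; the exact settings are in the attached ksp_view
files):

  -ksp_type bcgsl -pc_type ml -mg_levels_ksp_type richardson -mg_levels_pc_type sor
  -ksp_type bcgsl -pc_type hypre -pc_hypre_type boomeramg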
>
> Any thoughts?
>
> John
>
>
> On Wed, Mar 14, 2012 at 6:04 PM, Mark F. Adams <mark.adams at columbia.edu> wrote:
>
>> Humm, I see it with hg view (appended).
>>
>> Satish, my main repo looks hosed. I see this:
>>
>> ~/Codes/petsc-dev>hg update
>> abort: crosses branches (merge branches or use --clean to discard changes)
>> ~/Codes/petsc-dev>hg merge
>> abort: branch 'default' has 3 heads - please merge with an explicit rev
>> (run 'hg heads .' to see heads)
>> ~/Codes/petsc-dev>hg heads
>> changeset: 22496:8e2a98268179
>> tag: tip
>> user: Barry Smith <bsmith at mcs.anl.gov>
>> date: Wed Mar 14 16:42:25 2012 -0500
>> files: src/vec/is/interface/f90-custom/zindexf90.c
>> src/vec/vec/interface/f90-custom/zvectorf90.c
>> description:
>> undoing manually changes I put in because Satish had a better fix
>>
>>
>> changeset: 22492:bda4df63072d
>> user: Mark F. Adams <mark.adams at columbia.edu>
>> date: Wed Mar 14 17:39:52 2012 -0400
>> files: src/ksp/pc/impls/gamg/tools.c
>> description:
>> fix for unsymmetric matrices.
>>
>>
>> changeset: 22469:b063baf366e4
>> user: Mark F. Adams <mark.adams at columbia.edu>
>> date: Wed Mar 14 14:22:28 2012 -0400
>> files: src/ksp/pc/impls/gamg/tools.c
>> description:
>> added fix for preallocation for unsymetric matrices.
>>
>> Mark
>>
>> my 'hg view' on my merge repo:
>>
>> Revision: 22492
>> Branch: default
>> Author: Mark F. Adams <mark.adams at columbia.edu> 2012-03-14 17:39:52
>> Committer: Mark F. Adams <mark.adams at columbia.edu> 2012-03-14 17:39:52
>> Tags: tip
>> Parent: 22491:451bbbd291c2 (Small fixes to the BT linesearch)
>>
>> fix for unsymmetric matrices.
>>
>>
>> ------------------------ src/ksp/pc/impls/gamg/tools.c
>> ------------------------
>> @@ -103,7 +103,7 @@
>> PetscErrorCode ierr;
>> PetscInt Istart,Iend,Ii,jj,ncols,nnz0,nnz1, NN, MM, nloc;
>> PetscMPIInt mype, npe;
>> - Mat Gmat = *a_Gmat, tGmat;
>> + Mat Gmat = *a_Gmat, tGmat, matTrans;
>> MPI_Comm wcomm = ((PetscObject)Gmat)->comm;
>> const PetscScalar *vals;
>> const PetscInt *idx;
>> @@ -127,6 +127,10 @@
>> ierr = MatDiagonalScale( Gmat, diag, diag ); CHKERRQ(ierr);
>> ierr = VecDestroy( &diag ); CHKERRQ(ierr);
>>
>> + if( symm ) {
>> + ierr = MatTranspose( Gmat, MAT_INITIAL_MATRIX, &matTrans ); CHKERRQ(ierr);
>> + }
>> +
>> /* filter - dup zeros out matrix */
>> ierr = PetscMalloc( nloc*sizeof(PetscInt), &d_nnz ); CHKERRQ(ierr);
>> ierr = PetscMalloc( nloc*sizeof(PetscInt), &o_nnz ); CHKERRQ(ierr);
>> @@ -135,6 +139,12 @@
>> d_nnz[jj] = ncols;
>> o_nnz[jj] = ncols;
>> ierr = MatRestoreRow(Gmat,Ii,&ncols,PETSC_NULL,PETSC_NULL); CHKERRQ(ierr);
>> + if( symm ) {
>> + ierr = MatGetRow(matTrans,Ii,&ncols,PETSC_NULL,PETSC_NULL); CHKERRQ(ierr);
>> + d_nnz[jj] += ncols;
>> + o_nnz[jj] += ncols;
>> + ierr = MatRestoreRow(matTrans,Ii,&ncols,PETSC_NULL,PETSC_NULL); CHKERRQ(ierr);
>> + }
>> if( d_nnz[jj] > nloc ) d_nnz[jj] = nloc;
>> if( o_nnz[jj] > (MM-nloc) ) o_nnz[jj] = MM - nloc;
>> }
>> @@ -142,6 +152,9 @@
>> CHKERRQ(ierr);
>> ierr = PetscFree( d_nnz ); CHKERRQ(ierr);
>> ierr = PetscFree( o_nnz ); CHKERRQ(ierr);
>> + if( symm ) {
>> + ierr = MatDestroy( &matTrans ); CHKERRQ(ierr);
>> + }
>>
>>
>>
>>
>> On Mar 14, 2012, at 5:53 PM, John Mousel wrote:
>>
>> Mark,
>>
>> No change. Can you give me the location that you patched so I can check
>> to make sure it pulled?
>> I don't see it on the petsc-dev change log.
>>
>> John
>>
>> On Wed, Mar 14, 2012 at 4:40 PM, Mark F. Adams <mark.adams at columbia.edu> wrote:
>>
>>> John, I've committed these changes, give a try.
>>>
>>> Mark
>>>
>>> On Mar 14, 2012, at 3:46 PM, Satish Balay wrote:
>>>
>>> > This is the usual merge [with uncommitted changes] issue.
>>> >
>>> > You could use 'hg shelf' extension to shelve your local changes and
>>> > then do a merge [as Sean would suggest] - or do the merge in a
>>> > separate/clean clone [I normally do this..]
>>> >
>>> > i.e
>>> > cd ~/Codes
>>> > hg clone petsc-dev petsc-dev-merge
>>> > cd petsc-dev-merge
>>> > hg pull ssh://petsc@petsc.cs.iit.edu//hg/petsc/petsc-dev #just to
>>> be sure, look for latest changes before merge..
>>> > hg merge
>>> > hg commit
>>> > hg push ssh://petsc@petsc.cs.iit.edu//hg/petsc/petsc-dev
>>> >
>>> > [now update your petsc-dev to latest]
>>> > cd ~/Codes/petsc-dev
>>> > hg pull
>>> > hg update
>>> >
>>> > Satish
>>> >
>>> > On Wed, 14 Mar 2012, Mark F. Adams wrote:
>>> >
>>> >> Great, that seems to work.
>>> >>
>>> >> I did a 'hg commit tools.c'
>>> >>
>>> >> and I want to push this file only. I guess it's the only thing in the
>>> change set so 'hg push' should be fine. But I see this:
>>> >>
>>> >> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg update
>>> >> abort: crosses branches (merge branches or use --clean to discard
>>> changes)
>>> >> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg merge
>>> >> abort: outstanding uncommitted changes (use 'hg status' to list
>>> changes)
>>> >> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg status
>>> >> M include/petscmat.h
>>> >> M include/private/matimpl.h
>>> >> M src/ksp/pc/impls/gamg/agg.c
>>> >> M src/ksp/pc/impls/gamg/gamg.c
>>> >> M src/ksp/pc/impls/gamg/gamg.h
>>> >> M src/ksp/pc/impls/gamg/geo.c
>>> >> M src/mat/coarsen/coarsen.c
>>> >> M src/mat/coarsen/impls/hem/hem.c
>>> >> M src/mat/coarsen/impls/mis/mis.c
>>> >>
>>> >> Am I ready to do a push?
>>> >>
>>> >> Thanks,
>>> >> Mark
>>> >>
>>> >> On Mar 14, 2012, at 2:44 PM, Satish Balay wrote:
>>> >>
>>> >>> If commit is the last hg operation that you've done - then 'hg
>>> rollback' would undo this commit.
>>> >>>
>>> >>> Satish
>>> >>>
>>> >>> On Wed, 14 Mar 2012, Mark F. Adams wrote:
>>> >>>
>>> >>>> Damn, I'm not preallocating the graph perfectly for unsymmetric
>>> matrices and PETSc now dies on this.
>>> >>>>
>>> >>>> I have a fix but I committed it with other changes that I do not
>>> want to commit. The changes are all in one file so I should be able to
>>> just commit this file.
>>> >>>>
>>> >>>> Anyone know how to delete a commit?
>>> >>>>
>>> >>>> I've tried:
>>> >>>>
>>> >>>> ~/Codes/petsc-dev/src/ksp/pc/impls/gamg>hg strip 22487:26ffb9eef17f
>>> >>>> hg: unknown command 'strip'
>>> >>>> 'strip' is provided by the following extension:
>>> >>>>
>>> >>>> mq manage a stack of patches
>>> >>>>
>>> >>>> use "hg help extensions" for information on enabling extensions
>>> >>>>
>>> >>>> But have not figured out how to load extensions.
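(For what it's worth, Mercurial extensions are enabled in ~/.hgrc; a minimal
entry that makes 'hg strip <rev>' available is

  [extensions]
  mq =

after which the strip command attempted above should be recognized.)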
>>> >>>>
>>> >>>> Mark
>>> >>>>
>>> >>>> On Mar 14, 2012, at 12:54 PM, John Mousel wrote:
>>> >>>>
>>> >>>>> Mark,
>>> >>>>>
>>> >>>>> I have a non-symmetric matrix. I am running with the following
>>> options.
>>> >>>>>
>>> >>>>> -pc_type gamg -pc_gamg_sym_graph -ksp_monitor_true_residual
>>> >>>>>
>>> >>>>> and with the inclusion of -pc_gamg_sym_graph, I get a new malloc
>>> error:
>>> >>>>>
>>> >>>>>
>>> >>>>> [0]PETSC ERROR: --------------------- Error Message
>>> ------------------------------------
>>> >>>>> [0]PETSC ERROR: Argument out of range!
>>> >>>>> [0]PETSC ERROR: New nonzero at (5150,9319) caused a malloc!
>>> >>>>> [0]PETSC ERROR:
>>> ------------------------------------------------------------------------
>>> >>>>> [0]PETSC ERROR: Petsc Development HG revision:
>>> 587b25035091aaa309c87c90ac64c13408ecf34e HG Date: Wed Mar 14 09:22:54 2012
>>> -0500
>>> >>>>> [0]PETSC ERROR: See docs/changes/index.html for recent updates.
>>> >>>>> [0]PETSC ERROR: See docs/faq.html for hints about trouble shooting.
>>> >>>>> [0]PETSC ERROR: See docs/index.html for manual pages.
>>> >>>>> [0]PETSC ERROR:
>>> ------------------------------------------------------------------------
>>> >>>>> [0]PETSC ERROR: ../JohnRepo/VFOLD_exe on a linux-deb named
>>> wv.iihr.uiowa.edu by jmousel Wed Mar 14 11:51:35 2012
>>> >>>>> [0]PETSC ERROR: Libraries linked from
>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/linux-debug/lib
>>> >>>>> [0]PETSC ERROR: Configure run at Wed Mar 14 09:46:39 2012
>>> >>>>> [0]PETSC ERROR: Configure options --download-blacs=1
>>> --download-hypre=1 --download-metis=1 --download-ml=1 --download-mpich=1
>>> --download-parmetis=1 --download-scalapack=1
>>> --with-blas-lapack-dir=/opt/intel11/mkl/lib/em64t --with-cc=gcc
>>> --with-cmake=/usr/local/bin/cmake --with-cxx=g++ --with-fc=ifort
>>> PETSC_ARCH=linux-debug
>>> >>>>> [0]PETSC ERROR:
>>> ------------------------------------------------------------------------
>>> >>>>> [0]PETSC ERROR: MatSetValues_MPIAIJ() line 506 in
>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/mat/impls/aij/mpi/mpiaij.c
>>> >>>>> [0]PETSC ERROR: MatSetValues() line 1141 in
>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/mat/interface/matrix.c
>>> >>>>> [0]PETSC ERROR: scaleFilterGraph() line 155 in
>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/impls/gamg/tools.c
>>> >>>>> [0]PETSC ERROR: PCGAMGgraph_AGG() line 865 in
>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/impls/gamg/agg.c
>>> >>>>> [0]PETSC ERROR: PCSetUp_GAMG() line 516 in
>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/impls/gamg/gamg.c
>>> >>>>> [0]PETSC ERROR: PCSetUp() line 832 in
>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/pc/interface/precon.c
>>> >>>>> [0]PETSC ERROR: KSPSetUp() line 261 in
>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/ksp/interface/itfunc.c
>>> >>>>> [0]PETSC ERROR: KSPSolve() line 385 in
>>> /home/jmousel/NumericalLibraries/petsc-hg/petsc-dev/src/ksp/ksp/interface/itfunc.c
>>> >>>>>
>>> >>>>>
>>> >>>>> John
>>> >>>>>
>>> >>>>>
>>> >>>>> On Wed, Mar 14, 2012 at 11:27 AM, Mark F. Adams <
>>> mark.adams at columbia.edu> wrote:
>>> >>>>>
>>> >>>>> On Mar 14, 2012, at 11:56 AM, John Mousel wrote:
>>> >>>>>
>>> >>>>>> Mark,
>>> >>>>>>
>>> >>>>>> The matrix is asymmetric. Does this require the setting of an
>>> option?
>>> >>>>>
>>> >>>>> Yes: -pc_gamg_sym_graph
>>> >>>>>
>>> >>>>> Mark
>>> >>>>>
>>> >>>>>> I pulled petsc-dev this morning, so I should have (at least close
>>> to) the latest code.
>>> >>>>>>
>>> >>>>>> John
>>> >>>>>>
>>> >>>>>> On Wed, Mar 14, 2012 at 10:54 AM, Mark F. Adams <
>>> mark.adams at columbia.edu> wrote:
>>> >>>>>>
>>> >>>>>> On Mar 14, 2012, at 11:08 AM, John Mousel wrote:
>>> >>>>>>
>>> >>>>>>> I'm getting the following error when using GAMG.
>>> >>>>>>>
>>> >>>>>>> petsc-dev/src/ksp/pc/impls/gamg/agg.c:508: smoothAggs: Assertion
>>> `sgid==-1' failed.
>>> >>>>>>
>>> >>>>>> Is it possible that your matrix is structurally asymmetric?
>>> >>>>>>
>>> >>>>>> This code is evolving fast and so you will need to move to the
>>> dev version if you are not already using it. (I think I fixed a bug that
>>> hit this assert).
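As a quick check from code, something like the following sketch reports
whether the nonzero pattern is symmetric, assuming A and ierr are already
declared and the query is supported for your matrix type:

  PetscBool symm;
  /* query whether the nonzero pattern of A is symmetric */
  ierr = MatIsStructurallySymmetric(A, &symm);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "structurally symmetric? %d\n", (int)symm);CHKERRQ(ierr);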
>>> >>>>>>
>>> >>>>>>>
>>> >>>>>>> When I try to alter the type of aggregation at the command line
>>> using -pc_gamg_type pa, I'm getting
>>> >>>>>>>
>>> >>>>>>> [0]PETSC ERROR: [1]PETSC ERROR: --------------------- Error
>>> Message ------------------------------------
>>> >>>>>>> [1]PETSC ERROR: Unknown type. Check for miss-spelling or missing
>>> external package needed for type:
>>> >>>>>>> see
>>> http://www.mcs.anl.gov/petsc/documentation/installation.html#external!
>>> >>>>>>> [1]PETSC ERROR: Unknown GAMG type pa given!
>>> >>>>>>>
>>> >>>>>>> Has there been a change in the aggregation options? I just
>>> pulled petsc-dev this morning.
>>> >>>>>>>
>>> >>>>>>
>>> >>>>>> Yes, this option is gone now. You can use -pc_gamg_type agg for
>>> now.
>>> >>>>>>
>>> >>>>>> Mark
>>> >>>>>>
>>> >>>>>>> John
>>> >>>>>>
>>> >>>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>
>>> >>>>
>>> >>>
>>> >>>
>>> >>
>>> >>
>>> >
>>> >
>>>
>>>
>>
>>
> <GAMG_kspview.txt><ML_kspview.txt><HYPRE_kspview.txt>
>
>
>
-------------- next part --------------
[0]PCSetUp_GAMG level 0 N=58507, n data rows=1, n data cols=1, nnz/row (ave)=1, np=4
[0]scaleFilterGraph 80.6997% nnz after filtering, with threshold 0.05, 6.43289 nnz ave.
[0]maxIndSetAgg removed 0 of 13276 vertices.
[0]PCGAMGprolongator_AGG New grid 7604 nodes
[0]PCSetUp_GAMG 1) N=7604, n data cols=1, nnz/row (ave)=6, 4 active pes
[0]scaleFilterGraph 93.3889% nnz after filtering, with threshold 0.05, 6.58382 nnz ave.
[0]maxIndSetAgg removed 0 of 1730 vertices.
[0]PCGAMGprolongator_AGG New grid 895 nodes
[0]PCSetUp_GAMG 2) N=895, n data cols=1, nnz/row (ave)=5, 4 active pes
[0]scaleFilterGraph 91.3043% nnz after filtering, with threshold 0.05, 5.54359 nnz ave.
[0]maxIndSetAgg removed 0 of 195 vertices.
[0]PCGAMGprolongator_AGG New grid 139 nodes
[0]createLevel aggregate processors: npe: 4 --> 1, neq=139
[0]PCSetUp_GAMG 3) N=139, n data cols=1, nnz/row (ave)=4, 1 active pes
[0]PCSetUp_GAMG 4 levels, grid compexity = 1.15398
PCSetUp_GAMG PC setup max eigen=1.875451e+00 min=2.696859e-02
PCSetUp_GAMG PC setup max eigen=1.742779e+00 min=1.770445e-02
PCSetUp_GAMG PC setup max eigen=1.948789e+00 min=3.955759e-02
Residual norms for pres_ solve.
0 KSP Residual norm 5.141026204165e+03
2 KSP Residual norm 1.284239020662e+03
4 KSP Residual norm 9.967668419130e+02
6 KSP Residual norm 5.783283077358e+02
8 KSP Residual norm 5.566458446657e+02
10 KSP Residual norm 3.783499634403e+02
12 KSP Residual norm 3.053155044665e+02
14 KSP Residual norm 2.726314646644e+02
16 KSP Residual norm 1.960952298636e+02
18 KSP Residual norm 1.865981998220e+02
20 KSP Residual norm 1.554042318839e+02
22 KSP Residual norm 1.497631349479e+02
24 KSP Residual norm 4.598812937768e+02
26 KSP Residual norm 1.678435395838e+02
28 KSP Residual norm 2.040759429929e+02
30 KSP Residual norm 2.288608663089e+02
32 KSP Residual norm 2.733802693469e+02
34 KSP Residual norm 1.640058147647e+02
36 KSP Residual norm 1.172045953032e+02
38 KSP Residual norm 1.157357240159e+02
40 KSP Residual norm 1.122520337943e+02
42 KSP Residual norm 9.825251767781e+01
44 KSP Residual norm 1.208285004589e+02
46 KSP Residual norm 6.786930614669e+01
48 KSP Residual norm 4.590732680097e+01
50 KSP Residual norm 3.841598045058e+01
52 KSP Residual norm 2.625665852109e+01
54 KSP Residual norm 1.941467729850e+01
56 KSP Residual norm 1.365056340101e+01
58 KSP Residual norm 6.773865959980e+00
60 KSP Residual norm 5.087803263599e+00
62 KSP Residual norm 4.056350834310e+00
64 KSP Residual norm 2.885709078944e+00
66 KSP Residual norm 3.319001782669e+00
68 KSP Residual norm 2.786670792779e+00
70 KSP Residual norm 5.516775828892e+00
72 KSP Residual norm 4.496277080798e+00
74 KSP Residual norm 1.004981312474e-01
76 KSP Residual norm 5.360056134540e-02
78 KSP Residual norm 4.464985177773e-02
80 KSP Residual norm 3.483114821741e-02
82 KSP Residual norm 1.514538931552e-02
84 KSP Residual norm 1.223770508292e-02
86 KSP Residual norm 1.423206397892e-02
88 KSP Residual norm 9.108998551955e-03
90 KSP Residual norm 7.266553500351e-03
92 KSP Residual norm 9.600829160875e-04
94 KSP Residual norm 2.730957811323e-04
96 KSP Residual norm 1.481227697383e-04
98 KSP Residual norm 8.426581071258e-05
100 KSP Residual norm 7.380041287915e-05
102 KSP Residual norm 5.553587538116e-05
104 KSP Residual norm 4.563527252379e-05
106 KSP Residual norm 3.110759145615e-05
108 KSP Residual norm 1.785372995034e-05
110 KSP Residual norm 1.720065891767e-05
112 KSP Residual norm 1.213804492876e-05
114 KSP Residual norm 2.835812681505e-05
116 KSP Residual norm 8.426402773025e-06
118 KSP Residual norm 3.833374276573e-06
120 KSP Residual norm 4.812273681701e-06
122 KSP Residual norm 1.117614658717e-06
124 KSP Residual norm 7.404391742242e-07
126 KSP Residual norm 5.129803443305e-07
128 KSP Residual norm 4.260359119617e-07
130 KSP Residual norm 4.109770534864e-07
132 KSP Residual norm 5.183915037437e-07
134 KSP Residual norm 1.091650015290e-06
136 KSP Residual norm 1.271770801341e-07
138 KSP Residual norm 2.534603085234e-08
140 KSP Residual norm 2.290908824239e-08
142 KSP Residual norm 5.846953963288e-09
KSP Object:(pres_) 4 MPI processes
type: bcgsl
BCGSL: Ell = 2
BCGSL: Delta = 0
maximum iterations=5000
tolerances: relative=1e-12, absolute=1e-50, divergence=10000
left preconditioning
diagonally scaled system
has attached null space
using nonzero initial guess
using PRECONDITIONED norm type for convergence test
PC Object:(pres_) 4 MPI processes
type: gamg
MG: type is MULTIPLICATIVE, levels=4 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (pres_mg_coarse_) 4 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (pres_mg_coarse_) 4 MPI processes
type: bjacobi
block Jacobi: number of blocks = 4
Local solve info for each block is in the following KSP and PC objects:
[0] number of local blocks = 1, first local block number = 0
[0] local block number 0
KSP Object: (pres_mg_coarse_sub_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (pres_mg_coarse_sub_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
matrix ordering: nd
factor fill ratio given 5, needed 1.66568
Factored matrix follows:
Matrix Object: 1 MPI processes
type: seqaij
rows=139, cols=139
package used to perform factorization: petsc
total: nonzeros=1131, allocated nonzeros=1131
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
rows=139, cols=139
total: nonzeros=679, allocated nonzeros=679
total number of mallocs used during MatSetValues calls =0
not using I-node routines
- - - - - - - - - - - - - - - - - -
[1] number of local blocks = 1, first local block number = 1
[1] local block number 0
KSP Object: (pres_mg_coarse_sub_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (pres_mg_coarse_sub_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
matrix ordering: nd
factor fill ratio given 5, needed 0
Factored matrix follows:
Matrix Object: 1 MPI processes
type: seqaij
rows=0, cols=0
package used to perform factorization: petsc
total: nonzeros=1, allocated nonzeros=1
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
rows=0, cols=0
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
not using I-node routines
- - - - - - - - - - - - - - - - - -
[2] number of local blocks = 1, first local block number = 2
[2] local block number 0
KSP Object: (pres_mg_coarse_sub_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (pres_mg_coarse_sub_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
matrix ordering: nd
factor fill ratio given 5, needed 0
Factored matrix follows:
Matrix Object: 1 MPI processes
type: seqaij
rows=0, cols=0
package used to perform factorization: petsc
total: nonzeros=1, allocated nonzeros=1
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
rows=0, cols=0
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
not using I-node routines
- - - - - - - - - - - - - - - - - -
[3] number of local blocks = 1, first local block number = 3
[3] local block number 0
KSP Object: (pres_mg_coarse_sub_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (pres_mg_coarse_sub_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
matrix ordering: nd
factor fill ratio given 5, needed 0
Factored matrix follows:
Matrix Object: 1 MPI processes
type: seqaij
rows=0, cols=0
package used to perform factorization: petsc
total: nonzeros=1, allocated nonzeros=1
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
rows=0, cols=0
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
not using I-node routines
- - - - - - - - - - - - - - - - - -
linear system matrix = precond matrix:
Matrix Object: 4 MPI processes
type: mpiaij
rows=139, cols=139
total: nonzeros=679, allocated nonzeros=679
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (pres_mg_levels_1_) 4 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=1
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (pres_mg_levels_1_) 4 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Matrix Object: 4 MPI processes
type: mpiaij
rows=895, cols=895
total: nonzeros=4941, allocated nonzeros=4941
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (pres_mg_levels_2_) 4 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=1
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (pres_mg_levels_2_) 4 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Matrix Object: 4 MPI processes
type: mpiaij
rows=7604, cols=7604
total: nonzeros=49366, allocated nonzeros=49366
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 3 -------------------------------
KSP Object: (pres_mg_levels_3_) 4 MPI processes
type: richardson
Richardson: damping factor=1
maximum iterations=1
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
has attached null space
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (pres_mg_levels_3_) 4 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix = precond matrix:
Matrix Object: 4 MPI processes
type: mpiaij
rows=58507, cols=58507
total: nonzeros=383336, allocated nonzeros=675924
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Matrix Object: 4 MPI processes
type: mpiaij
rows=58507, cols=58507
total: nonzeros=383336, allocated nonzeros=675924
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines