Hi,

I am successfully using PETSc (v3.4.2) to solve a 3D Poisson equation with CG + GAMG, as was suggested to me in a previous thread.
So far I am using GAMG with the default settings, i.e.

-pc_type gamg -pc_gamg_agg_nsmooths 1

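For context, the setup on the code side is just the standard options-driven KSP workflow; the sketch below is roughly what I do, with the matrix/vector assembly omitted and the routine name simplified, so it is not my exact code.

#include <petscksp.h>

/* Sketch only: solve A x = b with CG, letting the preconditioner come from
   the options database (-pc_type gamg -pc_gamg_agg_nsmooths 1).
   A, b, x are assumed to be an already assembled MPIAIJ matrix and vectors. */
PetscErrorCode SolvePoisson(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,A,A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);  /* v3.4 still takes the MatStructure flag */
  ierr = KSPSetType(ksp,KSPCG);CHKERRQ(ierr);
  ierr = KSPSetTolerances(ksp,1e-8,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);
  ierr = KSPSetInitialGuessNonzero(ksp,PETSC_TRUE);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* picks up -pc_type gamg etc. at run time */
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
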
The speed of the solution is satisfactory, but I would like to know if you have any suggestions to speed it up further, in particular whether there are any parameters worth looking into to achieve an even faster solution, for example the number of levels and so on.
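
These are the kinds of options I was planning to experiment with, based on my reading of the manual pages; I am not sure which of them actually matter for a Poisson problem, so please correct me if any of this is off for 3.4.2:

-pc_gamg_threshold <tol>        drop tolerance for the aggregation graph
-pc_gamg_coarse_eq_limit <n>    stop coarsening once the coarse problem is this small
-pc_mg_cycle_type <v,w>         V- or W-cycles
-mg_levels_ksp_type <type>      smoother on each level (chebyshev in the output below)
-mg_levels_pc_type <type>       smoother preconditioner on each level (jacobi below)
-mg_levels_ksp_max_it <n>       number of smoothing iterations per level (2 below)
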
So far I am using Dirichlet BCs for my test case, but I will soon have periodic conditions: in that case, does GAMG require any particular settings?
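
My only guess so far for the periodic case is that the operator then has the constant vector in its null space, so I imagine I would have to declare it roughly along these lines, reusing the ksp object from the sketch above (please correct me if this is not the recommended way in 3.4.2):

MatNullSpace nullsp;
ierr = MatNullSpaceCreate(PETSC_COMM_WORLD,PETSC_TRUE,0,NULL,&nullsp);CHKERRQ(ierr);  /* constant null space */
ierr = KSPSetNullSpace(ksp,nullsp);CHKERRQ(ierr);
ierr = MatNullSpaceDestroy(&nullsp);CHKERRQ(ierr);
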
Finally, I have not tried geometric multigrid: do you think it is worth a shot?

Here are my current settings:

I run with

-pc_type gamg -pc_gamg_agg_nsmooths 1 -ksp_view -options_left

and the output is:

KSP Object: 4 MPI processes
type: cg
maximum iterations=10000
tolerances: relative=1e-08, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using UNPRECONDITIONED norm type for convergence test
PC Object: 4 MPI processes
type: gamg
MG: type is MULTIPLICATIVE, levels=3 cycles=v
Cycles per PCApply=1
Using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (mg_coarse_) 4 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_) 4 MPI processes
type: bjacobi
block Jacobi: number of blocks = 4
Local solve info for each block is in the following KSP and PC objects:
[0] number of local blocks = 1, first local block number = 0
[0] local block number 0
KSP Object: (mg_coarse_sub_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
KSP Object: (mg_coarse_sub_) left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_sub_) 1 MPI processes
type: preonly
1 MPI processes
type: lu
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
LU: out-of-place factorization
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_sub_) 1 MPI processes
type: lu
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot
matrix ordering: nd
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot
matrix ordering: nd
factor fill ratio given 5, needed 0
Factored matrix follows:
factor fill ratio given 5, needed 4.13207
Factored matrix follows:
Matrix Object: Matrix Object: 1 MPI processes
type: seqaij
rows=395, cols=395
package used to perform factorization: petsc
total: nonzeros=132379, allocated nonzeros=132379
total number of mallocs used during MatSetValues calls =0
not using I-node routines
1 MPI processes
type: seqaij
linear system matrix = precond matrix:
rows=0, cols=0
package used to perform factorization: petsc
total: nonzeros=1, allocated nonzeros=1
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
Matrix Object:KSP Object: 1 MPI processes
type: seqaij
rows=0, cols=0
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
not using I-node routines
rows=395, cols=395
total: nonzeros=32037, allocated nonzeros=32037
total number of mallocs used during MatSetValues calls =0
not using I-node routines
- - - - - - - - - - - - - - - - - -
KSP Object: (mg_coarse_sub_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_sub_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot
matrix ordering: nd
factor fill ratio given 5, needed 0
Factored matrix follows:
Matrix Object: 1 MPI processes
type: seqaij
rows=0, cols=0
package used to perform factorization: petsc
total: nonzeros=1, allocated nonzeros=1
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
rows=0, cols=0
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
not using I-node routines
(mg_coarse_sub_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (mg_coarse_sub_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot
matrix ordering: nd
factor fill ratio given 5, needed 0
Factored matrix follows:
Matrix Object: 1 MPI processes
type: seqaij
rows=0, cols=0
package used to perform factorization: petsc
total: nonzeros=1, allocated nonzeros=1
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Matrix Object: 1 MPI processes
type: seqaij
rows=0, cols=0
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
not using I-node routines
[1] number of local blocks = 1, first local block number = 1
[1] local block number 0
- - - - - - - - - - - - - - - - - -
[2] number of local blocks = 1, first local block number = 2
[2] local block number 0
- - - - - - - - - - - - - - - - - -
[3] number of local blocks = 1, first local block number = 3
[3] local block number 0
- - - - - - - - - - - - - - - - - -
linear system matrix = precond matrix:
Matrix Object: 4 MPI processes
type: mpiaij
rows=395, cols=395
total: nonzeros=32037, allocated nonzeros=32037
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Down solver (pre-smoother) on level 1 -------------------------------
KSP Object: (mg_levels_1_) 4 MPI processes
type: chebyshev
Chebyshev: eigenvalue estimates: min = 0.0636225, max = 1.33607
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_1_) 4 MPI processes
type: jacobi
linear system matrix = precond matrix:
Matrix Object: 4 MPI processes
type: mpiaij
rows=23918, cols=23918
total: nonzeros=818732, allocated nonzeros=818732
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Up solver (post-smoother) same as down solver (pre-smoother)
Down solver (pre-smoother) on level 2 -------------------------------
KSP Object: (mg_levels_2_) 4 MPI processes
type: chebyshev
Chebyshev: eigenvalue estimates: min = 0.0971369, max = 2.03987
maximum iterations=2
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using nonzero initial guess
using NONE norm type for convergence test
PC Object: (mg_levels_2_) 4 MPI processes
type: jacobi
linear system matrix = precond matrix:
Matrix Object: 4 MPI processes
type: mpiaij
rows=262144, cols=262144
total: nonzeros=1835008, allocated nonzeros=1835008
total number of mallocs used during MatSetValues calls =0
Up solver (post-smoother) same as down solver (pre-smoother)
linear system matrix = precond matrix:
Matrix Object: 4 MPI processes
type: mpiaij
rows=262144, cols=262144
total: nonzeros=1835008, allocated nonzeros=1835008
total number of mallocs used during MatSetValues calls =0
#PETSc Option Table entries:
-ksp_view
-options_left
-pc_gamg_agg_nsmooths 1
-pc_type gamg
#End of PETSc Option Table entries
There are no unused options.

Thank you,
Michele