[petsc-users] Multigrid coarse grid solver
Garth N. Wells
gnw20 at cam.ac.uk
Wed Apr 26 17:25:42 CDT 2017
I'm a bit confused by the selection of the coarse grid solver for
multigrid. For the demo ksp/ex56, if I do:
mpirun -np 1 ./ex56 -ne 16 -ksp_view -pc_type gamg \
    -mg_coarse_ksp_type preonly -mg_coarse_pc_type lu
I see
Coarse grid solver -- level -------------------------------
  KSP Object: (mg_coarse_) 1 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    using NONE norm type for convergence test
  PC Object: (mg_coarse_) 1 MPI processes
    type: lu
      out-of-place factorization
      tolerance for zero pivot 2.22045e-14
      matrix ordering: nd
      factor fill ratio given 5., needed 1.
        Factored matrix follows:
          Mat Object: 1 MPI processes
            type: seqaij
            rows=6, cols=6, bs=6
            package used to perform factorization: petsc
            total: nonzeros=36, allocated nonzeros=36
            total number of mallocs used during MatSetValues calls =0
              using I-node routines: found 2 nodes, limit used is 5
    linear system matrix = precond matrix:
    Mat Object: 1 MPI processes
      type: seqaij
      rows=6, cols=6, bs=6
      total: nonzeros=36, allocated nonzeros=36
      total number of mallocs used during MatSetValues calls =0
        using I-node routines: found 2 nodes, limit used is 5
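As an aside, I assume the same configuration could be set from code rather than through the options database. A rough, untested sketch of what I have in mind, pulling the coarse KSP out of the GAMG PC with PCMGGetCoarseSolve (the helper name is just a placeholder), would be:

#include <petscksp.h>

/* Untested sketch: programmatic equivalent of
   -pc_type gamg -mg_coarse_ksp_type preonly -mg_coarse_pc_type lu.
   Assumes 'ksp' already has its operators attached, so that KSPSetUp
   can build the GAMG hierarchy before the coarse KSP is fetched. */
static PetscErrorCode SetCoarseDirectSolve(KSP ksp)
{
  PC             pc, coarse_pc;
  KSP            coarse_ksp;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);              /* builds the MG levels */
  ierr = PCMGGetCoarseSolve(pc, &coarse_ksp);CHKERRQ(ierr);
  ierr = KSPSetType(coarse_ksp, KSPPREONLY);CHKERRQ(ierr);
  ierr = KSPGetPC(coarse_ksp, &coarse_pc);CHKERRQ(ierr);
  ierr = PCSetType(coarse_pc, PCLU);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}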
A sequential LU solve on the coarse grid is what I expect. Increasing from 1 to 2 processes:
mpirun -np 2 ./ex56 -ne 16 -ksp_view -pc_type gamg \
    -mg_coarse_ksp_type preonly -mg_coarse_pc_type lu
I see
Coarse grid solver -- level -------------------------------
  KSP Object: (mg_coarse_) 2 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    using NONE norm type for convergence test
  PC Object: (mg_coarse_) 2 MPI processes
    type: lu
      out-of-place factorization
      tolerance for zero pivot 2.22045e-14
      matrix ordering: natural
      factor fill ratio given 0., needed 0.
        Factored matrix follows:
          Mat Object: 2 MPI processes
            type: superlu_dist
            rows=6, cols=6
            package used to perform factorization: superlu_dist
            total: nonzeros=0, allocated nonzeros=0
            total number of mallocs used during MatSetValues calls =0
              SuperLU_DIST run parameters:
                Process grid nprow 2 x npcol 1
                Equilibrate matrix TRUE
                Matrix input mode 1
                Replace tiny pivots FALSE
                Use iterative refinement FALSE
                Processors in row 2 col partition 1
                Row permutation LargeDiag
                Column permutation METIS_AT_PLUS_A
                Parallel symbolic factorization FALSE
                Repeated factorization SamePattern
    linear system matrix = precond matrix:
    Mat Object: 2 MPI processes
      type: mpiaij
      rows=6, cols=6, bs=6
      total: nonzeros=36, allocated nonzeros=36
      total number of mallocs used during MatSetValues calls =0
        using I-node (on process 0) routines: found 2 nodes, limit used is 5
Note that the coarse grid solve is now using superlu_dist. Is the coarse grid problem being solved in parallel?
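To be concrete, what I'd like to check is roughly the following, i.e. how the coarse-grid operator is distributed across ranks. This is only an untested sketch, and I'm assuming PCMGGetCoarseSolve can be called on the GAMG PC once the solver has been set up:

#include <petscksp.h>

/* Untested sketch: after KSPSetUp, report how GAMG's coarse-grid operator
   is distributed, as a proxy for whether the coarse solve is parallel. */
static PetscErrorCode ReportCoarseDistribution(KSP ksp)
{
  PC             pc;
  KSP            coarse_ksp;
  Mat            Acoarse;
  PetscInt       rstart, rend;
  PetscMPIInt    rank, size;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCMGGetCoarseSolve(pc, &coarse_ksp);CHKERRQ(ierr);
  ierr = KSPGetOperators(coarse_ksp, &Acoarse, NULL);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(PetscObjectComm((PetscObject)Acoarse), &rank);CHKERRQ(ierr);
  ierr = MPI_Comm_size(PetscObjectComm((PetscObject)Acoarse), &size);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(Acoarse, &rstart, &rend);CHKERRQ(ierr);
  ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD,
                                 "[%d of %d] coarse operator rows %D to %D\n",
                                 rank, size, rstart, rend);CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The row distribution would at least tell me where the coarse operator lives, even if it doesn't say how SuperLU_DIST redistributes it internally.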
Garth