[petsc-users] solving system with 2x2 block size
Manav Bhatia
bhatiamanav at gmail.com
Tue Nov 15 15:03:56 CST 2016
Hi,
I am setting up a matrix along the lines of the calls sketched below. The intent is to solve the system with a 2x2 block size.
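A minimal sketch of the setup (not my exact code; the choice of MATAIJ with MatSetBlockSize and the preallocation numbers are placeholders for illustration):

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  PetscInt       n = 73688;   /* total rows/cols; must be divisible by the block size */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ierr = MatCreate(PETSC_COMM_WORLD, &A);CHKERRQ(ierr);
  ierr = MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);CHKERRQ(ierr);
  ierr = MatSetType(A, MATAIJ);CHKERRQ(ierr);    /* MATBAIJ would instead store true 2x2 blocks */
  ierr = MatSetBlockSize(A, 2);CHKERRQ(ierr);    /* declare the 2x2 block structure */
  ierr = MatSeqAIJSetPreallocation(A, 71, NULL);CHKERRQ(ierr); /* placeholder nonzeros per row */
  /* ... MatSetValuesBlocked() to insert the 2x2 blocks ... */
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}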
What combinations of KSP/PC will effectively translate to solving this block matrix system?
I saw a discussion of bjacobi in the manual and tried the following options (I have omitted the prefixes from my actual command):
-pc_type bjacobi -pc_bjacobi_blocks 2 -sub_ksp_type preonly -sub_pc_type lu -ksp_view
which provides the following output:
KSP Object:(fluid_complex_) 1 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object:(fluid_complex_) 1 MPI processes
  type: bjacobi
    block Jacobi: number of blocks = 2
    Local solve is same for all blocks, in the following KSP and PC objects:
  KSP Object: (fluid_complex_sub_) 1 MPI processes
    type: preonly
    maximum iterations=10000, initial guess is zero
    tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
    left preconditioning
    using NONE norm type for convergence test
  PC Object: (fluid_complex_sub_) 1 MPI processes
    type: lu
      LU: out-of-place factorization
      tolerance for zero pivot 2.22045e-14
      matrix ordering: nd
      factor fill ratio given 5., needed 5.70941
        Factored matrix follows:
          Mat Object: 1 MPI processes
            type: seqaij
            rows=36844, cols=36844
            package used to perform factorization: petsc
            total: nonzeros=14748816, allocated nonzeros=14748816
            total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 9211 nodes, limit used is 5
    linear system matrix = precond matrix:
    Mat Object: (fluid_complex_) 1 MPI processes
      type: seqaij
      rows=36844, cols=36844
      total: nonzeros=2583248, allocated nonzeros=2583248
      total number of mallocs used during MatSetValues calls =0
      using I-node routines: found 9211 nodes, limit used is 5
  linear system matrix = precond matrix:
  Mat Object: (fluid_complex_) 1 MPI processes
    type: seqaij
    rows=73688, cols=73688, bs=2
    total: nonzeros=5224384, allocated nonzeros=5224384
    total number of mallocs used during MatSetValues calls =0
    using I-node routines: found 18422 nodes, limit used is 5
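For reference, the same bjacobi configuration can also be set in code rather than through options. A minimal sketch, assuming a KSP named ksp whose operators have already been set with KSPSetOperators (the helper name ConfigureBJacobiLU and the even split implied by the NULL lengths array are my assumptions for illustration):

#include <petscksp.h>

/* Sketch of the programmatic equivalent of
   -pc_type bjacobi -pc_bjacobi_blocks 2 -sub_ksp_type preonly -sub_pc_type lu */
PetscErrorCode ConfigureBJacobiLU(KSP ksp)
{
  PC             pc, subpc;
  KSP           *subksp;
  PetscInt       nlocal, first, i;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCBJACOBI);CHKERRQ(ierr);
  ierr = PCBJacobiSetTotalBlocks(pc, 2, NULL);CHKERRQ(ierr); /* NULL lengths: equal-sized blocks */
  ierr = KSPSetUp(ksp);CHKERRQ(ierr);                        /* sub-KSPs exist only after setup */
  ierr = PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp);CHKERRQ(ierr);
  for (i = 0; i < nlocal; i++) {
    ierr = KSPSetType(subksp[i], KSPPREONLY);CHKERRQ(ierr);  /* apply the factorization once */
    ierr = KSPGetPC(subksp[i], &subpc);CHKERRQ(ierr);
    ierr = PCSetType(subpc, PCLU);CHKERRQ(ierr);             /* direct LU on each block */
  }
  PetscFunctionReturn(0);
}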
Likewise, I tried a more generic set of options:
-mat_set_block_size 2 -ksp_type gmres -pc_type ilu -sub_ksp_type preonly -sub_pc_type lu -ksp_view
with the following output:
Linear fluid_complex_ solve converged due to CONVERGED_RTOL iterations 38
KSP Object:(fluid_complex_) 1 MPI processes
  type: gmres
    GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
    GMRES: happy breakdown tolerance 1e-30
  maximum iterations=10000, initial guess is zero
  tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object:(fluid_complex_) 1 MPI processes
  type: ilu
    ILU: out-of-place factorization
    0 levels of fill
    tolerance for zero pivot 2.22045e-14
    matrix ordering: natural
    factor fill ratio given 1., needed 1.
      Factored matrix follows:
        Mat Object: 1 MPI processes
          type: seqaij
          rows=73688, cols=73688, bs=2
          package used to perform factorization: petsc
          total: nonzeros=5224384, allocated nonzeros=5224384
          total number of mallocs used during MatSetValues calls =0
          using I-node routines: found 18422 nodes, limit used is 5
  linear system matrix = precond matrix:
  Mat Object: (fluid_complex_) 1 MPI processes
    type: seqaij
    rows=73688, cols=73688, bs=2
    total: nonzeros=5224384, allocated nonzeros=5224384
    total number of mallocs used during MatSetValues calls =0
    using I-node routines: found 18422 nodes, limit used is 5
Are other PC types also expected to operate on the matrix in terms of its 2x2 blocks?
I would appreciate any guidance.
Thanks,
Manav