[petsc-users] solving system with 2x2 block size
Barry Smith
bsmith at mcs.anl.gov
Tue Nov 15 15:12:49 CST 2016
We can help you if you provide more information about what the blocks represent in your problem.
Do you have two degrees of freedom at each grid point? What, physically, are the two degrees of freedom? What equations are you solving?
I think you may be mixing up the "matrix block size" of 2 with the blocks in "block Jacobi". Though both are called "blocks", they really have nothing to do with each other.
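Schematically (just a sketch, not complete code), the two kinds of "blocks" are set in completely different places:

    Mat A;
    KSP ksp;
    PC  pc;
    /* ... create the Mat and the KSP as usual ... */

    /* "matrix block size": metadata on the Mat saying the unknowns come in
       2x2 groups, e.g. 2 degrees of freedom per grid point */
    MatSetBlockSize(A,2);

    /* "block Jacobi" blocks: how many subdomain pieces the preconditioner
       cuts the matrix into, each solved with its own inner KSP/PC;
       this is what -pc_bjacobi_blocks 2 controls */
    KSPGetPC(ksp,&pc);
    PCSetType(pc,PCBJACOBI);
    PCBJacobiSetTotalBlocks(pc,2,NULL);

That is why your first ksp_view below shows two sub-solves of 36844 rows each: block Jacobi simply split the 73688 unknowns in half, which has nothing to do with the little 2x2 blocks.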
Barry
> On Nov 15, 2016, at 3:03 PM, Manav Bhatia <bhatiamanav at gmail.com> wrote:
>
> Hi,
>
> I am setting up a matrix with the following calls. The intent is to solve the system with a 2x2 block size.
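> (Simplified; error checking and preallocation omitted, but the essential calls are along these lines:)
>
>     Mat A;
>     MatCreate(PETSC_COMM_WORLD,&A);
>     MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,73688,73688);  /* sizes as in the ksp_view output below */
>     MatSetType(A,MATAIJ);
>     MatSetBlockSize(A,2);   /* tell PETSc the rows/columns come in 2x2 blocks */
>     MatSetUp(A);
>     /* ... MatSetValues(...), MatAssemblyBegin/End(...) ... */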
>
> What combinations of KSP/PC will effectively treat this as a block matrix system?
>
> I saw a discussion of bjacobi in the manual and tried the following options (I have omitted the prefixes from my actual command):
> -pc_type bjacobi -pc_bjacobi_blocks 2 -sub_ksp_type preonly -sub_pc_type lu -ksp_view
>
> which provides the following output:
> KSP Object:(fluid_complex_) 1 MPI processes
>   type: gmres
>     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>     GMRES: happy breakdown tolerance 1e-30
>   maximum iterations=10000, initial guess is zero
>   tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object:(fluid_complex_) 1 MPI processes
>   type: bjacobi
>     block Jacobi: number of blocks = 2
>     Local solve is same for all blocks, in the following KSP and PC objects:
>   KSP Object: (fluid_complex_sub_) 1 MPI processes
>     type: preonly
>     maximum iterations=10000, initial guess is zero
>     tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
>     left preconditioning
>     using NONE norm type for convergence test
>   PC Object: (fluid_complex_sub_) 1 MPI processes
>     type: lu
>       LU: out-of-place factorization
>       tolerance for zero pivot 2.22045e-14
>       matrix ordering: nd
>       factor fill ratio given 5., needed 5.70941
>         Factored matrix follows:
>           Mat Object: 1 MPI processes
>             type: seqaij
>             rows=36844, cols=36844
>             package used to perform factorization: petsc
>             total: nonzeros=14748816, allocated nonzeros=14748816
>             total number of mallocs used during MatSetValues calls =0
>               using I-node routines: found 9211 nodes, limit used is 5
>     linear system matrix = precond matrix:
>     Mat Object: (fluid_complex_) 1 MPI processes
>       type: seqaij
>       rows=36844, cols=36844
>       total: nonzeros=2583248, allocated nonzeros=2583248
>       total number of mallocs used during MatSetValues calls =0
>         using I-node routines: found 9211 nodes, limit used is 5
>   linear system matrix = precond matrix:
>   Mat Object: (fluid_complex_) 1 MPI processes
>     type: seqaij
>     rows=73688, cols=73688, bs=2
>     total: nonzeros=5224384, allocated nonzeros=5224384
>     total number of mallocs used during MatSetValues calls =0
>       using I-node routines: found 18422 nodes, limit used is 5
>
>
> Likewise, I tried a more generic set of options:
> -mat_set_block_size 2 -ksp_type gmres -pc_type ilu -sub_ksp_type preonly -sub_pc_type lu -ksp_view
>
> with the following output:
> Linear fluid_complex_ solve converged due to CONVERGED_RTOL iterations 38
> KSP Object:(fluid_complex_) 1 MPI processes
>   type: gmres
>     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>     GMRES: happy breakdown tolerance 1e-30
>   maximum iterations=10000, initial guess is zero
>   tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object:(fluid_complex_) 1 MPI processes
>   type: ilu
>     ILU: out-of-place factorization
>     0 levels of fill
>     tolerance for zero pivot 2.22045e-14
>     matrix ordering: natural
>     factor fill ratio given 1., needed 1.
>       Factored matrix follows:
>         Mat Object: 1 MPI processes
>           type: seqaij
>           rows=73688, cols=73688, bs=2
>           package used to perform factorization: petsc
>           total: nonzeros=5224384, allocated nonzeros=5224384
>           total number of mallocs used during MatSetValues calls =0
>             using I-node routines: found 18422 nodes, limit used is 5
>   linear system matrix = precond matrix:
>   Mat Object: (fluid_complex_) 1 MPI processes
>     type: seqaij
>     rows=73688, cols=73688, bs=2
>     total: nonzeros=5224384, allocated nonzeros=5224384
>     total number of mallocs used during MatSetValues calls =0
>       using I-node routines: found 18422 nodes, limit used is 5
>
> Are other PC types expected to take the block structure of the matrix into account?
>
> I would appreciate any guidance.
>
> Thanks,
> Manav
>