[petsc-users] fieldsplit_0_ monitor in combination with selfp
Klaij, Christiaan
C.Klaij at marin.nl
Thu Sep 4 07:26:32 CDT 2014
Sorry, here's the ksp_view. I'm expecting
-fieldsplit_1_inner_ksp_type preonly
to set the ksp(A00) used inside the Schur complement only, but it seems to set it for the inv(A00) solve of the diagonal block as well.
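(To state my reading of what these options build, please correct me if it is wrong: the preconditioner is the block lower factor [A00 0; A10 S], where the solve with S = A11 - A10 inv(A00) A01 is the fieldsplit_1_ KSP, preconditioned with the assembled Sp = A11 - A10 inv(diag(A00)) A01 because of selfp. So A00 enters in three places: the fieldsplit_0_ solve of the (0,0) block, the ksp(A00) applied inside S during the fieldsplit_1_ solve, and the diagonal of A00 used to assemble Sp.)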
Chris
0 KSP Residual norm 1.229687498638e+00
Residual norms for fieldsplit_1_ solve.
0 KSP Residual norm 7.185799114488e+01
1 KSP Residual norm 3.873274154012e+01
1 KSP Residual norm 1.107969383366e+00
KSP Object: 1 MPI processes
type: fgmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
right preconditioning
using UNPRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
type: fieldsplit
FieldSplit with Schur preconditioner, factorization LOWER
Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses (the lumped) A00's diagonal's inverse
Split info:
Split number 0 Defined by IS
Split number 1 Defined by IS
KSP solver for A00 block
KSP Object: (fieldsplit_0_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_) 1 MPI processes
type: bjacobi
block Jacobi: number of blocks = 1
Local solve is same for all blocks, in the following KSP and PC objects:
KSP Object: (fieldsplit_0_sub_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_sub_) 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=48, cols=48
package used to perform factorization: petsc
total: nonzeros=200, allocated nonzeros=200
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_) 1 MPI processes
type: seqaij
rows=48, cols=48
total: nonzeros=200, allocated nonzeros=240
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_) 1 MPI processes
type: mpiaij
rows=48, cols=48
total: nonzeros=200, allocated nonzeros=480
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
KSP solver for S = A11 - A10 inv(A00) A01
KSP Object: (fieldsplit_1_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: (fieldsplit_1_) 1 MPI processes
type: bjacobi
block Jacobi: number of blocks = 1
Local solve is same for all blocks, in the following KSP and PC objects:
KSP Object: (fieldsplit_1_sub_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_1_sub_) 1 MPI processes
type: bjacobi
block Jacobi: number of blocks = 1
Local solve is same for all blocks, in the following KSP and PC objects:
KSP Object: (fieldsplit_1_sub_sub_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_1_sub_sub_) 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=24, cols=24
package used to perform factorization: petsc
total: nonzeros=120, allocated nonzeros=120
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=24, cols=24
total: nonzeros=120, allocated nonzeros=120
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: mpiaij
rows=24, cols=24
total: nonzeros=120, allocated nonzeros=120
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
linear system matrix followed by preconditioner matrix:
Mat Object: (fieldsplit_1_) 1 MPI processes
type: schurcomplement
rows=24, cols=24
Schur complement A11 - A10 inv(A00) A01
A11
Mat Object: (fieldsplit_1_) 1 MPI processes
type: mpiaij
rows=24, cols=24
total: nonzeros=0, allocated nonzeros=0
total number of mallocs used during MatSetValues calls =0
using I-node (on process 0) routines: found 5 nodes, limit used is 5
A10
Mat Object: (a10_) 1 MPI processes
type: mpiaij
rows=24, cols=48
total: nonzeros=96, allocated nonzeros=96
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
KSP of A00
KSP Object: (fieldsplit_1_inner_) 1 MPI processes
type: preonly
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_1_inner_) 1 MPI processes
type: jacobi
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_) 1 MPI processes
type: mpiaij
rows=48, cols=48
total: nonzeros=200, allocated nonzeros=480
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
A01
Mat Object: (a01_) 1 MPI processes
type: mpiaij
rows=48, cols=24
total: nonzeros=96, allocated nonzeros=480
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
Mat Object: 1 MPI processes
type: mpiaij
rows=24, cols=24
total: nonzeros=120, allocated nonzeros=120
total number of mallocs used during MatSetValues calls =0
not using I-node (on process 0) routines
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: nest
rows=72, cols=72
Matrix object:
type=nest, rows=2, cols=2
MatNest structure:
(0,0) : prefix="fieldsplit_0_", type=mpiaij, rows=48, cols=48
(0,1) : prefix="a01_", type=mpiaij, rows=48, cols=24
(1,0) : prefix="a10_", type=mpiaij, rows=24, cols=48
(1,1) : prefix="fieldsplit_1_", type=mpiaij, rows=24, cols=24
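To see exactly which KSP the fieldsplit_1_inner_ prefix ends up controlling, I was thinking of pulling it out of the Schur complement matrix directly, something along these lines (my own sketch, not code from ex70.c; ksp is the outer solver after KSPSetUp, variable names are mine):

#include <petscksp.h>

/* print which KSP is applied as inv(A00) inside the Schur complement;
   call after KSPSetUp(ksp) so the fieldsplit is set up */
PetscErrorCode ShowInnerA00Solver(KSP ksp)
{
  PC             pc;
  PetscInt       n;
  KSP           *subksp, inner;
  Mat            S, Sp;
  const char    *prefix;
  KSPType        ktype;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCFieldSplitGetSubKSP(pc, &n, &subksp);CHKERRQ(ierr);   /* last entry is the Schur solve (fieldsplit_1_) */
  ierr = KSPGetOperators(subksp[n-1], &S, &Sp);CHKERRQ(ierr);    /* S has type MATSCHURCOMPLEMENT, Sp is the assembled selfp matrix */
  ierr = MatSchurComplementGetKSP(S, &inner);CHKERRQ(ierr);      /* the ksp(A00) applied inside S, i.e. fieldsplit_1_inner_ */
  ierr = KSPGetOptionsPrefix(inner, &prefix);CHKERRQ(ierr);
  ierr = KSPGetType(inner, &ktype);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "inner A00 solve: prefix %s, type %s\n", prefix, ktype);CHKERRQ(ierr);
  ierr = PetscFree(subksp);CHKERRQ(ierr);                        /* array from PCFieldSplitGetSubKSP must be freed by the caller */
  return 0;
}

Comparing what this prints against the fieldsplit_0_ part of the -ksp_view above should show whether the inner option really leaks into the diagonal-block solve or not.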
From: Matthew Knepley <knepley at gmail.com>
Sent: Thursday, September 04, 2014 2:20 PM
To: Klaij, Christiaan
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] fieldsplit_0_ monitor in combination with selfp
On Thu, Sep 4, 2014 at 7:06 AM, Klaij, Christiaan <C.Klaij at marin.nl> wrote:
I'm playing with the selfp option in fieldsplit using
snes/examples/tutorials/ex70.c. For example:
mpiexec -n 2 ./ex70 -nx 4 -ny 6 \
-ksp_type fgmres \
-pc_type fieldsplit \
-pc_fieldsplit_type schur \
-pc_fieldsplit_schur_fact_type lower \
-pc_fieldsplit_schur_precondition selfp \
-fieldsplit_1_inner_ksp_type preonly \
-fieldsplit_1_inner_pc_type jacobi \
-fieldsplit_0_ksp_monitor -fieldsplit_0_ksp_max_it 1 \
-fieldsplit_1_ksp_monitor -fieldsplit_1_ksp_max_it 1 \
-ksp_monitor -ksp_max_it 1
gives the following output
0 KSP Residual norm 1.229687498638e+00
Residual norms for fieldsplit_1_ solve.
0 KSP Residual norm 2.330138480101e+01
1 KSP Residual norm 1.609000846751e+01
1 KSP Residual norm 1.180287268335e+00
To my surprise I don't see anything for the fieldsplit_0_ solve. Why?
Always run with -ksp_view for any solver question.
Thanks,
Matt
Furthermore, if I understand correctly, the above should be exactly equivalent to
mpiexec -n 2 ./ex70 -nx 4 -ny 6 \
-ksp_type fgmres \
-pc_type fieldsplit \
-pc_fieldsplit_type schur \
-pc_fieldsplit_schur_fact_type lower \
-user_ksp \
-fieldsplit_0_ksp_monitor -fieldsplit_0_ksp_max_it 1 \
-fieldsplit_1_ksp_monitor -fieldsplit_1_ksp_max_it 1 \
-ksp_monitor -ksp_max_it 1
0 KSP Residual norm 1.229687498638e+00
Residual norms for fieldsplit_0_ solve.
0 KSP Residual norm 5.486639587672e-01
1 KSP Residual norm 6.348354253703e-02
Residual norms for fieldsplit_1_ solve.
0 KSP Residual norm 2.321938107977e+01
1 KSP Residual norm 1.605484031258e+01
1 KSP Residual norm 1.183225251166e+00
because -user_ksp replaces the Schur complement by the simple
approximation A11 - A10 inv(diag(A00)) A01. Besides the missing
fieldsplit_0_ part, the numbers are pretty close but not exactly
the same. Any explanation?
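For what it's worth, the assembled approximation I have in mind looks like this (a sketch with my own variable names, not the actual code in ex70.c or in the selfp implementation; the four blocks are assumed to be already assembled):

#include <petscmat.h>

/* build Sp = A11 - A10 * inv(diag(A00)) * A01 explicitly */
PetscErrorCode BuildSp(Mat A00, Mat A01, Mat A10, Mat A11, Mat *Sp)
{
  Mat            B, T;
  Vec            dinv;
  PetscErrorCode ierr;

  ierr = MatCreateVecs(A00, NULL, &dinv);CHKERRQ(ierr);
  ierr = MatGetDiagonal(A00, dinv);CHKERRQ(ierr);                                  /* dinv = diag(A00) */
  ierr = VecReciprocal(dinv);CHKERRQ(ierr);                                        /* dinv = 1./diag(A00) */
  ierr = MatDuplicate(A01, MAT_COPY_VALUES, &B);CHKERRQ(ierr);
  ierr = MatDiagonalScale(B, dinv, NULL);CHKERRQ(ierr);                            /* B = inv(diag(A00)) A01 (row scaling) */
  ierr = MatMatMult(A10, B, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &T);CHKERRQ(ierr);  /* T = A10 inv(diag(A00)) A01 */
  ierr = MatDuplicate(A11, MAT_COPY_VALUES, Sp);CHKERRQ(ierr);
  ierr = MatAXPY(*Sp, -1.0, T, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);           /* Sp = A11 - T */
  ierr = MatDestroy(&B);CHKERRQ(ierr);
  ierr = MatDestroy(&T);CHKERRQ(ierr);
  ierr = VecDestroy(&dinv);CHKERRQ(ierr);
  return 0;
}

Note that the ksp_view says Sp uses "(the lumped) A00's diagonal's inverse", so maybe selfp lumps A00 rather than taking the plain diagonal as in this sketch, and that already accounts for the small difference?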
Chris
dr. ir. Christiaan Klaij
CFD Researcher
Research & Development
E C.Klaij at marin.nl
T +31 317 49 33 44
MARIN
2, Haagsteeg, P.O. Box 28, 6700 AA Wageningen, The Netherlands
T +31 317 49 39 11, F +31 317 49 32 45, I www.marin.nl
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener