[petsc-users] Confusing Schur preconditioner behaviour
Cotter, Colin J
colin.cotter at imperial.ac.uk
Tue Mar 26 13:53:59 CDT 2019
Hi Dave,
Thanks for the tip - you were right, and this works better for higher resolutions now.
all the best
--Colin
________________________________
From: Dave May <dave.mayhem23 at gmail.com>
Sent: 19 March 2019 11:25:11
To: Cotter, Colin J
Cc: petsc-users at mcs.anl.gov
Subject: Re: [petsc-users] Confusing Schur preconditioner behaviour
Hi Colin,
On Tue, 19 Mar 2019 at 09:33, Cotter, Colin J <colin.cotter at imperial.ac.uk> wrote:
Hi Dave,
>If you are doing that, then you need to tell fieldsplit to use the Amat to define the splits, otherwise it will define the Schur complement as
>S = B22 - B21 inv(B11) B12
>preconditioned with B22, whereas what you want is
>S = A22 - A21 inv(A11) A12
>preconditioned with B22.
>If your operators are set up this way and you didn't indicate to use Amat to define S, this would definitely explain why preonly works but iterating on Schur does not.
Yes, thanks - this solves it! I need -pc_use_amat.
Okay great. But doesn't that option eradicate your custom Schur complement object which you inserted into the Bmat in the (2,2) slot?
I thought you would use the option
-pc_fieldsplit_diag_use_amat
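For reference, here is a rough sketch of how those options differ at the command line (generic fieldsplit settings, not necessarily your exact configuration):

  # Apply Amat wherever the operator is needed inside the factorization;
  # this is the blunt instrument, and it bypasses a custom operator placed
  # in the Bmat's (2,2) slot:
  -pc_type fieldsplit -pc_fieldsplit_type schur -pc_use_amat

  # Take only the diagonal (and, with the second flag, the off-diagonal)
  # blocks from Amat when defining S = A22 - A21 inv(A11) A12, while Bmat
  # still supplies the preconditioning operators:
  -pc_type fieldsplit -pc_fieldsplit_type schur -pc_fieldsplit_diag_use_amat -pc_fieldsplit_off_diag_use_amat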
In general for fieldsplit (Schur), I've found that the best way to manage user-defined Schur complement preconditioners is via PCFieldSplitSetSchurPre().
https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/PC/PCFieldSplitSetSchurPre.html#PCFieldSplitSetSchurPre
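In C that looks roughly like the following (a sketch, not your code; ksp and the user-assembled Sp are assumed to exist already, and the splits still need to be defined with PCFieldSplitSetIS()):

  #include <petscksp.h>

  /* Attach a user-assembled matrix Sp as the operator used to
     precondition the Schur complement in a fieldsplit PC. */
  PetscErrorCode AttachSchurPre(KSP ksp,Mat Sp)
  {
    PC             pc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCFIELDSPLIT);CHKERRQ(ierr);
    ierr = PCFieldSplitSetType(pc,PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
    /* The inner PC for the Schur split is now built from Sp, rather than
       from whatever sits in the Bmat's (2,2) slot. */
    ierr = PCFieldSplitSetSchurPre(pc,PC_FIELDSPLIT_SCHUR_PRE_USER,Sp);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }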
Also, for solver debugging purposes with fieldsplit and MatNest, I find it incredibly useful to attach textual names to all the matrices going into FieldSplit. You can use PetscObjectSetName() with each of your sub-matrices in the Amat and the Bmat, and any Schur complement operators. The textual names will be displayed in KSP view. In that way you have a better chance of understanding which operators are being used where. (Note that this trick is less useful when the Amat and Bmat are AIJ matrices.)
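For example (a sketch; Auu, Aup, Apu, App, and S stand for whatever Mats you assemble):

  ierr = PetscObjectSetName((PetscObject)Auu,"Auu");CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)Aup,"Aup");CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)Apu,"Apu");CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)App,"App");CHKERRQ(ierr);
  ierr = PetscObjectSetName((PetscObject)S,"S*");CHKERRQ(ierr);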
Below is an example KSPView associated with a 2x2 block system where I've attached the names Auu, Aup, Apu, App, and S* to the Amat sub-matrices and the Schur complement preconditioner.
PC Object:(dcy_) 1 MPI processes
  type: fieldsplit
    FieldSplit with Schur preconditioner, factorization FULL
    Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses (lumped, if requested) A00's diagonal's inverse
    Split info:
    Split number 0 Defined by IS
    Split number 1 Defined by IS
    KSP solver for A00 block
      KSP Object: (dcy_fieldsplit_u_) 1 MPI processes
        type: preonly
        maximum iterations=10000, initial guess is zero
        tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using NONE norm type for convergence test
      PC Object: (dcy_fieldsplit_u_) 1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 0., needed 0.
            Factored matrix follows:
              Mat Object: 1 MPI processes
                type: seqaij
                rows=85728, cols=85728
                package used to perform factorization: umfpack
                total: nonzeros=0, allocated nonzeros=0
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
                  UMFPACK run parameters:
                    Control[UMFPACK_PRL]: 1.
                    Control[UMFPACK_STRATEGY]: 0.
                    Control[UMFPACK_DENSE_COL]: 0.2
                    Control[UMFPACK_DENSE_ROW]: 0.2
                    Control[UMFPACK_AMD_DENSE]: 10.
                    Control[UMFPACK_BLOCK_SIZE]: 32.
                    Control[UMFPACK_FIXQ]: 0.
                    Control[UMFPACK_AGGRESSIVE]: 1.
                    Control[UMFPACK_PIVOT_TOLERANCE]: 0.1
                    Control[UMFPACK_SYM_PIVOT_TOLERANCE]: 0.001
                    Control[UMFPACK_SCALE]: 1.
                    Control[UMFPACK_ALLOC_INIT]: 0.7
                    Control[UMFPACK_DROPTOL]: 0.
                    Control[UMFPACK_IRSTEP]: 0.
                    Control[UMFPACK_ORDERING]: AMD (not using the PETSc ordering)
        linear system matrix = precond matrix:
        Mat Object: Auu (dcy_fieldsplit_u_) 1 MPI processes
          type: seqaij
          rows=85728, cols=85728
          total: nonzeros=1028736, allocated nonzeros=1028736
          total number of mallocs used during MatSetValues calls =0
            using I-node routines: found 21432 nodes, limit used is 5
    KSP solver for S = A11 - A10 inv(A00) A01
      KSP Object: (dcy_fieldsplit_p_) 1 MPI processes
        type: fgmres
          GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
          GMRES: happy breakdown tolerance 1e-30
        maximum iterations=300, initial guess is zero
        tolerances: relative=0.01, absolute=1e-50, divergence=10000.
        right preconditioning
        using UNPRECONDITIONED norm type for convergence test
      PC Object: (dcy_fieldsplit_p_) 1 MPI processes
        type: lu
          LU: out-of-place factorization
          tolerance for zero pivot 2.22045e-14
          matrix ordering: nd
          factor fill ratio given 0., needed 0.
            Factored matrix follows:
              Mat Object: 1 MPI processes
                type: seqaij
                rows=32542, cols=32542
                package used to perform factorization: umfpack
                total: nonzeros=0, allocated nonzeros=0
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
                  UMFPACK run parameters:
                    Control[UMFPACK_PRL]: 1.
                    Control[UMFPACK_STRATEGY]: 0.
                    Control[UMFPACK_DENSE_COL]: 0.2
                    Control[UMFPACK_DENSE_ROW]: 0.2
                    Control[UMFPACK_AMD_DENSE]: 10.
                    Control[UMFPACK_BLOCK_SIZE]: 32.
                    Control[UMFPACK_FIXQ]: 0.
                    Control[UMFPACK_AGGRESSIVE]: 1.
                    Control[UMFPACK_PIVOT_TOLERANCE]: 0.1
                    Control[UMFPACK_SYM_PIVOT_TOLERANCE]: 0.001
                    Control[UMFPACK_SCALE]: 1.
                    Control[UMFPACK_ALLOC_INIT]: 0.7
                    Control[UMFPACK_DROPTOL]: 0.
                    Control[UMFPACK_IRSTEP]: 0.
                    Control[UMFPACK_ORDERING]: AMD (not using the PETSc ordering)
        linear system matrix followed by preconditioner matrix:
        Mat Object: (dcy_fieldsplit_p_) 1 MPI processes
          type: schurcomplement
          rows=32542, cols=32542
            Schur complement A11 - A10 inv(A00) A01
            A11
              Mat Object: App (dcy_fieldsplit_p_) 1 MPI processes
                type: seqaij
                rows=32542, cols=32542
                total: nonzeros=548482, allocated nonzeros=548482
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            A10
              Mat Object: Apu 1 MPI processes
                type: seqaij
                rows=32542, cols=85728
                total: nonzeros=857280, allocated nonzeros=921990
                total number of mallocs used during MatSetValues calls =0
                  not using I-node routines
            KSP of A00
              KSP Object: (dcy_fieldsplit_u_) 1 MPI processes
                type: preonly
                maximum iterations=10000, initial guess is zero
                tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
                left preconditioning
                using NONE norm type for convergence test
              PC Object: (dcy_fieldsplit_u_) 1 MPI processes
                type: lu
                  LU: out-of-place factorization
                  tolerance for zero pivot 2.22045e-14
                  matrix ordering: nd
                  factor fill ratio given 0., needed 0.
                    Factored matrix follows:
                      Mat Object: 1 MPI processes
                        type: seqaij
                        rows=85728, cols=85728
                        package used to perform factorization: umfpack
                        total: nonzeros=0, allocated nonzeros=0
                        total number of mallocs used during MatSetValues calls =0
                          not using I-node routines
                          UMFPACK run parameters:
                            Control[UMFPACK_PRL]: 1.
                            Control[UMFPACK_STRATEGY]: 0.
                            Control[UMFPACK_DENSE_COL]: 0.2
                            Control[UMFPACK_DENSE_ROW]: 0.2
                            Control[UMFPACK_AMD_DENSE]: 10.
                            Control[UMFPACK_BLOCK_SIZE]: 32.
                            Control[UMFPACK_FIXQ]: 0.
                            Control[UMFPACK_AGGRESSIVE]: 1.
                            Control[UMFPACK_PIVOT_TOLERANCE]: 0.1
                            Control[UMFPACK_SYM_PIVOT_TOLERANCE]: 0.001
                            Control[UMFPACK_SCALE]: 1.
                            Control[UMFPACK_ALLOC_INIT]: 0.7
                            Control[UMFPACK_DROPTOL]: 0.
                            Control[UMFPACK_IRSTEP]: 0.
                            Control[UMFPACK_ORDERING]: AMD (not using the PETSc ordering)
                linear system matrix = precond matrix:
                Mat Object: Auu (dcy_fieldsplit_u_) 1 MPI processes
                  type: seqaij
                  rows=85728, cols=85728
                  total: nonzeros=1028736, allocated nonzeros=1028736
                  total number of mallocs used during MatSetValues calls =0
                    using I-node routines: found 21432 nodes, limit used is 5
            A01
              Mat Object: Aup 1 MPI processes
                type: seqaij
                rows=85728, cols=32542
                total: nonzeros=857280, allocated nonzeros=942634
                total number of mallocs used during MatSetValues calls =0
                  using I-node routines: found 21432 nodes, limit used is 5
        Mat Object: S* 1 MPI processes
          type: seqaij
          rows=32542, cols=32542
          total: nonzeros=548482, allocated nonzeros=548482
          total number of mallocs used during MatSetValues calls =0
            not using I-node routines
  linear system matrix = precond matrix:
  Mat Object: 1 MPI processes
    type: nest
    rows=118270, cols=118270
      has attached null space
    Matrix object:
      type=nest, rows=2, cols=2
      MatNest structure:
      (0,0) : name="Auu", prefix="dcy_fieldsplit_u_", type=seqaij, rows=85728, cols=85728
      (0,1) : name="Aup", type=seqaij, rows=85728, cols=32542
      (1,0) : name="Apu", type=seqaij, rows=32542, cols=85728
      (1,1) : name="App", prefix="dcy_fieldsplit_p_", type=seqaij, rows=32542, cols=32542
all the best
--Colin