[petsc-users] 2 level schur
Luc Berger-Vergiat
lb2653 at columbia.edu
Fri Mar 21 09:37:55 CDT 2014
Is there a way to know what the new numbering is?
I am assuming that in my example, since there are only two fields, the
numbers associated with them are 0 and 1, hence I tried:
-fieldsplit_0_fieldsplit_Field_2_fields 1
-fieldsplit_0_fieldsplit_Field_3_fields 0
which did not work. As mentioned earlier, the following did not work
either:
-fieldsplit_0_fieldsplit_Field_2_fields 3
-fieldsplit_0_fieldsplit_Field_3_fields 2
and, without much expectation, I also passed the following
-fieldsplit_0_fieldsplit_Field_2_fields Field_3
-fieldsplit_0_fieldsplit_Field_3_fields Field_2
to no avail.
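Looking at the manual page for PCFieldSplitSetFields, I wonder whether the
numbered form of the option is what applies at the inner level, i.e.
something like the following (just a guess on my part, untested, and
perhaps not applicable since -ksp_view reports the inner splits are
"Defined by IS"):
-fieldsplit_0_pc_fieldsplit_0_fields 1
-fieldsplit_0_pc_fieldsplit_1_fields 0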
By the way, I have attached the output from -ksp_view in case I am doing
something wrong.
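As a fallback, I might also try setting the inner splits from code rather
than the command line. Here is a rough sketch of what I have in mind,
where is2 and is3 are the index sets I already build for fields 2 and 3,
and the call sequence is my assumption (in particular I am not sure at
what point during setup the inner fieldsplit PC becomes available):

    PC       pc, innerpc;
    KSP     *subksp;
    PetscInt nsplits;
    /* after the outer PC is set up, grab the sub-KSPs of the outer split */
    KSPGetPC(ksp, &pc);
    PCSetUp(pc);
    PCFieldSplitGetSubKSP(pc, &nsplits, &subksp); /* subksp[0]: the (2,3) block */
    KSPGetPC(subksp[0], &innerpc);
    /* register the index sets on the inner fieldsplit in the order I want:
       the diagonal block (IS 3) first, so it becomes A00 of the inner Schur */
    PCFieldSplitSetIS(innerpc, "Field_3", is3);
    PCFieldSplitSetIS(innerpc, "Field_2", is2);
    PetscFree(subksp);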
Best,
Luc
On 03/20/2014 09:01 PM, Matthew Knepley wrote:
> On Thu, Mar 20, 2014 at 6:20 PM, Luc Berger-Vergiat
> <lb2653 at columbia.edu <mailto:lb2653 at columbia.edu>> wrote:
>
> Hi all,
> I am solving a four field problem using two Schur complements.
> Here are the arguments that I usually pass to PETSc to do it:
>
> -ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type schur
> -pc_fieldsplit_schur_factorization_type full
> -pc_fieldsplit_schur_precondition selfp
> -pc_fieldsplit_0_fields 2,3 -pc_fieldsplit_1_fields 0,1
> -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type
> fieldsplit -fieldsplit_0_pc_fieldsplit_type schur
> -fieldsplit_0_pc_fieldsplit_schur_factorization_type full
> -fieldsplit_0_pc_fieldsplit_schur_precondition selfp
> -fieldsplit_0_fieldsplit_Field_2_fields 2
> -fieldsplit_0_fieldsplit_Field_3_fields 3
> -fieldsplit_0_fieldsplit_Field_2_ksp_type preonly
> -fieldsplit_0_fieldsplit_Field_2_pc_type ilu
> -fieldsplit_0_fieldsplit_Field_3_ksp_type preonly
> -fieldsplit_0_fieldsplit_Field_3_pc_type jacobi
> -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type lu
> -malloc_log mlog -log_summary time.log
>
> One issue with this is that when I change
> -fieldsplit_0_fieldsplit_Field_2_fields 2 to
> -fieldsplit_0_fieldsplit_Field_2_fields 3, it has no effect, as if
> PETSc automatically assigns IS 2 to Field 2 even though that is not
> what I want.
> Is there a way to pass the arguments so that PETSc swaps the index
> sets of -fieldsplit_0_fieldsplit_Field_2 and
> -fieldsplit_0_fieldsplit_Field_3?
> This is crucial to me since I am using the selfp option and the
> matrix associated with IS 3 is diagonal. By assigning the fields
> correctly I can get an exact Schur preconditioner and hence very
> fast convergence. Right now my convergence is suboptimal because
> of this.
>
>
> I believe the inner Schur field statements should not be using the
> original numbering, but the inner numbering, after they have been
> reordered.
>
> Matt
>
> Thanks!
>
> Best,
> Luc
>
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
-------------- next part --------------
KSP Object: 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-08, absolute=1e-16, divergence=1e+16
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
type: fieldsplit
FieldSplit with Schur preconditioner, factorization FULL
Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses A00's diagonal's inverse
Split info:
Split number 0 Defined by IS
Split number 1 Defined by IS
KSP solver for A00 block
KSP Object: (fieldsplit_0_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_) 1 MPI processes
type: fieldsplit
FieldSplit with Schur preconditioner, factorization FULL
Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses A00's diagonal's inverse
Split info:
Split number 0 Defined by IS
Split number 1 Defined by IS
KSP solver for A00 block
KSP Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: jacobi
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: seqaij
rows=1600, cols=1600
total: nonzeros=25600, allocated nonzeros=25600
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
KSP solver for S = A11 - A10 inv(A00) A01
KSP Object: (fieldsplit_0_fieldsplit_Field_3_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_fieldsplit_Field_3_) 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=400, cols=400
package used to perform factorization: petsc
total: nonzeros=1600, allocated nonzeros=1600
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 100 nodes, limit used is 5
linear system matrix followed by preconditioner matrix:
Mat Object: (fieldsplit_0_fieldsplit_Field_3_) 1 MPI processes
type: schurcomplement
rows=400, cols=400
Schur complement A11 - A10 inv(A00) A01
A11
Mat Object: (fieldsplit_0_fieldsplit_Field_3_) 1 MPI processes
type: seqaij
rows=400, cols=400
total: nonzeros=1600, allocated nonzeros=1600
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 100 nodes, limit used is 5
A10
Mat Object: 1 MPI processes
type: seqaij
rows=400, cols=1600
total: nonzeros=6400, allocated nonzeros=6400
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 100 nodes, limit used is 5
KSP of A00
KSP Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: jacobi
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: seqaij
rows=1600, cols=1600
total: nonzeros=25600, allocated nonzeros=25600
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
A01
Mat Object: 1 MPI processes
type: seqaij
rows=1600, cols=400
total: nonzeros=6400, allocated nonzeros=6400
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
Mat Object: 1 MPI processes
type: seqaij
rows=400, cols=400
total: nonzeros=1600, allocated nonzeros=1600
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 100 nodes, limit used is 5
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_) 1 MPI processes
type: seqaij
rows=2000, cols=2000
total: nonzeros=40000, allocated nonzeros=40000
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
KSP solver for S = A11 - A10 inv(A00) A01
KSP Object: (fieldsplit_1_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_1_) 1 MPI processes
type: lu
LU: out-of-place factorization
tolerance for zero pivot 2.22045e-14
matrix ordering: nd
factor fill ratio given 5, needed 2.62994
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=330, cols=330
package used to perform factorization: petsc
total: nonzeros=20098, allocated nonzeros=20098
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 106 nodes, limit used is 5
linear system matrix followed by preconditioner matrix:
Mat Object: (fieldsplit_1_) 1 MPI processes
type: schurcomplement
rows=330, cols=330
Schur complement A11 - A10 inv(A00) A01
A11
Mat Object: (fieldsplit_1_) 1 MPI processes
type: seqaij
rows=330, cols=330
total: nonzeros=7642, allocated nonzeros=7642
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
A10
Mat Object: 1 MPI processes
type: seqaij
rows=330, cols=2000
total: nonzeros=22800, allocated nonzeros=22800
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
KSP of A00
KSP Object: (fieldsplit_0_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_) 1 MPI processes
type: fieldsplit
FieldSplit with Schur preconditioner, factorization FULL
Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses A00's diagonal's inverse
Split info:
Split number 0 Defined by IS
Split number 1 Defined by IS
KSP solver for A00 block
KSP Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: jacobi
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: seqaij
rows=1600, cols=1600
total: nonzeros=25600, allocated nonzeros=25600
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
KSP solver for S = A11 - A10 inv(A00) A01
KSP Object: (fieldsplit_0_fieldsplit_Field_3_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_fieldsplit_Field_3_) 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=400, cols=400
package used to perform factorization: petsc
total: nonzeros=1600, allocated nonzeros=1600
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 100 nodes, limit used is 5
linear system matrix followed by preconditioner matrix:
Mat Object: (fieldsplit_0_fieldsplit_Field_3_) 1 MPI processes
type: schurcomplement
rows=400, cols=400
Schur complement A11 - A10 inv(A00) A01
A11
Mat Object: (fieldsplit_0_fieldsplit_Field_3_) 1 MPI processes
type: seqaij
rows=400, cols=400
total: nonzeros=1600, allocated nonzeros=1600
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 100 nodes, limit used is 5
A10
Mat Object: 1 MPI processes
type: seqaij
rows=400, cols=1600
total: nonzeros=6400, allocated nonzeros=6400
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 100 nodes, limit used is 5
KSP of A00
KSP Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: jacobi
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_fieldsplit_Field_2_) 1 MPI processes
type: seqaij
rows=1600, cols=1600
total: nonzeros=25600, allocated nonzeros=25600
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
A01
Mat Object: 1 MPI processes
type: seqaij
rows=1600, cols=400
total: nonzeros=6400, allocated nonzeros=6400
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
Mat Object: 1 MPI processes
type: seqaij
rows=400, cols=400
total: nonzeros=1600, allocated nonzeros=1600
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 100 nodes, limit used is 5
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_) 1 MPI processes
type: seqaij
rows=2000, cols=2000
total: nonzeros=40000, allocated nonzeros=40000
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
A01
Mat Object: 1 MPI processes
type: seqaij
rows=2000, cols=330
total: nonzeros=22800, allocated nonzeros=22800
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
Mat Object: 1 MPI processes
type: seqaij
rows=330, cols=330
total: nonzeros=7642, allocated nonzeros=7642
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=2330, cols=2330
total: nonzeros=93242, allocated nonzeros=93242
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 521 nodes, limit used is 5