[petsc-users] fieldsplit doesn't pass prefix to inner ksp
Matthew Knepley
knepley at gmail.com
Fri Sep 26 08:52:06 CDT 2014
On Fri, Sep 26, 2014 at 7:29 AM, anton <popov at uni-mainz.de> wrote:
> Create preconditioner:
>
> PCCreate(PETSC_COMM_WORLD, &pc);
> PCSetOptionsPrefix(pc, "bf_");
> PCSetFromOptions(pc);
>
> Define fieldsplit options:
>
> -bf_pc_type fieldsplit
> -bf_pc_fieldsplit_type SCHUR
> -bf_pc_fieldsplit_schur_factorization_type UPPER
>
> Works OK.
>
> Set options for the first field solver:
>
> -bf_fieldsplit_0_ksp_type preonly
> -bf_fieldsplit_0_pc_type lu
>
> Doesn't work (ignored), because the "bf_" prefix isn't passed to the inner
> solver KSP (checked in the debugger).
>
> Indeed, the following works:
>
> -fieldsplit_0_ksp_type preonly
> -fieldsplit_0_pc_type lu
>
> Observed with 3.5 but not with 3.4.
>
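For reference, a minimal sketch of how the inner prefixes can be checked
programmatically rather than in a debugger (this is not code from the exchange
above; it assumes PETSc 3.5, an assembled block matrix A, and index sets
isu/isp defining the two splits, with error checking omitted):

#include <petscksp.h>

PetscErrorCode CheckSplitPrefixes(Mat A, IS isu, IS isp)
{
  PC          pc;
  KSP        *subksp;
  PetscInt    i, nsplits;
  const char *prefix;

  PCCreate(PETSC_COMM_WORLD, &pc);
  PCSetOperators(pc, A, A);
  PCSetOptionsPrefix(pc, "bf_");    /* picks up -bf_pc_type fieldsplit, ... */
  PCSetFromOptions(pc);
  PCFieldSplitSetIS(pc, "0", isu);  /* splits named "0" and "1", as in the options above */
  PCFieldSplitSetIS(pc, "1", isp);
  PCSetUp(pc);                      /* the inner split KSPs are created here */

  PCFieldSplitGetSubKSP(pc, &nsplits, &subksp);
  for (i = 0; i < nsplits; i++) {
    KSPGetOptionsPrefix(subksp[i], &prefix);
    PetscPrintf(PETSC_COMM_WORLD, "split %D prefix: %s\n", i, prefix);
  }
  PetscFree(subksp);
  PCDestroy(&pc);
  return 0;
}

If the reported behavior is present, the printed prefixes would come out as
"fieldsplit_0_" and "fieldsplit_1_" instead of the expected
"bf_fieldsplit_0_" and "bf_fieldsplit_1_".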
I just tried this with master on SNES ex19, and got the correct result:
knepley/feature-parallel-partition
*$:/PETSc3/petsc/petsc-dev/src/snes/examples/tutorials$ ./ex19 -bf_pc_type fieldsplit -bf_snes_view
lid velocity = 0.0625, prandtl # = 1, grashof # = 1
SNES Object:(bf_) 1 MPI processes
type: newtonls
maximum iterations=50, maximum function evaluations=10000
tolerances: relative=1e-08, absolute=1e-50, solution=1e-08
total number of linear solver iterations=13
total number of function evaluations=3
SNESLineSearch Object: (bf_) 1 MPI processes
type: bt
interpolation: cubic
alpha=1.000000e-04
maxstep=1.000000e+08, minlambda=1.000000e-12
tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
maximum iterations=40
KSP Object: (bf_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: (bf_) 1 MPI processes
type: fieldsplit
FieldSplit with MULTIPLICATIVE composition: total splits = 4
Solver info for each split is in the following KSP objects:
Split number 0 Defined by IS
KSP Object: (bf_fieldsplit_x_velocity_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (bf_fieldsplit_x_velocity_) 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=16, cols=16
package used to perform factorization: petsc
total: nonzeros=64, allocated nonzeros=64
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: (bf_fieldsplit_x_velocity_) 1 MPI processes
type: seqaij
rows=16, cols=16
total: nonzeros=64, allocated nonzeros=64
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Split number 1 Defined by IS
KSP Object: (bf_fieldsplit_y_velocity_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (bf_fieldsplit_y_velocity_) 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=16, cols=16
package used to perform factorization: petsc
total: nonzeros=64, allocated nonzeros=64
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: (bf_fieldsplit_y_velocity_) 1 MPI processes
type: seqaij
rows=16, cols=16
total: nonzeros=64, allocated nonzeros=64
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Split number 2 Defined by IS
KSP Object: (bf_fieldsplit_Omega_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (bf_fieldsplit_Omega_) 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=16, cols=16
package used to perform factorization: petsc
total: nonzeros=64, allocated nonzeros=64
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: (bf_fieldsplit_Omega_) 1 MPI processes
type: seqaij
rows=16, cols=16
total: nonzeros=64, allocated nonzeros=64
total number of mallocs used during MatSetValues calls =0
not using I-node routines
Split number 3 Defined by IS
KSP Object: (bf_fieldsplit_temperature_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (bf_fieldsplit_temperature_) 1 MPI processes
type: ilu
ILU: out-of-place factorization
0 levels of fill
tolerance for zero pivot 2.22045e-14
using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
matrix ordering: natural
factor fill ratio given 1, needed 1
Factored matrix follows:
Mat Object: 1 MPI processes
type: seqaij
rows=16, cols=16
package used to perform factorization: petsc
total: nonzeros=64, allocated nonzeros=64
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: (bf_fieldsplit_temperature_) 1 MPI processes
type: seqaij
rows=16, cols=16
total: nonzeros=64, allocated nonzeros=64
total number of mallocs used during MatSetValues calls =0
not using I-node routines
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=64, cols=64, bs=4
total: nonzeros=1024, allocated nonzeros=1024
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 16 nodes, limit used is 5
Number of SNES iterations = 2
I will try with 3.5.2.
Thanks,
Matt
> Thanks.
> Anton
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener