[petsc-users] Error while calling a fieldsplit preconditioner containing a mg/gamg preconditioner
Luc Berger-Vergiat
lb2653 at columbia.edu
Fri Nov 21 09:55:57 CST 2014
Hi all,
I am using a DMShell in order to use a fieldsplit preconditioner.
I would like to try some of PETSc's multigrid options: mg and gamg.
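For gamg, I believe the only change to the options below would be on the second split, i.e. replacing
-fieldsplit_1_pc_type mg
with
-fieldsplit_1_pc_type gamg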
When I call my preconditioner with the following options:
-ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type schur
-pc_fieldsplit_schur_factorization_type full
-pc_fieldsplit_schur_precondition selfp -pc_fieldsplit_0_fields 2,3
-pc_fieldsplit_1_fields 0,1 -fieldsplit_0_ksp_type preonly
-fieldsplit_0_pc_type hypre -fieldsplit_0_pc_hypre_type euclid
-fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type mg -malloc_log
mlog -log_summary time.log -ksp_view
it returns the following error message and ksp_view output:
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR:
[0]PETSC ERROR: Must call DMShellSetGlobalVector() or DMShellSetCreateGlobalVector()
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014
[0]PETSC ERROR: /home/luc/research/feap_repo/ShearBands/parfeap/feap on a arch-opt named euler by luc Fri Nov 21 10:12:53 2014
[0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-debugging=0 --with-shared-libraries=0 --download-fblaslapack --download-mpich --download-parmetis --download-metis --download-ml=yes --download-hypre --download-superlu_dist --download-mumps --download-scalapack
[0]PETSC ERROR: #259 DMCreateGlobalVector_Shell() line 245 in /home/luc/research/petsc-3.5.2/src/dm/impls/shell/dmshell.c
[0]PETSC ERROR: #260 DMCreateGlobalVector() line 681 in /home/luc/research/petsc-3.5.2/src/dm/interface/dm.c
[0]PETSC ERROR: #261 DMGetGlobalVector() line 154 in /home/luc/research/petsc-3.5.2/src/dm/interface/dmget.c
KSP Object: 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-08, absolute=1e-16, divergence=1e+16
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: 1 MPI processes
type: fieldsplit
FieldSplit with Schur preconditioner, factorization FULL
Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses (the lumped) A00's diagonal's inverse
Split info:
Split number 0 Defined by IS
Split number 1 Defined by IS
KSP solver for A00 block
KSP Object: (fieldsplit_0_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_) 1 MPI processes
type: hypre
HYPRE Euclid preconditioning
HYPRE Euclid: number of levels 1
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_) 1 MPI processes
type: seqaij
rows=2000, cols=2000
total: nonzeros=40000, allocated nonzeros=40000
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
KSP solver for S = A11 - A10 inv(A00) A01
KSP Object: (fieldsplit_1_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using PRECONDITIONED norm type for convergence test
PC Object: (fieldsplit_1_) 1 MPI processes
type: mg
MG: type is MULTIPLICATIVE, levels=1 cycles=v
Cycles per PCApply=1
Not using Galerkin computed coarse grid matrices
Coarse grid solver -- level -------------------------------
KSP Object: (fieldsplit_1_mg_levels_0_) 1 MPI processes
type: chebyshev
Chebyshev: eigenvalue estimates: min = 1.70057, max = 18.7063
Chebyshev: estimated using: [0 0.1; 0 1.1]
KSP Object: (fieldsplit_1_mg_levels_0_est_) 1 MPI processes
type: gmres
GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
GMRES: happy breakdown tolerance 1e-30
maximum iterations=10, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_1_mg_levels_0_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix followed by preconditioner matrix:
Mat Object: (fieldsplit_1_) 1 MPI processes
type: schurcomplement
rows=330, cols=330
Schur complement A11 - A10 inv(A00) A01
A11
Mat Object: (fieldsplit_1_) 1 MPI processes
type: seqaij
rows=330, cols=330
total: nonzeros=7642, allocated nonzeros=7642
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
A10
Mat Object: 1 MPI processes
type: seqaij
rows=330, cols=2000
total: nonzeros=22800, allocated nonzeros=22800
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
KSP of A00
KSP Object: (fieldsplit_0_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_) 1 MPI processes
type: hypre
HYPRE Euclid preconditioning
HYPRE Euclid: number of levels 1
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_) 1 MPI processes
type: seqaij
rows=2000, cols=2000
total: nonzeros=40000, allocated nonzeros=40000
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
A01
Mat Object: 1 MPI processes
type: seqaij
rows=2000, cols=330
total: nonzeros=22800, allocated nonzeros=22800
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
Mat Object: 1 MPI processes
type: seqaij
rows=330, cols=330
total: nonzeros=7642, allocated nonzeros=7642
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
maximum iterations=1, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_1_mg_levels_0_) 1 MPI processes
type: sor
SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
linear system matrix followed by preconditioner matrix:
Mat Object: (fieldsplit_1_) 1 MPI processes
type: schurcomplement
rows=330, cols=330
Schur complement A11 - A10 inv(A00) A01
A11
Mat Object: (fieldsplit_1_) 1 MPI processes
type: seqaij
rows=330, cols=330
total: nonzeros=7642, allocated nonzeros=7642
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
A10
Mat Object: 1 MPI processes
type: seqaij
rows=330, cols=2000
total: nonzeros=22800, allocated nonzeros=22800
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
KSP of A00
KSP Object: (fieldsplit_0_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_) 1 MPI processes
type: hypre
HYPRE Euclid preconditioning
HYPRE Euclid: number of levels 1
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_) 1 MPI processes
type: seqaij
rows=2000, cols=2000
total: nonzeros=40000, allocated nonzeros=40000
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
A01
Mat Object: 1 MPI processes
type: seqaij
rows=2000, cols=330
total: nonzeros=22800, allocated nonzeros=22800
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
Mat Object: 1 MPI processes
type: seqaij
rows=330, cols=330
total: nonzeros=7642, allocated nonzeros=7642
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
linear system matrix followed by preconditioner matrix:
Mat Object: (fieldsplit_1_) 1 MPI processes
type: schurcomplement
rows=330, cols=330
Schur complement A11 - A10 inv(A00) A01
A11
Mat Object: (fieldsplit_1_) 1 MPI processes
type: seqaij
rows=330, cols=330
total: nonzeros=7642, allocated nonzeros=7642
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
A10
Mat Object: 1 MPI processes
type: seqaij
rows=330, cols=2000
total: nonzeros=22800, allocated nonzeros=22800
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
KSP of A00
KSP Object: (fieldsplit_0_) 1 MPI processes
type: preonly
maximum iterations=10000, initial guess is zero
tolerances: relative=1e-05, absolute=1e-50, divergence=10000
left preconditioning
using NONE norm type for convergence test
PC Object: (fieldsplit_0_) 1 MPI processes
type: hypre
HYPRE Euclid preconditioning
HYPRE Euclid: number of levels 1
linear system matrix = precond matrix:
Mat Object: (fieldsplit_0_) 1 MPI processes
type: seqaij
rows=2000, cols=2000
total: nonzeros=40000, allocated nonzeros=40000
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
A01
Mat Object: 1 MPI processes
type: seqaij
rows=2000, cols=330
total: nonzeros=22800, allocated nonzeros=22800
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 400 nodes, limit used is 5
Mat Object: 1 MPI processes
type: seqaij
rows=330, cols=330
total: nonzeros=7642, allocated nonzeros=7642
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 121 nodes, limit used is 5
linear system matrix = precond matrix:
Mat Object: 1 MPI processes
type: seqaij
rows=2330, cols=2330
total: nonzeros=93242, allocated nonzeros=93242
total number of mallocs used during MatSetValues calls =0
using I-node routines: found 521 nodes, limit used is 5
I am not completely surprised by this, since the multigrid algorithm may need to know the structure of my vector in order to do a good job at restrictions, prolongations, etc. I am just not sure when I should call DMShellSetGlobalVector() or DMShellSetCreateGlobalVector().
If I call DMShellSetGlobalVector(), I assume that I need to generate my vector somehow beforehand, but I don't know what is required to make it a valid "template" vector.
If I call DMShellSetCreateGlobalVector(), I need to pass it a function that creates a vector, but what kind of vector? A residual or a "template"? This is not very clear to me.
--
Best,
Luc
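P.S. For concreteness, here is roughly what I imagine either call looking like. This is only a sketch: attachShellDM and createGlobalVec are names I made up, 2330 is just the size of the full system in the -ksp_view above, and I assume only one of the two DMShellSet* calls is actually needed.

#include <petscksp.h>

/* Callback sketch: hand back a vector laid out like the full solution
   vector (2330 rows in my case). createGlobalVec is my own name, not a
   PETSc routine. */
static PetscErrorCode createGlobalVec(DM dm, Vec *v)
{
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = VecCreateSeq(PETSC_COMM_SELF, 2330, v);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

/* Attach a DMShell to the outer KSP so DMGetGlobalVector() has something
   to work with; x is assumed to have the same layout as the solution. */
static PetscErrorCode attachShellDM(KSP ksp, Vec x)
{
  DM             shell;
  PetscErrorCode ierr;

  PetscFunctionBegin;
  ierr = DMShellCreate(PETSC_COMM_WORLD, &shell);CHKERRQ(ierr);
  /* either hand the DM one ready-made "template" vector it can duplicate... */
  ierr = DMShellSetGlobalVector(shell, x);CHKERRQ(ierr);
  /* ...or hand it a callback that creates such a vector on demand */
  ierr = DMShellSetCreateGlobalVector(shell, createGlobalVec);CHKERRQ(ierr);
  ierr = KSPSetDM(ksp, shell);CHKERRQ(ierr);
  ierr = KSPSetDMActive(ksp, PETSC_FALSE);CHKERRQ(ierr); /* DM supplies layout only, not the operator */
  ierr = DMDestroy(&shell);CHKERRQ(ierr);                /* KSP keeps its own reference */
  PetscFunctionReturn(0);
}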