<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Fri, Nov 21, 2014 at 9:55 AM, Luc Berger-Vergiat <span dir="ltr"><<a href="mailto:lb2653@columbia.edu" target="_blank">lb2653@columbia.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
> Hi all,
> I am using a DMShell so that I can use a fieldsplit preconditioner.
> I would like to try some of PETSc's multigrid options: mg and gamg.

Let's start with gamg, since that does not need anything else from the user. If you want
to use mg, then you will need to supply the interpolation/restriction operators, since we
cannot calculate them without an idea of the discretization.

  Thanks,

     Matt
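Concretely, gamg is just a matter of changing the option, e.g. -fieldsplit_1_pc_type gamg.
For mg, a minimal sketch of supplying the operators by hand might look like the following
(illustrative only: SetupMG and Interp are made-up names, two levels are assumed, and the
interpolation matrix itself has to come from your discretization; PETSc uses its transpose
for restriction unless you set one separately):

    #include <petscksp.h>

    /* Sketch: hand PCMG a user-built interpolation operator that maps
       coarse-space vectors to fine-space vectors. */
    PetscErrorCode SetupMG(KSP ksp, Mat Interp)
    {
      PC             pc;
      PetscErrorCode ierr;

      ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
      ierr = PCSetType(pc, PCMG);CHKERRQ(ierr);
      ierr = PCMGSetLevels(pc, 2, NULL);CHKERRQ(ierr);          /* level 0 = coarse, 1 = fine */
      ierr = PCMGSetInterpolation(pc, 1, Interp);CHKERRQ(ierr); /* one operator per fine level */
      return 0;
    }

With a DM that knows the discretization (a DMDA, say), PCMG can instead build these
itself via DMCreateInterpolation(), which is exactly what a plain DMShell cannot do.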
> When I call my preconditioner:
>
>     -ksp_type gmres -pc_type fieldsplit -pc_fieldsplit_type schur
>     -pc_fieldsplit_schur_factorization_type full
>     -pc_fieldsplit_schur_precondition selfp
>     -pc_fieldsplit_0_fields 2,3 -pc_fieldsplit_1_fields 0,1
>     -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type hypre
>     -fieldsplit_0_pc_hypre_type euclid
>     -fieldsplit_1_ksp_type gmres -fieldsplit_1_pc_type mg
>     -malloc_log mlog -log_summary time.log -ksp_view
> it returns me the following error message and ksp_view:
>
> [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
> [0]PETSC ERROR:
> [0]PETSC ERROR: Must call DMShellSetGlobalVector() or DMShellSetCreateGlobalVector()
> [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
> [0]PETSC ERROR: Petsc Release Version 3.5.2, Sep, 08, 2014
> [0]PETSC ERROR: /home/luc/research/feap_repo/ShearBands/parfeap/feap on a arch-opt named euler by luc Fri Nov 21 10:12:53 2014
> [0]PETSC ERROR: Configure options --with-cc=gcc --with-fc=gfortran --with-cxx=g++ --with-debugging=0 --with-shared-libraries=0 --download-fblaslapack --download-mpich --download-parmetis --download-metis --download-ml=yes --download-hypre --download-superlu_dist --download-mumps --download-scalapack
> [0]PETSC ERROR: #259 DMCreateGlobalVector_Shell() line 245 in /home/luc/research/petsc-3.5.2/src/dm/impls/shell/dmshell.c
> [0]PETSC ERROR: #260 DMCreateGlobalVector() line 681 in /home/luc/research/petsc-3.5.2/src/dm/interface/dm.c
> [0]PETSC ERROR: #261 DMGetGlobalVector() line 154 in /home/luc/research/petsc-3.5.2/src/dm/interface/dmget.c
> KSP Object: 1 MPI processes
>   type: gmres
>     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>     GMRES: happy breakdown tolerance 1e-30
>   maximum iterations=10000, initial guess is zero
>   tolerances: relative=1e-08, absolute=1e-16, divergence=1e+16
>   left preconditioning
>   using PRECONDITIONED norm type for convergence test
> PC Object: 1 MPI processes
>   type: fieldsplit
>     FieldSplit with Schur preconditioner, factorization FULL
>     Preconditioner for the Schur complement formed from Sp, an assembled approximation to S, which uses (the lumped) A00's diagonal's inverse
>     Split info:
>     Split number 0 Defined by IS
>     Split number 1 Defined by IS
>     KSP solver for A00 block
>       KSP Object: (fieldsplit_0_) 1 MPI processes
>         type: preonly
>         maximum iterations=10000, initial guess is zero
>         tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>         left preconditioning
>         using NONE norm type for convergence test
>       PC Object: (fieldsplit_0_) 1 MPI processes
>         type: hypre
>           HYPRE Euclid preconditioning
>           HYPRE Euclid: number of levels 1
>         linear system matrix = precond matrix:
>         Mat Object: (fieldsplit_0_) 1 MPI processes
>           type: seqaij
>           rows=2000, cols=2000
>           total: nonzeros=40000, allocated nonzeros=40000
>           total number of mallocs used during MatSetValues calls =0
>             using I-node routines: found 400 nodes, limit used is 5
>     KSP solver for S = A11 - A10 inv(A00) A01
>       KSP Object: (fieldsplit_1_) 1 MPI processes
>         type: gmres
>           GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>           GMRES: happy breakdown tolerance 1e-30
>         maximum iterations=10000, initial guess is zero
>         tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>         left preconditioning
>         using PRECONDITIONED norm type for convergence test
>       PC Object: (fieldsplit_1_) 1 MPI processes
>         type: mg
>           MG: type is MULTIPLICATIVE, levels=1 cycles=v
>             Cycles per PCApply=1
>             Not using Galerkin computed coarse grid matrices
>         Coarse grid solver -- level -------------------------------
>           KSP Object: (fieldsplit_1_mg_levels_0_) 1 MPI processes
>             type: chebyshev
>               Chebyshev: eigenvalue estimates: min = 1.70057, max = 18.7063
>               Chebyshev: estimated using: [0 0.1; 0 1.1]
>               KSP Object: (fieldsplit_1_mg_levels_0_est_) 1 MPI processes
>                 type: gmres
>                   GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
>                   GMRES: happy breakdown tolerance 1e-30
>                 maximum iterations=10, initial guess is zero
>                 tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>                 left preconditioning
>                 using NONE norm type for convergence test
>               PC Object: (fieldsplit_1_mg_levels_0_) 1 MPI processes
>                 type: sor
>                   SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
>                 linear system matrix followed by preconditioner matrix:
>                 Mat Object: (fieldsplit_1_) 1 MPI processes
>                   type: schurcomplement
>                   rows=330, cols=330
>                     Schur complement A11 - A10 inv(A00) A01
>                     A11
>                       Mat Object: (fieldsplit_1_) 1 MPI processes
>                         type: seqaij
>                         rows=330, cols=330
>                         total: nonzeros=7642, allocated nonzeros=7642
>                         total number of mallocs used during MatSetValues calls =0
>                           using I-node routines: found 121 nodes, limit used is 5
>                     A10
>                       Mat Object: 1 MPI processes
>                         type: seqaij
>                         rows=330, cols=2000
>                         total: nonzeros=22800, allocated nonzeros=22800
>                         total number of mallocs used during MatSetValues calls =0
>                           using I-node routines: found 121 nodes, limit used is 5
>                     KSP of A00
>                       KSP Object: (fieldsplit_0_) 1 MPI processes
>                         type: preonly
>                         maximum iterations=10000, initial guess is zero
>                         tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>                         left preconditioning
>                         using NONE norm type for convergence test
>                       PC Object: (fieldsplit_0_) 1 MPI processes
>                         type: hypre
>                           HYPRE Euclid preconditioning
>                           HYPRE Euclid: number of levels 1
>                         linear system matrix = precond matrix:
>                         Mat Object: (fieldsplit_0_) 1 MPI processes
>                           type: seqaij
>                           rows=2000, cols=2000
>                           total: nonzeros=40000, allocated nonzeros=40000
>                           total number of mallocs used during MatSetValues calls =0
>                             using I-node routines: found 400 nodes, limit used is 5
>                     A01
>                       Mat Object: 1 MPI processes
>                         type: seqaij
>                         rows=2000, cols=330
>                         total: nonzeros=22800, allocated nonzeros=22800
>                         total number of mallocs used during MatSetValues calls =0
>                           using I-node routines: found 400 nodes, limit used is 5
>                 Mat Object: 1 MPI processes
>                   type: seqaij
>                   rows=330, cols=330
>                   total: nonzeros=7642, allocated nonzeros=7642
>                   total number of mallocs used during MatSetValues calls =0
>                     using I-node routines: found 121 nodes, limit used is 5
>             maximum iterations=1, initial guess is zero
>             tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>             left preconditioning
>             using NONE norm type for convergence test
>           PC Object: (fieldsplit_1_mg_levels_0_) 1 MPI processes
>             type: sor
>               SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
>             linear system matrix followed by preconditioner matrix:
>             Mat Object: (fieldsplit_1_) 1 MPI processes
>               type: schurcomplement
>               rows=330, cols=330
>                 Schur complement A11 - A10 inv(A00) A01
>                 A11
>                   Mat Object: (fieldsplit_1_) 1 MPI processes
>                     type: seqaij
>                     rows=330, cols=330
>                     total: nonzeros=7642, allocated nonzeros=7642
>                     total number of mallocs used during MatSetValues calls =0
>                       using I-node routines: found 121 nodes, limit used is 5
>                 A10
>                   Mat Object: 1 MPI processes
>                     type: seqaij
>                     rows=330, cols=2000
>                     total: nonzeros=22800, allocated nonzeros=22800
>                     total number of mallocs used during MatSetValues calls =0
>                       using I-node routines: found 121 nodes, limit used is 5
>                 KSP of A00
>                   KSP Object: (fieldsplit_0_) 1 MPI processes
>                     type: preonly
>                     maximum iterations=10000, initial guess is zero
>                     tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>                     left preconditioning
>                     using NONE norm type for convergence test
>                   PC Object: (fieldsplit_0_) 1 MPI processes
>                     type: hypre
>                       HYPRE Euclid preconditioning
>                       HYPRE Euclid: number of levels 1
>                     linear system matrix = precond matrix:
>                     Mat Object: (fieldsplit_0_) 1 MPI processes
>                       type: seqaij
>                       rows=2000, cols=2000
>                       total: nonzeros=40000, allocated nonzeros=40000
>                       total number of mallocs used during MatSetValues calls =0
>                         using I-node routines: found 400 nodes, limit used is 5
>                 A01
>                   Mat Object: 1 MPI processes
>                     type: seqaij
>                     rows=2000, cols=330
>                     total: nonzeros=22800, allocated nonzeros=22800
>                     total number of mallocs used during MatSetValues calls =0
>                       using I-node routines: found 400 nodes, limit used is 5
>             Mat Object: 1 MPI processes
>               type: seqaij
>               rows=330, cols=330
>               total: nonzeros=7642, allocated nonzeros=7642
>               total number of mallocs used during MatSetValues calls =0
>                 using I-node routines: found 121 nodes, limit used is 5
>         linear system matrix followed by preconditioner matrix:
>         Mat Object: (fieldsplit_1_) 1 MPI processes
>           type: schurcomplement
>           rows=330, cols=330
>             Schur complement A11 - A10 inv(A00) A01
>             A11
>               Mat Object: (fieldsplit_1_) 1 MPI processes
>                 type: seqaij
>                 rows=330, cols=330
>                 total: nonzeros=7642, allocated nonzeros=7642
>                 total number of mallocs used during MatSetValues calls =0
>                   using I-node routines: found 121 nodes, limit used is 5
>             A10
>               Mat Object: 1 MPI processes
>                 type: seqaij
>                 rows=330, cols=2000
>                 total: nonzeros=22800, allocated nonzeros=22800
>                 total number of mallocs used during MatSetValues calls =0
>                   using I-node routines: found 121 nodes, limit used is 5
>             KSP of A00
>               KSP Object: (fieldsplit_0_) 1 MPI processes
>                 type: preonly
>                 maximum iterations=10000, initial guess is zero
>                 tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>                 left preconditioning
>                 using NONE norm type for convergence test
>               PC Object: (fieldsplit_0_) 1 MPI processes
>                 type: hypre
>                   HYPRE Euclid preconditioning
>                   HYPRE Euclid: number of levels 1
>                 linear system matrix = precond matrix:
>                 Mat Object: (fieldsplit_0_) 1 MPI processes
>                   type: seqaij
>                   rows=2000, cols=2000
>                   total: nonzeros=40000, allocated nonzeros=40000
>                   total number of mallocs used during MatSetValues calls =0
>                     using I-node routines: found 400 nodes, limit used is 5
>             A01
>               Mat Object: 1 MPI processes
>                 type: seqaij
>                 rows=2000, cols=330
>                 total: nonzeros=22800, allocated nonzeros=22800
>                 total number of mallocs used during MatSetValues calls =0
>                   using I-node routines: found 400 nodes, limit used is 5
>         Mat Object: 1 MPI processes
>           type: seqaij
>           rows=330, cols=330
>           total: nonzeros=7642, allocated nonzeros=7642
>           total number of mallocs used during MatSetValues calls =0
>             using I-node routines: found 121 nodes, limit used is 5
>   linear system matrix = precond matrix:
>   Mat Object: 1 MPI processes
>     type: seqaij
>     rows=2330, cols=2330
>     total: nonzeros=93242, allocated nonzeros=93242
>     total number of mallocs used during MatSetValues calls =0
>       using I-node routines: found 521 nodes, limit used is 5
>
> I am not completely surprised by this, since the multigrid algorithm
> might want to know the structure of my vector in order to do a good job
> at restriction, prolongation, etc. I am just not sure when I should
> call DMShellSetGlobalVector() or DMShellSetCreateGlobalVector().
> If I call DMShellSetGlobalVector(), I assume that I need to generate my
> vector somehow beforehand, but I don't know what is required to make it
> a valid "template" vector.
> If I call DMShellSetCreateGlobalVector(), I need to pass it a function
> that computes a vector, but what kind of vector? A residual or a
> "template"? This is not very clear to me...
<pre cols="72">--
Best,
Luc</pre>
</font></span></div>
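For what it is worth, a sketch of the first variant, assuming the assembled system
matrix A is at hand (AttachTemplateVector is a made-up name): the vector handed to
DMShellSetGlobalVector() is only a layout template, so any vector with the global
solution layout should do, e.g. one obtained from the matrix.

    #include <petscdmshell.h>

    /* Sketch: register a layout-template global vector with the DMShell.
       Its entries are never read; only its sizes/type matter. */
    PetscErrorCode AttachTemplateVector(DM dm, Mat A)
    {
      Vec            t;
      PetscErrorCode ierr;

      ierr = MatGetVecs(A, &t, NULL);CHKERRQ(ierr);  /* MatCreateVecs() in later PETSc */
      ierr = DMShellSetGlobalVector(dm, t);CHKERRQ(ierr);
      ierr = VecDestroy(&t);CHKERRQ(ierr);           /* the DM keeps its own reference */
      return 0;
    }

DMShellSetCreateGlobalVector() is the on-demand variant: the callback you pass must
create that same kind of template vector (not a residual; its values are never used).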
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener