On Fri, Nov 22, 2013 at 5:36 PM, Geoffrey Irving <irving@naml.us> wrote:
> I have a duplicate of SNES ex12 (FEM Poisson) which works with
> Dirichlet boundary conditions, but it breaks for me with Neumann
> conditions. In particular, with Neumann conditions I get results
> which explode even though I believe I am setting a constant null space.
>
> For example, if I use two first-order elements (the unit square
> divided into two triangles), the resulting solution has
>
>     L2 error = 1.75514e+08
>     u = [-175513825.75680602, -175513825.66302037,
>          -175513825.48390722, -175513824.84436429]
>
> This looks rather a lot like the null space isn't getting through. I
> am creating the constant null space with
>
>     MatNullSpace null;
>     CHECK(MatNullSpaceCreate(comm(),PETSC_TRUE,0,0,&null));
>     CHECK(MatSetNullSpace(m,null));
>     CHECK(MatNullSpaceDestroy(&null));
>
> If I pass "-ksp_view -mat_view", I get the following. The matrix
> entries seem right (they do indeed have the constant null space), and
> -ksp_view shows that a null space is attached. Is attaching the
> null space to the matrix with MatSetNullSpace enough, or do I need to
> additionally attach it to the KSP object?

1) I always run with -ksp_monitor_true_residual now when debugging. This can give
   you an idea whether you have a singular PC, which I suspect here.

2) Can you try using -pc_type jacobi? I think ILU might go crazy on a deficient matrix.

  Thanks,

     Matt
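
For concreteness, a rerun combining both suggestions might look like the line below; the executable name ./poisson_neumann is only a placeholder for Geoffrey's ex12 clone, and whatever mesh/discretization options it needs are omitted:

    ./poisson_neumann -pc_type jacobi -ksp_monitor_true_residual -ksp_view -mat_view

On the original question, here is a minimal sketch of one way to create the constant null space and attach it to both the matrix and the solver, using the API available in the PETSc releases of this era (KSPSetNullSpace was removed in later releases in favor of MatSetNullSpace alone). The names A and ksp are placeholders, PETSC_COMM_WORLD stands in for the application's communicator, and error checking is omitted:

    /* Assumes #include <petscksp.h>, an assembled Mat A, and a KSP ksp created elsewhere. */
    MatNullSpace nullsp;
    /* PETSC_TRUE: the null space contains the constant vector; no additional basis vectors. */
    MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nullsp);
    MatSetNullSpace(A, nullsp);    /* attach to the operator */
    KSPSetNullSpace(ksp, nullsp);  /* also attach to the KSP (3.x-era API) */
    MatNullSpaceDestroy(&nullsp);  /* the Mat and KSP keep their own references */
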
> Thanks,
> Geoffrey
>
> ---------------------------------------------
>
> Mat Object: 1 MPI processes
>   type: seqaij
> row 0: (0, 1) (1, -0.5) (2, -0.5)
> row 1: (0, -0.5) (1, 1) (2, 0) (3, -0.5)
> row 2: (0, -0.5) (1, 0) (2, 1) (3, -0.5)
> row 3: (1, -0.5) (2, -0.5) (3, 1)
> KSP Object: 1 MPI processes
>   type: gmres
>     GMRES: restart=30, using Classical (unmodified) Gram-Schmidt
>       Orthogonalization with no iterative refinement
>     GMRES: happy breakdown tolerance 1e-30
>   maximum iterations=10000, initial guess is zero
>   tolerances: relative=1e-05, absolute=1e-50, divergence=10000
>   left preconditioning
>   has attached null space
>   using PRECONDITIONED norm type for convergence test
> PC Object: 1 MPI processes
>   type: ilu
>     ILU: out-of-place factorization
>     0 levels of fill
>     tolerance for zero pivot 2.22045e-14
>     using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
>     matrix ordering: natural
>     factor fill ratio given 1, needed 1
>     Factored matrix follows:
>       Mat Object: 1 MPI processes
>         type: seqaij
>         rows=4, cols=4
>         package used to perform factorization: petsc
>         total: nonzeros=14, allocated nonzeros=14
>         total number of mallocs used during MatSetValues calls =0
>         using I-node routines: found 3 nodes, limit used is 5
>   linear system matrix = precond matrix:
>   Mat Object: 1 MPI processes
>     type: seqaij
>     rows=4, cols=4
>     total: nonzeros=14, allocated nonzeros=14
>     total number of mallocs used during MatSetValues calls =0
>     has attached null space
>     using I-node routines: found 3 nodes, limit used is 5

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener