<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 7, 2018 at 2:24 PM, Smith, Barry F. <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
A square matrix with a null space results in an underdetermined system, that is, a system with more than one solution. The solutions can be written as x + alpha_1 v_1 + ... + alpha_n v_n, where the v_i form an orthonormal basis for the null space and x is orthogonal to the null space.<br>
<br>
When you provide the null space, KSP Krylov methods find the norm-minimizing solution x, that is, the x with the smallest norm that satisfies the system. This is exactly the same as taking any solution of the system and removing all the components in the directions of the null space.<br>
<br>
If you do not provide the null space, the Krylov method may find you a solution that is not the norm-minimizing solution; that solution then has a component of the null space within it. Which component of the null space appears in the solution depends on what you use for the initial guess and right-hand side.<br></blockquote><div><br></div><div>Additionally, even if your initial guess is orthogonal to the null space, your solution can "float" away due to roundoff error. This is what you were seeing initially without the null space. As you saw, you can just project it out yourself, but as Barry said it is better to let KSP do it.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<br>
When you have a preconditioner, things can get trickier because the preconditioner can introduce (unless you remove them) components in the direction of the null space. These components can get amplified with each iteration of the Krylov method, so it looks like the Krylov method is not converging, since the norm of the solution is getting larger and larger (the growth is in the null-space components). This is why one should always provide the null space when solving singular systems.<br>
<span class="HOEnZb"><font color="#888888"><br>
Barry<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
> On Feb 7, 2018, at 11:43 AM, Marco Cisternino <<a href="mailto:marco.cisternino@optimad.it">marco.cisternino@optimad.it</a>> wrote:<br>
><br>
> Hi everybody,<br>
> I would like to ask what solution is computed if I try to solve the linear system for the problem in the subject line without creating the null space.<br>
> I tried with and without the call to<br>
> MatNullSpaceCreate(m_communicator, PETSC_TRUE, 0, NULL, &nullspace);<br>
> and I get a zero-averaged solution with it, and the same solution plus a constant without it.<br>
> How does PETSc work in the second case?<br>
> Does it check the matrix for singularity? And is it able to create the constant null space automatically?<br>
> Thanks.<br>
><br>
><br>
> Marco Cisternino, PhD<br>
> <a href="mailto:marco.cisternino@optimad.it">marco.cisternino@optimad.it</a><br>
> _______________________________<br>
> OPTIMAD Engineering srl<br>
> Via Giacinto Collegno 18, Torino, Italia.<br>
> <a href="tel:%2B3901119719782" value="+3901119719782">+3901119719782</a><br>
> <a href="http://www.optimad.it" rel="noreferrer" target="_blank">www.optimad.it</a><br>
><br>
<br>
</div></div></blockquote></div><br></div></div>