# [petsc-users] questions about the SNES Function Norm

Xiangdong epscodes at gmail.com
Sun May 4 15:32:55 CDT 2014

On Fri, May 2, 2014 at 4:10 PM, Matthew Knepley <knepley at gmail.com> wrote:

> On Fri, May 2, 2014 at 12:53 PM, Xiangdong <epscodes at gmail.com> wrote:
>>
>> On Thu, May 1, 2014 at 10:21 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>>>
>>> On May 1, 2014, at 9:12 PM, Xiangdong <epscodes at gmail.com> wrote:
>>>
>>> > I came up with a simple example to demonstrate this "eliminating row"
>>> behavior. It happens when the solution x to the linearized equation Ax=b is
>>> out of the bound set by SNESVISetVariableBounds();
>>> >
>>> > In the attached example, I use SNES to solve a simple function x-b=0.
>>> When you run it, it outputs the matrix as 25 rows, while the real Jacobian
>>> should be 5*5*2=50 rows. If one changes the lower bound in line 125 to
>>> -inf, it outputs 50 rows for the Jacobian. In the first case, the norms
>>> given by SNESGetFunctionNorm and SNESGetFunction+VecNorm are also different.
>>> >
>>> > In solving the nonlinear equations, it is likely that the solution of
>>> the linearized equation is out of bounds, but then we could reset the
>>> out-of-bound solution to the lower or upper bound instead of eliminating
>>> the variables (the rows). Any suggestions on doing this in petsc?
>>>
>>>    This is what PETSc is doing. It is using the "active set method".
>>> Variables that are at their bounds are "frozen" and then a smaller system
>>> is solved (involving just the variables not at their bounds) to get the
>>> next search direction. Based on the next search direction some of the
>>> variables on the bounds may be unfrozen and other variables may be frozen.
>>> There is a huge literature on this topic. See for example: Nocedal, Jorge;
>>> Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin,
>>> New York: Springer-Verlag. ISBN 978-0-387-30303-1.
>>>
>>>    The SNESGetFunctionNorm and SNESGetFunction+VecNorm  may return
>>> different values with the SNES VI solver. If you care about the function
>>> value just use SNESGetFunction() and compute the norm that way. We are
>>> eliminating SNESGetFunctionNorm() from PETSc because it is problematic.
>>>
>>>    If you think the SNES VI solver is actually not solving the problem,
>>> or is giving the wrong answer, then please send us the entire simple code
>>> and we'll see if we have introduced any bugs into our solver. But note
>>> that the linear systems being of different sizes is completely normal for
>>> the solver.
>>>
>>
>> Here is an example I do not quite understand. I have a simple function
>> F(X) = [x1+x2+100 ; x1-x2; x3+x4; x3-x4+50]. If I solve this F(X)=0 with no
>> constraint, the exact solution is [x1=-50; x2=-50; x3=-25; x4=25].
>>
>> If I specify the constraints as x2>=0 and x4>=0, I expect the solution
>> from one iteration of SNES to be [-50, 0, -25, 25], since the constraint
>> on x2 should now be active. However, PETSc outputs the solution [-50, 0,
>> 0, 0]. Since x3 and x4 do not violate the constraints, why does the
>> solution for x3 and x4 change (note that x3 and x4 are decoupled from x1
>> and x2)? In this case, the matrix obtained from KSPGetOperators is only
>> 2-by-2, so two variables or constraints are eliminated.
>>
>
> This just finds a local solution to the constrained problem, and these
> need not be unique.
>

This might be trivial, but could you please briefly explain how I can
reproduce, by hand calculation, the answer PETSc outputs for this simple
four-variable example? What I do not understand is when the constraints get
activated and the variables get eliminated (matrix reduced from 4-by-4 to
2-by-2).

For example, as I mentioned before, when I added x2>=0 and x4>=0 to the
unconstrained problem, why did two variables get eliminated (the matrix
from KSPGetOperators is 2-by-2)? In particular, the exact solution x4=25
does not violate the newly added constraint x4>=0, yet it still changed
(x4 is actually decoupled from x1 and x2; changes/constraints on x1 and x2
should not affect x4).

>
>     Matt
>
>
>> Another thing I noticed is that the constraints x2>-1e-7 and x4>-1e-7
>> give the solution [-50,-1e-7,-25,25]; however, the constraints x2>-1e-8
>> and x4>-1e-8 give the solution [-50,0,0,0].
>>
>
Is there a small constant in petsc that causes this jump in the solution
when I simply change the lower bound from -1e-7 to -1e-8?

Thanks for your time and help.

Best,
Xiangdong

> Attached please find the simple 130-line code showing this behavior.
>> Simply comment out line 37 to remove the constraints, and modify line
>> 92 to change the lower bounds of x2 and x4.
>>
>> Thanks a lot for your time and help.
>>
>> Best,
>> Xiangdong
>>
>>
>>
>>
>>>
>>>
>>>   Barry
>>>
>>>
>>> >
>>> > Thank you.
>>> >
>>> > Best,
>>> > Xiangdong
>>> >
>>> > P.S. If we change the lower bound of field u (line 124) to be zero,
>>> then the Jacobian matrix is set to be NULL by petsc.
>>> >
>>> >
>>> > On Thu, May 1, 2014 at 3:43 PM, Xiangdong <epscodes at gmail.com> wrote:
>>> > Here is the order of functions I called:
>>> >
>>> > DMDACreate3d();
>>> >
>>> > SNESCreate();
>>> >
>>> > SNESSetDM();  (DM with dof=2);
>>> >
>>> > DMSetApplicationContext();
>>> >
>>> > DMDASNESSetFunctionLocal();
>>> >
>>> > SNESVISetVariableBounds();
>>> >
>>> > DMDASNESSetJacobianLocal();
>>> >
>>> > SNESSetFromOptions();
>>> >
>>> > SNESSolve();
>>> >
>>> > SNESGetKSP();
>>> > KSPGetSolution();
>>> > KSPGetRhs();
>>> > KSPGetOperators();   //get operator kspA, kspx, kspb;
>>> >
>>> > SNESGetFunctionNorm();  ==> get norm fnorma;
>>> > SNESGetFunction(); VecNorm(); ==> get norm fnormb;
>>> > SNESComputeFunction(); VecNorm(); ==> function evaluation fx at the
>>> solution x and get norm fnormc;
>>> >
>>> > Inside the FormJacobianLocal(), I output the matrix jac and preB;
>>> >
>>> > I found that fnorma matches the default SNES monitor output "SNES
>>> Function norm", but fnormb=fnormc != fnorma. The solution x, the residual
>>> fx obtained by SNESComputeFunction, and the matrices jac and preB have
>>> length 50 or size 50-by-50, while kspA, kspx, and kspb are 25-by-25 or of
>>> length 25.
>>> >
>>> > I checked that kspA=jac(1:2:end,1:2:end), x(1:2:end)=kspA\kspb, and
>>> x(2:2:end)=0. It seems that the solver completely ignores the second
>>> degree of freedom (setting it to zero). I see this for a (close to)
>>> constant initial guess, while for a heterogeneous initial guess the
>>> matrix and vector sizes are correct and the solution is correct. So this
>>> row-eliminating behavior seems to be initial-guess dependent.
>>> >
>>> > I see this even if I use -snes_fd, so we can rule out the possibility
>>> of a wrong Jacobian. For FormFunctionLocal(), I checked via
>>> SNESComputeFunction and it outputs the correct residual vector.
>>> >
>>> > Are the orders of function calls correct?
>>> >
>>> > Thank you.
>>> >
>>> > Xiangdong
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> >
>>> > On Thu, May 1, 2014 at 1:58 PM, Barry Smith <bsmith at mcs.anl.gov>
>>> wrote:
>>> >
>>> > On May 1, 2014, at 10:32 AM, Xiangdong <epscodes at gmail.com> wrote:
>>> >
>>> > > Under what conditions will SNESGetFunctionNorm() output different
>>> results from SNESGetFunction + VecNorm (with NORM_2)?
>>> > >
>>> > > For most of my test cases, it is the same. However, when I have some
>>> special (trivial) initial guess to the SNES problem, I see different norms.
>>> >
>>> >    Please send more details on your “trivial” case where the values
>>> are different. It could be that we are not setting the function norm
>>> properly on early exit from the solvers.
>>> > >
>>> > > Another phenomenon I noticed is that the KSP inside SNES squeezes
>>> my matrix by eliminating rows. I have a Jacobian that is supposed to be
>>> 50-by-50. When I use KSPGetOperators/rhs/solution, I find that the
>>> operator is 25-by-25 and the rhs and solution have length 25. Do you have
>>> any clue what triggered this? To my surprise, when I output the Jacobian
>>> inside FormJacobianLocal, it outputs the correct 50-by-50 matrix with
>>> correct numerical entries. Why is the operator obtained from KSP
>>> different, with rows eliminated? The eliminated rows have only one entry
>>> per row, but the rhs in those rows is not zero. Eliminating these rows
>>> would give wrong solutions.
>>> >
>>> >    Hmm, we never squeeze out rows/columns from the Jacobian. The size
>>> of the Jacobian set with SNESSetJacobian() should always match that
>>> obtained with KSPGetOperators() on the linear system. Please send more
>>> details on how you get this. Are you calling KSPGetOperators() inside a
>>> preconditioner where the preconditioner has chopped up the operator?
>>> >
>>> >    Barry
>>> >
>>> > >
>>> > > Thank you.
>>> > >
>>> > > Xiangdong
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > > On Tue, Apr 29, 2014 at 3:12 PM, Matthew Knepley <knepley at gmail.com>
>>> wrote:
>>> > > On Tue, Apr 29, 2014 at 2:09 PM, Xiangdong <epscodes at gmail.com>
>>> wrote:
>>> > > It turns out to be a bug in my FormFunctionLocal(DMDALocalInfo
>>> *info,PetscScalar **x,PetscScalar **f,AppCtx *user). I forgot to
>>> initialize the array f. Zeroing the array f solved the problem and gave
>>> consistent results.
>>> > >
>>> > > Just curious, why doesn't petsc initialize the array f to zero by
>>> default before passing the f array to FormFunctionLocal?
>>> > >
>>> > > If you directly set entries, you might not want us to spend the time
>>> writing those zeros.
>>> > >
>>> > > I have another quick question about the array x passed to
>>> FormFunctionLocal. If I want to know which x is being evaluated, how can
>>> I output x in vector format? Currently, I create a global vector vecx and
>>> a local vector vecx_local, get the array vecx_local_array, copy x into
>>> vecx_local_array, scatter to the global vecx, and output vecx. Is there a
>>> quicker way to restore the array x to a vector and output it?
>>> > >
>>> > > I cannot think of a better way than that.
>>> > >
>>> > >    Matt
>>> > >
>>> > > Thank you.
>>> > >
>>> > > Best,
>>> > > Xiangdong
>>> > >
>>> > >
>>> > >
>>> > > On Mon, Apr 28, 2014 at 10:28 PM, Barry Smith <bsmith at mcs.anl.gov>
>>> wrote:
>>> > >
>>> > > On Apr 28, 2014, at 3:23 PM, Xiangdong <epscodes at gmail.com> wrote:
>>> > >
>>> > > > Hello everyone,
>>> > > >
>>> > > > When I run snes program,
>>> > >
>>> > >                ^^^^ what “SNES program”?
>>> > >
>>> > > > it outputs "SNES Function norm 1.23456789e+10". It seems that this
>>> norm is different from residue norm (even if solving F(x)=0)
>>> > >
>>> > >    Please send the full output where you see this.
>>> > >
>>> > > > and also differ from norm of the Jacobian. What is the definition
>>> of this "SNES Function Norm”?
>>> > >
>>> > >    The SNES Function Norm as printed by PETSc is supposed to be the
>>> 2-norm of F(x) - b (where b is usually zero), and this is also the same
>>> thing as the "residue norm".
>>> > >
>>> > >    Barry
>>> > >
>>> > > >
>>> > > > Thank you.
>>> > > >
>>> > > > Best,
>>> > > > Xiangdong
>>> > >
>>> > >
>>> > >
>>> > >
>>> > >
>>> > > --
>>> > > What most experimenters take for granted before they begin their
>>> > > experiments is infinitely more interesting than any results to which
>>> > > their experiments lead.
>>> > > -- Norbert Wiener
>>> > >
>>> >
>>> >
>>> >
>>> > <exdemo.c>
>>>
>>>
>>
>
>