[petsc-users] Redundant Jacobian evaluations in tao-brgn

Matt Landreman matt.landreman at gmail.com
Mon Dec 30 08:44:01 CST 2019


Hi, I have a question about the TAO BRGN algorithm for least-squares
optimization. It seems to be calling the user-supplied Jacobian routine
twice for each point in the parameter space, unnecessarily increasing the
amount of computation. This can be seen by inserting the lines

printf("EvaluateFunction called at x = %.15e, %.15e, %.15e\n", x[0], x[1], x[2]);

and

printf("EvaluateJacobian called at x = %.15e, %.15e, %.15e\n", x[0], x[1], x[2]);

into the PETSc example src/tao/leastsquares/examples/tutorials/chwirut1.c
at line numbers 128 and 150, respectively. Then running

./chwirut1 -tao_type brgn

the output is
EvaluateFunction called at x = 1.500000000000000e-01,
8.000000000000000e-03, 1.000000000000000e-02
EvaluateJacobian called at x = 1.500000000000000e-01,
8.000000000000000e-03, 1.000000000000000e-02
EvaluateJacobian called at x = 1.500000000000000e-01,
8.000000000000000e-03, 1.000000000000000e-02
EvaluateFunction called at x = 1.678609230300274e-01,
5.606199685573443e-03, 1.143975621188315e-02
EvaluateJacobian called at x = 1.678609230300274e-01,
5.606199685573443e-03, 1.143975621188315e-02
EvaluateJacobian called at x = 1.678609230300274e-01,
5.606199685573443e-03, 1.143975621188315e-02
EvaluateFunction called at x = 1.895677598936882e-01,
6.141611118905459e-03, 1.052387574144709e-02
EvaluateJacobian called at x = 1.895677598936882e-01,
6.141611118905459e-03, 1.052387574144709e-02
EvaluateJacobian called at x = 1.895677598936882e-01,
6.141611118905459e-03, 1.052387574144709e-02
EvaluateFunction called at x = 1.901553874099915e-01,
6.129649283393148e-03, 1.053528933460132e-02
EvaluateJacobian called at x = 1.901553874099915e-01,
6.129649283393148e-03, 1.053528933460132e-02
EvaluateJacobian called at x = 1.901553874099915e-01,
6.129649283393148e-03, 1.053528933460132e-02
EvaluateFunction called at x = 1.902758303510814e-01,
6.131413563754055e-03, 1.053093200323840e-02
EvaluateJacobian called at x = 1.902758303510814e-01,
6.131413563754055e-03, 1.053093200323840e-02
EvaluateJacobian called at x = 1.902758303510814e-01,
6.131413563754055e-03, 1.053093200323840e-02
EvaluateFunction called at x = 1.902779152522651e-01,
6.131396913966817e-03, 1.053091761136939e-02
EvaluateJacobian called at x = 1.902779152522651e-01,
6.131396913966817e-03, 1.053091761136939e-02
EvaluateJacobian called at x = 1.902779152522651e-01,
6.131396913966817e-03, 1.053091761136939e-02
EvaluateFunction called at x = 1.902781772105115e-01,
6.131400452825695e-03, 1.053090850459907e-02
EvaluateJacobian called at x = 1.902781772105115e-01,
6.131400452825695e-03, 1.053090850459907e-02
EvaluateJacobian called at x = 1.902781772105115e-01,
6.131400452825695e-03, 1.053090850459907e-02
EvaluateFunction called at x = 1.902781831060656e-01,
6.131400440501068e-03, 1.053090841847083e-02
EvaluateJacobian called at x = 1.902781831060656e-01,
6.131400440501068e-03, 1.053090841847083e-02
EvaluateJacobian called at x = 1.902781831060656e-01,
6.131400440501068e-03, 1.053090841847083e-02
EvaluateFunction called at x = 1.902781836794117e-01,
6.131400447676857e-03, 1.053090839927484e-02
EvaluateJacobian called at x = 1.902781836794117e-01,
6.131400447676857e-03, 1.053090839927484e-02
EvaluateJacobian called at x = 1.902781836794117e-01,
6.131400447676857e-03, 1.053090839927484e-02
EvaluateFunction called at x = 1.902781836950446e-01,
6.131400447696052e-03, 1.053090839897943e-02
EvaluateJacobian called at x = 1.902781836950446e-01,
6.131400447696052e-03, 1.053090839897943e-02
This shows that EvaluateJacobian is called twice at every point except the
last, i.e. every Jacobian is computed twice at the same x. Is there a way to
eliminate these redundant Jacobian evaluations?
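(Until the duplicate call is avoided inside TAO itself, one user-side workaround is to cache the Jacobian in the callback and skip the recomputation when the point has not changed since the previous call. The following is a minimal, self-contained C sketch of that idea only; the function names, the 3-parameter size, and the placeholder Jacobian formula are all illustrative and not PETSc API. In the real EvaluateJacobian callback the comparison would be against the entries of the Vec that TAO passes in.)

```c
/* Sketch: memoize an expensive Jacobian evaluation so that back-to-back
   calls at the identical point x do only one real computation. */
#include <string.h>

#define NPARAM 3  /* number of parameters, as in chwirut1.c */

static double cached_x[NPARAM];     /* point of the last real evaluation */
static double cached_J[NPARAM];     /* stand-in for the cached Jacobian data */
static int    cache_valid = 0;      /* 0 until the first real evaluation */
static int    expensive_evals = 0;  /* counts real (non-cached) evaluations */

/* Stand-in for the expensive Jacobian computation; in the real code this
   is the body of the user's EvaluateJacobian routine. */
static void compute_jacobian(const double *x, double *J)
{
    for (int i = 0; i < NPARAM; i++)
        J[i] = 2.0 * x[i];  /* placeholder model, not the chwirut1 Jacobian */
    expensive_evals++;
}

/* Wrapper: recompute only when x differs (bitwise) from the cached point.
   A bitwise compare is enough here because TAO hands back the identical
   vector on the redundant second call. */
void evaluate_jacobian_cached(const double *x, double *J)
{
    if (cache_valid && memcmp(x, cached_x, sizeof cached_x) == 0) {
        memcpy(J, cached_J, sizeof cached_J);  /* reuse cached result */
        return;
    }
    compute_jacobian(x, cached_J);
    memcpy(cached_x, x, sizeof cached_x);
    memcpy(J, cached_J, sizeof cached_J);
    cache_valid = 1;
}

int jacobian_eval_count(void) { return expensive_evals; }
```

With this wrapper, the second call at each point returns the cached result, so the count of real evaluations matches the count of distinct points.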

Thanks,
Matt Landreman


P.S. I'm using PETSc 3.12.2, and in case it's helpful, the output of
-tao_view is copied here:
Tao Object: 1 MPI processes
  type: brgn
    Tao Object: (tao_brgn_subsolver_) 1 MPI processes
      type: bnls
        Rejected BFGS updates: 0
        CG steps: 0
        Newton steps: 9
        BFGS steps: 0
        Scaled gradient steps: 0
        Gradient steps: 0
        KSP termination reasons:
          atol: 0
          rtol: 9
          ctol: 0
          negc: 0
          dtol: 0
          iter: 0
          othr: 0
      TaoLineSearch Object: (tao_brgn_subsolver_) 1 MPI processes
        type: more-thuente
        maximum function evaluations=30
        tolerances: ftol=0.0001, rtol=1e-10, gtol=0.9
        total number of function evaluations=0
        total number of gradient evaluations=0
        total number of function/gradient evaluations=1
        Termination reason: 1
      KSP Object: (tao_brgn_subsolver_) 1 MPI processes
        type: stcg
        maximum iterations=10000, initial guess is zero
        tolerances:  relative=1e-05, absolute=1e-50, divergence=10000.
        left preconditioning
        using UNPRECONDITIONED norm type for convergence test
      PC Object: (tao_brgn_subsolver_) 1 MPI processes
        type: lmvm
        Mat Object: (pc_lmvm_) 1 MPI processes
          type: lmvmbfgs
          rows=3, cols=3
            Scale type: diagonal
            Scale history: 1
            Scale params: alpha=1., beta=0.5, rho=1.
            Convex factors: phi=0., theta=0.125
            Max. storage: 5
            Used storage: 5
            Number of updates: 8
            Number of rejects: 0
            Number of resets: 0
            Mat Object: (J0_) 1 MPI processes
              type: lmvmdiagbrdn
              rows=3, cols=3
                Scale history: 1
                Scale params: alpha=1., beta=0.5, rho=1.
                Convex factor: theta=0.125
                Max. storage: 1
                Used storage: 1
                Number of updates: 8
                Number of rejects: 0
                Number of resets: 0
        linear system matrix = precond matrix:
        Mat Object: 1 MPI processes
          type: shell
          rows=3, cols=3
      total KSP iterations: 27
      Active Set subset type: subvec
      convergence tolerances: gatol=1e-08,       steptol=0.,       gttol=0.
      Residual in Function/Gradient:=4.30981e-07
      Objective value=1192.24
      total number of iterations=9,                              (max: 2000)
      total number of function evaluations=10,                   (max: 4000)
      total number of function/gradient evaluations=10,          (max: 4000)
      total number of Hessian evaluations=9
      total number of Jacobian evaluations=19
      Solution converged:        ||g(X)||/|f(X)| <= grtol
  convergence tolerances: gatol=1e-08,   steptol=0.,   gttol=0.
  Residual in Function/Gradient:=0.
  Objective value=0.
  total number of iterations=9,                          (max: 2000)
  total number of function evaluations=10,               (max: 4000)
  total number of function/gradient evaluations=10,      (max: 4000)
  total number of Hessian evaluations=9
  Solution converged:    ||g(X)||/|f(X)| <= grtol

-- 
=======================================
Dr. Matt Landreman
Associate Research Scientist
Institute for Research in Electronics & Applied Physics
University of Maryland
8223 Paint Branch Drive, College Park MD 20742, USA
(+1) 651-366-9306
mattland at umd.edu

