[petsc-users] [tao] General L1,L2 optimization

Isaac, Tobin G tisaac at cc.gatech.edu
Fri Jan 25 12:11:02 CST 2019


On Fri, Jan 25, 2019 at 04:29:11PM +0000, Dener, Alp via petsc-users wrote:
> Hi Hansol,
> 
> We don’t have a Levenberg-Marquardt method available, and if the PETSc/TAO manual says otherwise, that may be misleading. Let me know where you saw that and I can take a look and fix it.
> 
> In the meantime, if you want to solve a least-squares problem, the master branch of PETSc on Bitbucket has a bound-constrained regularized Gauss-Newton (TAOBRGN) method available. The only regularization available right now is an L2 proximal-point Tikhonov regularizer. There are ongoing efforts to support an L1 regularizer, and also the ability for users to define their own, but these have not made it into the master branch yet. We’re working on it, and it should be included in the next major PETSc release in the spring.
> 
> If you’d like to use that method, you need to set the Tao type to TAOBRGN and then go through the TaoSetResidualRoutine() and TaoSetJacobianResidualRoutine() interfaces to define your problem.
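> 
> A minimal sketch of that setup (a sketch only; EvaluateResidual and
> EvaluateJacobian stand in for hypothetical user callbacks, not PETSc
> routines):
> 
>   #include <petsctao.h>
> 
>   /* Hypothetical user callbacks that fill the residual F(x) and its Jacobian */
>   extern PetscErrorCode EvaluateResidual(Tao, Vec, Vec, void *);
>   extern PetscErrorCode EvaluateJacobian(Tao, Vec, Mat, Mat, void *);
> 
>   PetscErrorCode SolveWithBRGN(MPI_Comm comm, Vec x, Vec f, Mat J, void *ctx)
>   {
>     Tao            tao;
>     PetscErrorCode ierr;
> 
>     PetscFunctionBegin;
>     ierr = TaoCreate(comm, &tao);CHKERRQ(ierr);
>     ierr = TaoSetType(tao, TAOBRGN);CHKERRQ(ierr);     /* regularized Gauss-Newton */
>     ierr = TaoSetInitialVector(tao, x);CHKERRQ(ierr);  /* starting point */
>     ierr = TaoSetResidualRoutine(tao, f, EvaluateResidual, ctx);CHKERRQ(ierr);
>     ierr = TaoSetJacobianResidualRoutine(tao, J, J, EvaluateJacobian, ctx);CHKERRQ(ierr);
>     ierr = TaoSetFromOptions(tao);CHKERRQ(ierr);       /* picks up -tao_* options */
>     ierr = TaoSolve(tao);CHKERRQ(ierr);
>     ierr = TaoDestroy(&tao);CHKERRQ(ierr);
>     PetscFunctionReturn(0);
>   }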
> 
> In general, you can use other TAO algorithms (e.g., BNLS or BQNLS) with your own regularization term by embedding it into the objective, gradient, and Hessian (if applicable) evaluation callbacks. The caveat is that your regularizer needs to be C1 continuous for first-order methods and C2 continuous for second-order methods, which typically limits you to L2-norm regularizers. There is no support yet for L1-norm regularizers, but as I said, we’re working on that right now and it should be available in a couple of months.
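> 
> For instance, here is a minimal sketch of folding an L2 (Tikhonov) term
> into an objective/gradient callback (EvaluateMisfit is a hypothetical
> user routine and lambda an assumed weight):
> 
>   #include <petsctao.h>
> 
>   /* Hypothetical user routine that evaluates the smooth misfit and its gradient */
>   extern PetscErrorCode EvaluateMisfit(Tao, Vec, PetscReal *, Vec, void *);
> 
>   /* f(x) = misfit(x) + (lambda/2)||x||^2 remains C2 everywhere */
>   PetscErrorCode FormObjectiveAndGradient(Tao tao, Vec x, PetscReal *f, Vec g, void *ctx)
>   {
>     const PetscReal lambda = 1.0e-2;  /* assumed regularization weight */
>     PetscReal       xnorm;
>     PetscErrorCode  ierr;
> 
>     PetscFunctionBegin;
>     ierr = EvaluateMisfit(tao, x, f, g, ctx);CHKERRQ(ierr);
>     ierr = VecNorm(x, NORM_2, &xnorm);CHKERRQ(ierr);
>     *f  += 0.5 * lambda * xnorm * xnorm;         /* objective += (lambda/2)||x||^2 */
>     ierr = VecAXPY(g, lambda, x);CHKERRQ(ierr);  /* gradient  += lambda * x */
>     PetscFunctionReturn(0);
>   }
> 
>   /* Registered with, e.g.:
>      TaoSetObjectiveAndGradientRoutine(tao, FormObjectiveAndGradient, ctx); */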

Is there a feature branch yet where we could see what is being
prepared?

While I think it is a nice convenience for Tao to add l1
regularization on users' behalf, my two cents is that this should be
built on a deeper structure that supports separable objectives.

For example, an implementation of total variation regularization would
apply l1 to some operator applied to the control variables, and
instead of creating a complete TV implementation in Tao, a user should
be able to express "This part (misfit) is smooth" and "This part is
not, but it is convex".
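
In symbols (my sketch, not an existing Tao interface), that separable
structure is

  min_x  f(x) + g(Bx)

where f is the smooth misfit, g is convex but possibly nonsmooth (the
l1 norm for TV), and B is a linear operator (a discrete gradient for
TV).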

  Toby


> 
> Hope that helps,
> ——
> Alp Dener
> Argonne National Laboratory
> https://www.anl.gov/profile/alp-dener
> 
> 
> 
> On Jan 24, 2019, at 2:57 PM, David via petsc-users <petsc-users at mcs.anl.gov> wrote:
> 
> Hi. I was wondering whether there is any general consensus about the best
> currently implemented L1- or L2-norm regularization for PETSc/TAO.
> 
> Naively, I would shoot for Levenberg-Marquardt on some kind of random-matrix
> or generic finite-difference stencil problem, but it seems LM has not been
> implemented yet and only appears in the PETSc manual PDF?
> 
> Of the methods that are implemented, LMVM seems to work well, at least
> on my local machine.
> 
> In any case, I would greatly appreciate input and opinions on these
> matters.
> 
> 
> Thanks.
> 
> Hansol Suh,
> PhD Student
> Georgia Institute of Technology
> 
> 

