[petsc-users] [tao] General L1,L2 optimization
    David
    hsuh7 at gatech.edu
    Thu Jan 24 14:57:44 CST 2019

Hi. I was wondering whether there is any general consensus on the best
currently implemented approach to L1- or L2-norm regularization in PETSc/TAO.
Naively, I would reach for Levenberg-Marquardt for something like a random-matrix
or generic finite-difference stencil problem, but it seems LM has not been
implemented yet and only appears in the PETSc manual PDF.
Of the solvers that are implemented, LMVM seems to work well, at least
on my local machine.
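For concreteness, here is a minimal sketch of the kind of problem I have been
trying LMVM on: a toy L2 (Tikhonov) regularized least squares objective,
f(x) = 0.5*||x - d||^2 + 0.5*mu*||x||^2. The data vector d, its size, and the
weight mu are arbitrary placeholders, and I am going by the objective/gradient
callback interface described in the manual (TaoSetObjectiveAndGradientRoutine,
TaoSetInitialVector), so please correct me if this is not the intended usage.

/* Toy L2-regularized least squares solved with TAOLMVM.
 * Objective: f(x) = 0.5*||x - d||^2 + 0.5*mu*||x||^2
 * Gradient:  g(x) = (x - d) + mu*x
 * The data vector d and the weight mu are placeholders for illustration. */
#include <petsctao.h>

typedef struct {
  Vec       d;   /* "data" vector */
  PetscReal mu;  /* regularization weight */
} AppCtx;

static PetscErrorCode FormObjGrad(Tao tao, Vec x, PetscReal *f, Vec g, void *ptr)
{
  AppCtx        *ctx = (AppCtx*)ptr;
  PetscReal      misfit, reg;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = VecWAXPY(g, -1.0, ctx->d, x);CHKERRQ(ierr);     /* g = x - d        */
  ierr = VecNorm(g, NORM_2, &misfit);CHKERRQ(ierr);
  ierr = VecNorm(x, NORM_2, &reg);CHKERRQ(ierr);
  *f = 0.5*misfit*misfit + 0.5*ctx->mu*reg*reg;
  ierr = VecAXPY(g, ctx->mu, x);CHKERRQ(ierr);           /* g = (x-d) + mu*x */
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  Tao            tao;
  Vec            x;
  AppCtx         ctx;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
  ctx.mu = 0.1;                                          /* placeholder weight */
  ierr = VecCreateSeq(PETSC_COMM_SELF, 100, &x);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &ctx.d);CHKERRQ(ierr);
  ierr = VecSet(ctx.d, 1.0);CHKERRQ(ierr);               /* placeholder data   */
  ierr = VecSet(x, 0.0);CHKERRQ(ierr);                   /* initial guess      */

  ierr = TaoCreate(PETSC_COMM_SELF, &tao);CHKERRQ(ierr);
  ierr = TaoSetType(tao, TAOLMVM);CHKERRQ(ierr);
  ierr = TaoSetInitialVector(tao, x);CHKERRQ(ierr);
  ierr = TaoSetObjectiveAndGradientRoutine(tao, FormObjGrad, &ctx);CHKERRQ(ierr);
  ierr = TaoSetFromOptions(tao);CHKERRQ(ierr);
  ierr = TaoSolve(tao);CHKERRQ(ierr);

  ierr = TaoDestroy(&tao);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&ctx.d);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

With TaoSetFromOptions in place I switch solvers from the command line with
-tao_type and watch convergence with -tao_monitor, which is how I have been
comparing LMVM against the other implemented methods.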
In any case, I would highly appreciate any input and opinions on these matters.
Thanks.
Hansol Suh,
PhD Student
Georgia Institute of Technology
    
    