[petsc-users] L1 or projection type regularization with PETSc
Matthew Knepley
knepley at gmail.com
Thu Apr 7 14:24:17 CDT 2016
On Thu, Apr 7, 2016 at 2:13 PM, Lingyun Qiu <qiu.lingyun at gmail.com> wrote:
> Dear all,
>
> I am working on an optimization problem of the form
> min_x ||Ax - b||_2^2 + alpha ||x||_1
>
> For the fidelity term we use the L2 norm, and for the regularization
> term the L1 norm. Without the regularization term, i.e., alpha = 0, we
> solve the problem iteratively as
> x_k+1 = KSP(x_k).
>
> I plan to use the split Bregman method to solve the regularized problem.
> It reads as,
> y_k+1 = KSP(x_k)
> x_k+1 = B(y_k+1)
> Here B() is the function related to the Bregman method. It works as a
> post-processing of the iterates.
>
> I am wondering whether there is a way to combine this post-processing with
> the KSP solver. A brute-force way would be to modify the initial guess and
> set the maximum iteration count to 1.
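[Editor's note: the alternating scheme above can be sketched in plain numpy. This is a minimal illustration, not the poster's actual setup: the linear solve stands in for the KSP step, and B() is taken to be the usual split Bregman shrink (soft-thresholding plus Bregman update), which the original message leaves unspecified. All data here is synthetic.]

```python
import numpy as np

def shrink(v, kappa):
    # Soft-thresholding: the closed-form minimizer for the L1 piece
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]          # sparse ground truth
b = A @ x_true
alpha, lam = 0.1, 1.0                 # L1 weight and Bregman penalty (illustrative values)

x = np.zeros(10)
d = np.zeros(10)                      # auxiliary variable, d ~ x
bb = np.zeros(10)                     # Bregman variable
AtA, Atb = A.T @ A, A.T @ b
M = AtA + lam * np.eye(10)            # operator a KSP would invert each outer iteration

for _ in range(200):
    # y_{k+1} = KSP(x_k): linear solve (done directly here instead of iteratively)
    x = np.linalg.solve(M, Atb + lam * (d - bb))
    # x_{k+1} = B(y_{k+1}): post-process the iterate with shrink + Bregman update
    d = shrink(x + bb, alpha / lam)
    bb = bb + x - d
```

At convergence x and d agree, and d is exactly sparse thanks to the shrink; this is the sense in which B() acts as a post-processing of each KSP iterate.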
>
Are you asking for something like this:
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetPostSolve.html
Thanks,
Matt
> This is also related to the projection type regularization:
> min_{x in subspace G} ||Ax-b||^2_2
> The scheme is
> y_k+1 = KSP(x_k)
> x_k+1 = P_G(y_k+1)
> where P_G is the projection to subspace G.
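[Editor's note: the projection step P_G has the same post-processing shape. A minimal numpy sketch, assuming the subspace G is given by a (hypothetical) spanning set V: orthonormalize once with QR, then project each iterate as P_G y = Q Q^T y.]

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.standard_normal((10, 3))      # columns span G (hypothetical 3-dim subspace of R^10)
Q, _ = np.linalg.qr(V)                # orthonormal basis for G

def project_G(y):
    # Orthogonal projection onto span(Q): P_G y = Q Q^T y
    return Q @ (Q.T @ y)

y = rng.standard_normal(10)           # stands in for y_{k+1} = KSP(x_k)
x = project_G(y)                      # x_{k+1} = P_G(y_{k+1})
```

The projection is idempotent and leaves the residual y - x orthogonal to G, so applying it after each KSP solve keeps every iterate in the subspace.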
>
> Lingyun Qiu
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener