<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Tue, Jul 28, 2015 at 5:55 AM, Romain Jolivet <span dir="ltr"><<a href="mailto:jolivetinsar@gmail.com" target="_blank">jolivetinsar@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear PETSc developers and users,<br>
<br>
I am trying to solve a linear least-squares problem that requires some regularization. My plan is to use a Tikhonov scheme as a first try.<br>
Hence, I would like to minimise the function S(x) defined as:<br>
<br>
S(x) = ||Ax - b||^2 + alpha ||Cx||^2<br>
<br>
where A is my operator (a rectangular matrix with far more rows than columns), x is the vector of unknowns, b is some data vector, alpha is a damping factor and C is some operator of my own.<br>
Ideally, C is a Gaussian form of a covariance matrix, but at first I would like to check what happens with an identity (good), gradient (better) or Laplacian (best) operator.<br>
There is a mention of a MatShift function in a previous post on this mailing list concerning Tikhonov regularization, but I don’t see how to incorporate that into the Python set of tools that I have set up to solve my problem (would there be a useful example somewhere?).<br></blockquote><div><br></div><div>It would be nice to have an example for this, but I have not made it there yet. For now, it looks like</div><div>what you want is</div><div><br></div><div>  A.axpy(alpha, C)</div><div><br></div><div>where A is the Mat that you are currently using in your linear solve; this adds alpha*C to A (in the special case where C is the identity, A.shift(alpha) does the same thing). Does this make sense?</div><div><br></div><div>  Thanks,</div><div><br></div><div>     Matt</div><div> </div>
I feel like there are some possibilities using the TAO wrappers in petsc4py, but I have no idea how to get them to work (how do I create the TAO_objective, TAO_gradient, etc.?).<br>
<br>
To get a better understanding of my problem, my ultimate goal is to minimise the function S(x) defined as<br>
<br>
S(x) = 1/2 * [(Ax-b)' inv(D) (Ax-b) + (x-x0)' inv(M) (x-x0)],<br>
<br>
where D is some covariance matrix describing the noise distribution on b, M is a covariance matrix describing the correlations between the parameters of x, and x0 is the starting point.<br>
But that seems a bit complex for now, so I will stick to the first problem.<br>
<br>
Cheers,<br>
Romain<br>
<br>
<br>
<br>
—————————————————————————————————————<br>
—————————————————————————————————————<br>
Romain Jolivet<br>
Postdoctoral Fellow<br>
<br>
University of Cambridge<br>
Department of Earth Sciences<br>
Bullard Labs<br>
Madingley Rise<br>
Madingley Road<br>
Cambridge CB3 0EZ<br>
United Kingdom<br>
<br>
email: <a href="mailto:rpj29@cam.ac.uk">rpj29@cam.ac.uk</a><br>
Phone: <a href="tel:%2B44%201223%20748%20938" value="+441223748938">+44 1223 748 938</a><br>
Mobile: <a href="tel:%2B44%207596%20703%20148" value="+447596703148">+44 7596 703 148</a><br>
<br>
France: <a href="tel:%2B33%206%2052%2091%2076%2039" value="+33652917639">+33 6 52 91 76 39</a><br>
US: <a href="tel:%2B1%20%28626%29%20560%206356" value="+16265606356">+1 (626) 560 6356</a><br>
—————————————————————————————————————<br>
—————————————————————————————————————<br>
<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div>
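To make the first objective in the thread concrete: minimising S(x) = ||Ax - b||^2 + alpha ||Cx||^2 leads to the normal equations (A' A + alpha C' C) x = A' b. Below is a minimal NumPy sketch of that reduction; the problem sizes, the random test matrix, and the identity choice for C are all illustrative assumptions, not anything from the thread. The same assembled matrix and right-hand side could equally be handed to a petsc4py KSP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tall test problem: far more rows than columns, as in the post.
m, n = 50, 10
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.1 * rng.standard_normal(m)

alpha = 1e-2
C = np.eye(n)  # identity regularizer; a gradient or Laplacian stencil would go here

# Minimising ||Ax - b||^2 + alpha ||Cx||^2 gives the normal equations
# (A' A + alpha C' C) x = A' b.
x = np.linalg.solve(A.T @ A + alpha * (C.T @ C), A.T @ b)
```

With a small alpha and a well-conditioned A, the solution stays close to the unregularized least-squares fit; increasing alpha pulls x toward the null space of C.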
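The fuller covariance-weighted objective from the post reduces the same way: setting the gradient of S(x) = 1/2 * [(Ax-b)' inv(D) (Ax-b) + (x-x0)' inv(M) (x-x0)] to zero gives (A' inv(D) A + inv(M)) x = A' inv(D) b + inv(M) x0. A sketch of that algebra, where the diagonal covariances D and M are illustrative placeholders (in practice they would come from the data noise and model statistics):

```python
import numpy as np

rng = np.random.default_rng(1)

m, n = 40, 8
A = rng.standard_normal((m, n))
x0 = np.zeros(n)  # prior / starting model
x_true = rng.standard_normal(n)
b = A @ x_true + 0.05 * rng.standard_normal(m)

# Assumed diagonal covariances for illustration only.
D = 0.05**2 * np.eye(m)  # data covariance (noise level on b)
M = 10.0 * np.eye(n)     # model covariance (a weak prior)

Dinv = np.linalg.inv(D)
Minv = np.linalg.inv(M)

# Stationarity of S(x) gives the weighted normal equations:
# (A' inv(D) A + inv(M)) x = A' inv(D) b + inv(M) x0.
lhs = A.T @ Dinv @ A + Minv
rhs = A.T @ Dinv @ b + Minv @ x0
x = np.linalg.solve(lhs, rhs)
```

The identity-C Tikhonov problem above is the special case D = I, M = (1/alpha) I, x0 = 0.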
</div></div>