[petsc-users] Getting access to matrix rows and setting values simultaneously

Matthew Knepley knepley at gmail.com
Thu Jun 16 10:34:03 CDT 2011


On Thu, Jun 16, 2011 at 3:16 PM, Alexander Grayver
<agrayver at gfz-potsdam.de> wrote:

>  Jed, thanks a lot for paper, it seems to be very good!
>
> You're right, there are ways to avoid calculating the whole A, but there
> are real advantages to having A computed that you don't get without it (at
> least with the approaches I know so far). I know several of the techniques
> from the paper you cited; maybe they are better (I personally believe they
> are not), but in any case I'm now at a point where no serious changes are
> reasonable.
> That is right, F is nonlinear. The adjoint in my case costs the same as the
> forward problem. I can afford A for my problems. The only thing that really
> has to be avoided is the explicit assembly of A'*A in the Gauss-Newton
> system. That I want to do using a matrix-free formulation and one of the
> iterative solvers from PETSc. Would that be possible? If not, then I really
> should think about a different approach, maybe the one presented in the
> paper you cited.
>

Yes, you can apply A'*A matrix-free. You could just make a MatShell whose
multiply calls MatMult() and MatMultTranspose() on A.
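Here is a minimal, untested sketch of what I mean (the NormalCtx struct and
its fields are just illustrative names; it holds your assembled Jacobian A
and a scratch vector of length M):

  #include <petscksp.h>

  typedef struct {
    Mat A;      /* the assembled M x N Jacobian            */
    Vec work;   /* scratch vector of length M (rows of A)  */
  } NormalCtx;

  /* y = A' * (A * x), applied without ever forming A'*A */
  static PetscErrorCode MatMult_Normal(Mat H, Vec x, Vec y)
  {
    NormalCtx      *ctx;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = MatShellGetContext(H, (void **)&ctx);CHKERRQ(ierr);
    ierr = MatMult(ctx->A, x, ctx->work);CHKERRQ(ierr);          /* work = A x    */
    ierr = MatMultTranspose(ctx->A, ctx->work, y);CHKERRQ(ierr); /* y = A' work   */
    PetscFunctionReturn(0);
  }

Create the shell with MatCreateShell(PETSC_COMM_WORLD, n, n, N, N, &ctx, &H),
where n is the local column size of A, register the multiply with
MatShellSetOperation(H, MATOP_MULT, (void (*)(void))MatMult_Normal), hand H
to a KSP (CG is the natural choice since A'*A is symmetric positive
semidefinite), form the right hand side with MatMultTranspose(A, r, rhs), and
call KSPSolve().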

For modifying the matrix: MatGetRow(), IF you are using an AIJ matrix,
returns a pointer directly to the stored values, so you could alter them in
place. This is not portable, and might break later, but at least you could
run now.
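For illustration only, a sketch of that trick (assuming A is an assembled
SeqAIJ matrix, row is a locally owned row index, and the scaling by 2.0 is
just an example edit; for MPIAIJ the returned values are generally a copy, so
the change would not stick):

  const PetscInt    *cols;
  const PetscScalar *vals;
  PetscInt          ncols, j;
  PetscErrorCode    ierr;

  ierr = MatGetRow(A, row, &ncols, &cols, &vals);CHKERRQ(ierr);
  /* For SeqAIJ the returned pointer aliases the internal storage, so
     casting away const edits the stored entries in place.  Not portable! */
  for (j = 0; j < ncols; j++) ((PetscScalar *)vals)[j] *= 2.0;
  ierr = MatRestoreRow(A, row, &ncols, &cols, &vals);CHKERRQ(ierr);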

  Thanks,

     Matt


> Regards,
> Alexander
>
> On 16.06.2011 16:38, Jed Brown wrote:
>
> On Thu, Jun 16, 2011 at 15:43, Alexander Grayver <agrayver at gfz-potsdam.de> wrote:
>
>> Let's say we have a discretized model of some physical parameter m (say
>> acoustic velocity). The number of model parameters is N. We need to take M
>> measurements d within the model (say the travel time of an acoustic wave)
>> based on M different sets of receiver/source positions.
>> In our case N >> M (e.g. 10^7 >> 10^3).
>> We have an operator F (which might be linear or not) that defines the
>> relationship between m and d:
>> d = F(m)
>>
>> The operator F is actually just a system of equations. I have no problem
>> solving it for any m.
>>
>> What I need now is to compute the variation of this operator:
>>
>> A_{ij} = \frac{\partial d_i}{\partial m_j}, for all i = 1..M, j = 1..N
>>
>> After some maths this calculation could be reformulated as a triple
>> product:
>>
>> A_i = C*F(m)^-1*v
>>
>> where C is some sparse matrix.
>>
>> Once A is computed I want to solve another problem:
>>
>> (A'*A) b = A'*r
>>
>> which is now a Gauss-Newton system:
>> A'*A is the truncated Hessian,
>> A'*r is the gradient,
>> r = d - d_obs, where d_obs is the real observed data,
>> b is the model change which has to be applied to the original model m in
>> order to explain the observed data better.
>>
>
> Excellent, great description.
>
>  It looks to me like you are assembling A because you want to be able to
> multiply by A' (which you can't do with finite differencing). But there is a
> very efficient way to apply A' that does not require assembling it:
> automatic differentiation. The adjoint costs about three times as much as a
> single forward solve. Depending on the nature of the nonlinearities in your
> problem, it may also be reasonably easy to write the adjoint without using
> AD.
>
>  You shouldn't need to assemble any of these matrices to solve the
> optimization/inverse problem. Assembling the matrices seriously limits
> scalability and there are optimal algorithms that don't use them. Depending
> on the problem, you may need a preconditioner for the reduced Hessian, but
> that can also be done without assembling it.
>
>  I imagine that F() is nonlinear. The adjoint of the nonlinear solve is
> the adjoint of the Jacobian at the converged state.
>
>  For more specifics on algorithms and pointers to the literature, you
> could look at this paper and some of the more specific applications it
> cites.
>
>  http://users.ices.utexas.edu/~omar/papers/siam-par-pde-opt.pdf
>
>
>


-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener
