[petsc-users] Getting access to matrix rows without setting values simultaneously

Jed Brown jed at 59A2.org
Thu Jun 16 09:38:58 CDT 2011


On Thu, Jun 16, 2011 at 15:43, Alexander Grayver <agrayver at gfz-potsdam.de> wrote:

> Let's say we have a discretized model of some physical parameter m (say
> acoustic velocity). The number of model parameters is N. We need to take M
> measurements d within the model (say the travel time of an acoustic wave)
> based on M different sets of receiver/source positions.
> In our case N >> M (e.g. 10^7 >> 10^3).
> We have an operator F (which might be linear or not) that defines the
> relationship between m and d:
> d = F(m)
>
> The operator F is actually just a system of equations. I have no problem
> solving it for any m now.
>
> What I need now is to compute the variation of this operator:
>
> A_{ij} = \frac{\partial d_i}{\partial m_j}, for all i=1..M, j=1..N
>
> After some maths this calculation could be reformulated as a triple
> product:
>
> A_i = C*F(m)^-1*v
>
> where C is some sparse matrix.
>
> Once A is computed, I want to solve another problem:
>
> (A'*A)b=A'*r
>
> This is now a Gauss-Newton system, where
> A'*A is the truncated Hessian,
> A'*r is the gradient,
> r = d - d_obs, where d_obs is the real observed data, and
> b is the model change which has to be applied to the original model m in
> order to explain the observed data better.
>

Excellent, great description.
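
To spell out the linearization behind that system (using only the notation
already introduced above, and up to the sign convention chosen for r): the
Gauss-Newton step b is the least-squares solution of the linearized residual
equation,

  \min_b \, \| A b - r \|_2^2
  \quad \Longrightarrow \quad
  (A^T A)\, b = A^T r ,

which is exactly the (A'*A)b = A'*r written above.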

It looks to me like you are assembling A because you want to be able to
multiply by A' (which you can't do with finite differencing). But there is a
very efficient way to apply A' that does not require assembling it:
automatic differentiation. The adjoint costs about three times as much as a
single forward solve. Depending on the nature of the nonlinearities in your
problem, it may also be reasonably easy to write the adjoint without using
AD.
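
As a purely illustrative sketch (not something from this thread), the
unassembled A and A' can be presented to PETSc as a MATSHELL. JacCtx,
CreateJacobianShell, ApplyJacobian and ApplyJacobianTranspose below are
hypothetical names standing in for your own context and routines (one
linearized forward solve plus the observation operator C, and the
corresponding adjoint solve); error checking is omitted for brevity:

  #include <petscksp.h>

  /* Hypothetical application context: whatever is needed to apply A and A'
     (e.g. the converged state, a KSP for the forward operator, the sparse C). */
  typedef struct {
    void *appdata;
  } JacCtx;

  /* Hypothetical user routines, provided by the application. */
  extern PetscErrorCode ApplyJacobian(JacCtx *ctx, Vec x, Vec y);
  extern PetscErrorCode ApplyJacobianTranspose(JacCtx *ctx, Vec x, Vec y);

  /* y = A x : roughly, one linearized forward solve followed by C */
  static PetscErrorCode MatMult_Jac(Mat A, Vec x, Vec y)
  {
    JacCtx *ctx;
    MatShellGetContext(A, (void **)&ctx);
    ApplyJacobian(ctx, x, y);
    return 0;
  }

  /* y = A' x : roughly, C' followed by one adjoint (transpose) solve */
  static PetscErrorCode MatMultTranspose_Jac(Mat A, Vec x, Vec y)
  {
    JacCtx *ctx;
    MatShellGetContext(A, (void **)&ctx);
    ApplyJacobianTranspose(ctx, x, y);
    return 0;
  }

  /* Create an M-by-N shell that applies A and A' without assembling them. */
  PetscErrorCode CreateJacobianShell(MPI_Comm comm, JacCtx *ctx,
                                     PetscInt mloc, PetscInt nloc,
                                     PetscInt M, PetscInt N, Mat *A)
  {
    MatCreateShell(comm, mloc, nloc, M, N, ctx, A);
    MatShellSetOperation(*A, MATOP_MULT, (void (*)(void))MatMult_Jac);
    MatShellSetOperation(*A, MATOP_MULT_TRANSPOSE,
                         (void (*)(void))MatMultTranspose_Jac);
    return 0;
  }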

You shouldn't need to assemble any of these matrices to solve the
optimization/inverse problem. Assembling the matrices seriously limits
scalability and there are optimal algorithms that don't use them. Depending
on the problem, you may need a preconditioner for the reduced Hessian, but
that can also be done without assembling it.
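
Along the same lines, here is a hedged sketch of solving (A'*A) b = A'*r
with CG without ever forming A'*A. GNCtx, MatMult_GN and SolveGaussNewton
are my names, not PETSc's; the PETSc calls use the current calling
sequences; and the unpreconditioned solve is only a starting point, since a
preconditioner for the reduced Hessian may be needed as mentioned above:

  #include <petscksp.h>

  typedef struct {
    Mat A;      /* the (shell) Jacobian, M by N */
    Vec work;   /* work vector of length M      */
  } GNCtx;

  /* y = (A' A) x : one forward and one transpose application of A */
  static PetscErrorCode MatMult_GN(Mat H, Vec x, Vec y)
  {
    GNCtx *ctx;
    MatShellGetContext(H, (void **)&ctx);
    MatMult(ctx->A, x, ctx->work);            /* work = A x     */
    MatMultTranspose(ctx->A, ctx->work, y);   /* y    = A' work */
    return 0;
  }

  /* Solve (A' A) b = A' r with CG; r has length M, b has length N.
     Error checking omitted for brevity. */
  PetscErrorCode SolveGaussNewton(Mat A, Vec r, Vec b)
  {
    GNCtx    ctx;
    Mat      H;
    Vec      rhs;
    KSP      ksp;
    MPI_Comm comm;
    PetscInt nloc, N;

    PetscObjectGetComm((PetscObject)A, &comm);
    MatGetLocalSize(A, NULL, &nloc);          /* local column size of A */
    MatGetSize(A, NULL, &N);                  /* global column size of A */

    ctx.A = A;
    VecDuplicate(r, &ctx.work);               /* length M */
    VecDuplicate(b, &rhs);                    /* length N */
    MatMultTranspose(A, r, rhs);              /* rhs = A' r */

    MatCreateShell(comm, nloc, nloc, N, N, &ctx, &H);
    MatShellSetOperation(H, MATOP_MULT, (void (*)(void))MatMult_GN);

    KSPCreate(comm, &ksp);
    KSPSetOperators(ksp, H, H);   /* older PETSc adds a MatStructure flag here */
    KSPSetType(ksp, KSPCG);
    KSPSetFromOptions(ksp);
    KSPSolve(ksp, rhs, b);

    KSPDestroy(&ksp);
    MatDestroy(&H);
    VecDestroy(&rhs);
    VecDestroy(&ctx.work);
    return 0;
  }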

I imagine that F() is nonlinear. The adjoint of the nonlinear solve is the
adjoint of the Jacobian at the converged state.
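
Spelled out with illustrative notation (u, G and Q are mine, not from the
thread): if the nonlinear forward problem is G(u, m) = 0 with converged
state u, and the data are extracted as d = Q(u), then

  A = \frac{\partial d}{\partial m}
    = - \frac{\partial Q}{\partial u}
        \left( \frac{\partial G}{\partial u} \right)^{-1}
        \frac{\partial G}{\partial m} ,

so applying A' to a vector costs one linear solve with
(\partial G / \partial u)^T, the transpose of the Jacobian of the forward
problem evaluated at the converged state.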

For more specifics on algorithms and pointers to the literature, you could
look at this paper and some of the more specific applications it cites.

http://users.ices.utexas.edu/~omar/papers/siam-par-pde-opt.pdf

