[petsc-users] Using TS
Emil Constantinescu
emconsta at mcs.anl.gov
Tue Mar 15 10:12:22 CDT 2016
On 3/15/16 6:33 AM, Max la Cour Christensen wrote:
>
> Barry, Matt, Emil, thanks for the explanations! They are very useful.
>
> *Emil:* The application is the flow of oil and gas in the subsurface.
> The code uses a model of the rock in the subsurface and given a setup of
> wells, it computes a prediction for the production of oil and gas. It
> can also be used to model CO2 storage in the subsurface.
>
> I think we will attempt to switch to TS. We are working with different
> formulations of the equations. In the following, I have summarised how I
> think IFunction and IJacobian should be implemented for the formulation
> of most interest to us. If you can bear with me and confirm this,
> that would be great. The equations are:
>
> dm_o/dt = f_o(m,p)
> dm_g/dt = f_g(m,p)
> S_o(m,p) + S_g(m,p) = 1,
>
> where
>
> m_o is the mass of oil
> m_g is the mass of gas
> t is time,
> f_x(m,p) contains fluxes for mass transfer between adjacent cells and
> source terms for wells
> S_o is the oil saturation
> S_g is the gas saturation
> p is the pressure.
>
> In this formulation, the primary variables are m_o, m_g and p.
So then in PETSc, the u variable will be u=[m_o,m_g,p]^T and
u_t=[dm_o/dt, dm_g/dt, dp/dt]^T. See the RoberFunction in
http://www.mcs.anl.gov/petsc/petsc-current/src/ts/examples/tutorials/ex8.c.html
where you can think of
[m_o,m_g,p] = [x[0], x[1], x[2]] and
[dm_o/dt, dm_g/dt, dp/dt] = [xdot[0], xdot[1], xdot[2]].
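Just to make the mapping concrete, here is a rough IFunction sketch in C
for your system (f_o, f_g, S_o, S_g below are placeholder names standing
in for whatever your Fortran code evaluates; this is only an illustration,
not one of the PETSc examples):

#include <petscts.h>

/* Placeholders for your own evaluation routines (hypothetical names;
   in practice these would call into your Fortran code). */
extern PetscScalar f_o(PetscScalar m_o, PetscScalar m_g, PetscScalar p);
extern PetscScalar f_g(PetscScalar m_o, PetscScalar m_g, PetscScalar p);
extern PetscScalar S_o(PetscScalar m_o, PetscScalar m_g, PetscScalar p);
extern PetscScalar S_g(PetscScalar m_o, PetscScalar m_g, PetscScalar p);

/* Residual F = [ f_o(m,p) - dm_o/dt ; f_g(m,p) - dm_g/dt ; S_o + S_g - 1 ] */
static PetscErrorCode IFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ctx)
{
  const PetscScalar *u, *udot;
  PetscScalar       *f;
  PetscErrorCode    ierr;

  PetscFunctionBeginUser;
  ierr = VecGetArrayRead(U, &u);CHKERRQ(ierr);
  ierr = VecGetArrayRead(Udot, &udot);CHKERRQ(ierr);
  ierr = VecGetArray(F, &f);CHKERRQ(ierr);
  /* u[0] = m_o, u[1] = m_g, u[2] = p; udot[] are their time derivatives */
  f[0] = f_o(u[0], u[1], u[2]) - udot[0];
  f[1] = f_g(u[0], u[1], u[2]) - udot[1];
  f[2] = S_o(u[0], u[1], u[2]) + S_g(u[0], u[1], u[2]) - 1.0;
  ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(Udot, &udot);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(U, &u);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}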
> The above looks simpler than it is. The code to evaluate these things is
> around 90k lines of Fortran.
This is a good level of abstraction.
> If I understood correctly, IFunction and IJacobian would implement the
> following:
>
> *IFunction: *
>
> f_o(m,p) - dm_o/dt
> f_g(m,p) - dm_g/dt
> S_o(m,p) + S_g(m,p) - 1,
Yes, this looks fine.
> where dm_o/dt and dm_g/dt are given by the time derivative (u_t in PETSc)
> from IFunction()
>
> *IJacobian:*
>
> First 2 equations: \partial f(m,p) / \partial m - a and \partial f(m,p)
> / \partial p
> Last equation: \partial (S_o(m,p)+S_g(m,p)-1) / \partial m and \partial
> (S_o(m,p)+S_g(m,p)-1) / \partial p,
>
> where a comes from PETSc IJacobian().
>
> Is this correct usage?
Okay, so here you need the Jacobian matrix. This is a matrix with rows
corresponding to "equations" and columns corresponding to "derivatives."
You have at least two options here: store the Jacobian (see ex36.c in TS
for a simple DAE example with IFunction and IJacobian) or, if it's too
big, compute its action on a vector.
In both cases you would need to do the same calculations, but in the
former you can think of the Jacobian matrix by blocks: in your case
you will have 3x3 blocks. The first row, corresponding to your first
equation, will be something like:
J[0][0] = \partial f_o(m,p) / \partial m_o + a * \partial(-dm_o/dt)/\partial(dm_o/dt)
J[0][1] = \partial f_o(m,p) / \partial m_g + a * \partial(-dm_o/dt)/\partial(dm_g/dt)
J[0][2] = \partial f_o(m,p) / \partial p   + a * \partial(-dm_o/dt)/\partial(dp/dt)
so you'd get something like
J[0][0] = \partial f_o(m,p) / \partial m_o - a (times identity)
J[0][1] = \partial f_o(m,p) / \partial m_g
J[0][2] = \partial f_o(m,p) / \partial p
and so on.
I think ex8 is really good at driving this point home if there is any
ambiguity. Note that there are 3 problems defined in that file.
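In code, the first block row above might be filled in roughly like this
(a sketch only; dfo_dmo, dfo_dmg, dfo_dp are placeholders for the partial
derivatives your own code computes):

static PetscErrorCode IJacobian(TS ts, PetscReal t, Vec U, Vec Udot, PetscReal a, Mat J, Mat Jpre, void *ctx)
{
  PetscScalar    dfo_dmo = 0, dfo_dmg = 0, dfo_dp = 0; /* placeholders: fill from your own code */
  PetscScalar    row0[3];
  PetscInt       rows[1] = {0}, cols[3] = {0, 1, 2};
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* First equation: F_0 = f_o(m,p) - dm_o/dt, so dF_0/dU_t = [-1, 0, 0] */
  row0[0] = dfo_dmo - a;  /* \partial f_o/\partial m_o + a*(-1) */
  row0[1] = dfo_dmg;      /* \partial f_o/\partial m_g          */
  row0[2] = dfo_dp;       /* \partial f_o/\partial p            */
  ierr = MatSetValues(Jpre, 1, rows, 3, cols, row0, INSERT_VALUES);CHKERRQ(ierr);
  /* ... second block row analogously; the third (algebraic) equation has
     no Udot dependence, so it gets no shift term ... */
  ierr = MatAssemblyBegin(Jpre, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(Jpre, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  if (J != Jpre) {
    ierr = MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}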
> *Emil: *Now for the adjoint implementation, we will need to provide a
> function for TSAdjointSetRHSJacobian, which computes the derivative of
> our system wrt. our control variables (control variables being the p in
> the PETSc manual) and some additional cost stuff: TSSetCostGradients()
> and possibly TSSetCostIntegrand(). Is this correct? Our cost function
> would typically be to maximize profit of an oil reservoir. In a simple
> way, this could be to maximize the number of oil barrels produced over
> the lifetime of the reservoir (e.g. 50 years). In the equations above,
> the number of oil barrels is contained in the source terms for the wells.
Yups, that sounds reasonable. Note that we have simple examples in PETSc
that might treat exactly this situation (including the optimization).
Although they are not documented yet [Hong promised yesterday to add
more details ;)], if you look at exXX.c alongside exXXadj.c, exXXopt_ic.c,
and exXXopt_p.c, they correspond to exXX.c with sensitivity analysis,
optimization for the initial conditions, and optimization for the
parameters, respectively.
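For what it's worth, the adjoint driver usually boils down to a handful
of calls like the sketch below (lambda, mu, Jacp, RHSJacobianP and user
are placeholder names; the exXX*.c examples show the complete setup):

ierr = TSSetSaveTrajectory(ts);CHKERRQ(ierr);   /* adjoint needs the forward trajectory */
ierr = TSSolve(ts, U);CHKERRQ(ierr);            /* forward run */
/* terminal conditions: lambda[0] = dPhi/du at the final time,
   mu[0] = dPhi/d(control) at the final time (often zero)      */
ierr = TSSetCostGradients(ts, 1, lambda, mu);CHKERRQ(ierr);
ierr = TSAdjointSetRHSJacobian(ts, Jacp, RHSJacobianP, &user);CHKERRQ(ierr); /* dF/d(control) */
ierr = TSAdjointSolve(ts);CHKERRQ(ierr);        /* backward sweep; lambda and mu hold the gradients */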
Hong is the main driver behind the adjoint implementation and can help
on much shorter notice than I can.
Hope this helps,
Emil
> Many thanks,
> Max
>
>
> On Mar 13, 2016, at 4:02 AM, Emil Constantinescu <emconsta at mcs.anl.gov> wrote:
>
>> On 3/12/16 8:37 PM, Matthew Knepley wrote:
>>> On Sat, Mar 12, 2016 at 8:34 PM, Emil Constantinescu
>>> <emconsta at mcs.anl.gov> wrote:
>>>
>>> I also find it useful to go through one of the simple examples
>>> available for TS:
>>> http://www.mcs.anl.gov/petsc/petsc-current/src/ts/examples/tutorials/index.html
>>> (ex8 may be a good start).
>>>
>>> As Barry suggested, you need to implement IFunction and IJacobian.
>>> The argument "u" is S_o, S_w, and p stacked together and "u_t"
>>> their corresponding time derivatives. The rest is calculus, but
>>> following an example usually helps a lot in the beginning.
>>>
>>>
>>> Are you guys saying that IFunction and IJacobian are enough to do the
>>> adjoint system as well?
>>>
>>
>> Pretty much yes, but it depends on the cost function. This is the
>> beauty of discrete adjoints - if you have the Jacobian (transpose,
>> done internally through KSP) you're done. You need IJacobian for sure
>> to do the backward propagation. If you have that, the rest is usually
>> trivial. Mr. Hong Zhang (my postdoc) set up quite a few simple examples.
>>
>> Emil
>>
>>> Matt
>>>
>>> Out of curiosity, what is the application?
>>>
>>> Emil
>>>
>>>
>>> On 3/12/16 3:19 PM, Barry Smith wrote:
>>>
>>>
>>> This is only a starting point; Jed and Emil can fix my
>>> mistakes and provide additional details.
>>>
>>>
>>> In your case you will not provide a TSSetRHSFunction and
>>> TSSetRHSJacobian since everything should be treated implicitly
>>> as a DAE.
>>>
>>> First move everything in the three equations to the left
>>> side and then differentiate through the \partial/\partial t so
>>> that it only applies to the S_o, S_w, and p. For example for the
>>> first equation using the product rule twice you get something like
>>>
>>> \phi( p ) \rho_o( p ) \partial S_o/\partial t + \phi( p )
>>> S_o \partial \rho_o( p )/\partial t + \rho_o( p ) S_o
>>> \partial \phi( p )/\partial t - F_o = 0
>>>
>>> \phi( p ) \rho_o( p ) \partial S_o/\partial t + \phi( p )
>>> S_o \rho_o'(p) \partial p/\partial t + \rho_o( p ) S_o \phi'( p )
>>> \partial p/\partial t - F_o = 0
>>>
>>> The two vector arguments to your IFunction are exactly the S_o,
>>> S_w, and p and \partial S_o/ \partial t , \partial S_w/
>>> \partial t, and \partial p/ \partial t so it is immediate to
>>> code up your IFunction once you have the analytic form above
>>>
>>> For the IJacobian and the "shift business" just remember that
>>> dF/dU means take the derivative of the IFunction with respect to
>>> S_o, S_w, and p treating the \partial S_o/ \partial t ,
>>> \partial S_w/ \partial t, and \partial p/ \partial t as if they
>>> were independent of S_o, S_w, and p. For the dF/dU_t that means
>>> taking the derivative with respect to the \partial S_o/ \partial t
>>> , \partial S_w/ \partial t, and \partial p/ \partial t
>>> treating the S_o, S_w, and p as independent of \partial S_o/
>>> \partial t , \partial S_w/ \partial t, and \partial p/
>>> \partial t. Then you just need to form the sum of the two
>>> parts with a "shift" scaling: dF/dU + a*dF/dU_t
>>>
>>> For the third equation everything is easy: dF/dS_o = 1, dF/dS_w = 1,
>>> dF/dp = 0, dF/d(\partial S_o/\partial t) = 0,
>>> dF/d(\partial S_w/\partial t) = 0, and dF/d(\partial p/\partial t) = 0.
>>>
>>> Computations for the first two equations are messy but
>>> straightforward. For example, for the first equation, dF/dS_o =
>>> \phi( p ) \rho_o'(p) \partial p/\partial t + \rho_o( p ) \phi'( p )
>>> \partial p/\partial t - dF_o/dS_o and dF/d(\partial S_o/\partial t)
>>> = \phi( p ) \rho_o( p ).
>>>
>>>
>>> Barry
>>>
>>> On Mar 12, 2016, at 12:04 PM, Matthew Knepley
>>> <knepley at gmail.com> wrote:
>>>
>>> On Sat, Mar 12, 2016 at 5:41 AM, Max la Cour Christensen
>>> <mlcch at dtu.dk> wrote:
>>>
>>> Hi guys,
>>>
>>> We are making preparations to implement adjoint based
>>> optimisation in our in-house oil and gas reservoir
>>> simulator. Currently our code uses PETSc's DMPlex, Vec, Mat,
>>> KSP and PC. We are still not using SNES and TS, but instead
>>> we have our own backward Euler and Newton-Raphson
>>> implementation. Due to the upcoming implementation of
>>> adjoints, we are considering changing the code and begin
>>> using TS and SNES.
>>>
>>> After examining the PETSc manual and examples, we are still
>>> not completely clear on how to apply TS to our system of
>>> PDEs. In a simplified formulation, it can be written as:
>>>
>>> \partial( \phi( p ) \rho_o( p ) S_o )/ \partial t = F_o(p,S)
>>> \partial( \phi( p ) \rho_w( p ) S_w )/ \partial t = F_w(p,S)
>>> S_o + S_w = 1,
>>>
>>> where p is the pressure,
>>> \phi( p ) is a porosity function depending on pressure,
>>> \rho_x( p ) is a density function depending on pressure,
>>> S_o is the saturation of oil,
>>> S_w is the saturation of water,
>>> t is time,
>>> F_x(p,S) is a function containing fluxes and source terms.
>>> The primary variables are p, S_o and S_w.
>>>
>>> We are using a lowest order Finite Volume discretisation.
>>>
>>> Now for implementing this in TS (with the prospect of later
>>> using TSAdjoint), we are not sure if we need all of the
>>> functions: TSSetIFunction, TSSetRHSFunction, TSSetIJacobian
>>> and TSSetRHSJacobian and what parts of the equations go
>>> where. Especially we are unsure of how to use the concept of
>>> a shifted jacobian (TSSetIJacobian).
>>>
>>> Any advice you could provide will be highly appreciated.
>>>
>>> Barry and Emil,
>>>
>>> I am also interested in this, since I don't know how to do it.
>>>
>>> Thanks,
>>>
>>> Matt
>>>
>>> Many thanks,
>>> Max la Cour Christensen
>>> PhD student, Technical University of Denmark
>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin
>>> their experiments is infinitely more interesting than any
>>> results to which their experiments lead.
>>> -- Norbert Wiener
>>>
>>>
>>>
>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which
>>> their experiments lead.
>>> -- Norbert Wiener
>