[petsc-dev] Proposed changes to TS API

Jed Brown jed at jedbrown.org
Fri May 11 11:34:00 CDT 2018


"Smith, Barry F." <bsmith at mcs.anl.gov> writes:

>> On May 11, 2018, at 10:17 AM, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
>> 
>> 
>> 
>>> On May 11, 2018, at 7:05 AM, Matthew Knepley <knepley at gmail.com> wrote:
>>> 
>>> On Fri, May 11, 2018 at 12:25 AM, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
>>> 
>>> 
>>>> On May 10, 2018, at 4:12 PM, Jed Brown <jed at jedbrown.org> wrote:
>>>> 
>>>> "Zhang, Hong" <hongzhang at anl.gov> writes:
>>>> 
>>>>> Dear PETSc folks,
>>>>> 
>>>>> The current TS APIs (IFunction/IJacobian + RHSFunction/RHSJacobian) were designed for the fully implicit formulation F(t,U,Udot) = G(t,U).
>>>>> Shampine's paper (https://www.sciencedirect.com/science/article/pii/S0377042706004110?via%3Dihub) explains some of the reasoning behind it.
>>>>> 
>>>>> Our formulation is general enough to cover all the following common cases
>>>>> 
>>>>> *   Udot = G(t,U) (classic ODE)
>>>>> *   M Udot = G(t,U)  (ODEs/DAEs for mechanical and electronic systems)
>>>>> *   M(t,U) Udot = G(t,U) (PDEs)
>>>>> 
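For concreteness, the second case is posed with the current interface as F(t,U,Udot) = M Udot - G(t,U); a minimal user-side sketch (AppCtx, app->M, app->Gwork, and FormG() are hypothetical application code):

  /* F(t,U,Udot) = M Udot - G(t,U) */
  PetscErrorCode IFunction(TS ts,PetscReal t,Vec U,Vec Udot,Vec F,void *ctx)
  {
    AppCtx         *app = (AppCtx*)ctx;
    PetscErrorCode  ierr;

    ierr = MatMult(app->M,Udot,F);CHKERRQ(ierr);     /* F = M Udot          */
    ierr = FormG(t,U,app->Gwork,app);CHKERRQ(ierr);  /* Gwork = G(t,U)      */
    ierr = VecAXPY(F,-1.0,app->Gwork);CHKERRQ(ierr); /* F = M Udot - G(t,U) */
    return 0;
  }

  /* registration, after TSCreate(): */
  ierr = TSSetIFunction(ts,NULL,IFunction,&app);CHKERRQ(ierr);
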
>>>>> Thus the TS APIs provide the capability to solve both simple and complicated problems. However, we are not doing well at making TS easy to use and efficient, especially for simple problems. Over the years we have clearly seen the main drawbacks, including:
>>>>> 1. The shift parameter exposed in IJacobian is terribly confusing, especially to new users. Also, it is not conceptually straightforward to approximate IJacobian by applying AD or finite differences to IFunction.
>>>> 
>>>> What isn't straightforward about AD or FD on the IFunction?  That one
>>>> bit of chain rule?
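
To spell out that one bit of chain rule: within a stage solve Udot is an
affine function of U, so the matrix the solver needs is

  J = shift*dF/dUdot + dF/dU,

and a finite-difference (or AD) directional derivative of IFunction picks
this up automatically by perturbing both arguments consistently,

  J*v ~= ( F(t, U + eps*v, Udot + eps*shift*v) - F(t, U, Udot) ) / eps,

with eps a differencing parameter and v an arbitrary direction.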
>>>> 
>>>>> 2. It is difficult to switch from fully implicit to fully explicit. Users cannot use explicit methods when they provide IFunction/IJacobian.
>>>> 
>>>> This is a real issue, but it's extremely common for PDEs to have boundary
>>>> conditions enforced as algebraic constraints, thus yielding a DAE.
>>>> 
>>>>> 3. The structure of the mass matrix is completely invisible to TS. This means giving up all opportunities to improve efficiency. For example, when M is constant or weakly dependent on U, we might not want to evaluate/update it every time IJacobian is called. If M is diagonal, the Jacobian can be shifted more efficiently than by using MatAXPY().
>>>> 
>>>> I don't understand 
>>>> 
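Is the intended optimization something like the following? Schematically
(J holding dF/dU, M the mass matrix, A the shifted operator, Mdiag a work
vector -- all hypothetical names, with A preallocated with J's nonzero
pattern):

  /* general M: one sparse MatAXPY per new shift value */
  ierr = MatCopy(J,A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = MatAXPY(A,shift,M,SUBSET_NONZERO_PATTERN);CHKERRQ(ierr); /* assumes M's pattern is a subset of A's */

  /* diagonal M: add shift*diag(M) straight into the diagonal of A */
  ierr = MatCopy(J,A,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = MatGetDiagonal(M,Mdiag);CHKERRQ(ierr);
  ierr = VecScale(Mdiag,shift);CHKERRQ(ierr);
  ierr = MatDiagonalSet(A,Mdiag,ADD_VALUES);CHKERRQ(ierr);
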
>>>>> 4. Reshifting the Jacobian is unnecessarily expensive and sometimes buggy.
>>>> 
>>>> Why is "reshifting" needed?  After a step is rejected and when the step
>>>> size changes for a linear constant operator?
>>>> 
>>>>> Consider the scenario below.
>>>>> shift = a;
>>>>> TSComputeIJacobian()
>>>>> shift = b;
>>>>> TSComputeIJacobian() // with the same U and Udot as last call
>>>>> Changing the shift parameter requires the Jacobian function to be evaluated again. If users provide only RHSJacobian, the Jacobian will not be updated/reshifted in the second call because TSComputeRHSJacobian() finds that U has not changed. This issue is fixable by adding more logic to the already convoluted implementation of TSComputeIJacobian(), but the intention here is to illustrate the cumbersomeness of the current IJacobian and the growing complications in TSComputeIJacobian() that it causes.
>>>>> 
>>>>> So I propose that we have two separate matrices, dF/dUdot and dF/dU, and remove the shift parameter from IJacobian. dF/dU will be calculated by IJacobian; dF/dUdot will be calculated by a new callback function and will default to an identity matrix if not provided by users. Then users do not need to assemble the shifted Jacobian, since TS will handle the shifting whenever needed. And knowing the structure of dF/dUdot (the mass matrix), TS becomes more flexible. In particular, we will have
>>>>> 
>>>>> *   easy access to the unshifted Jacobian dF/dU (this simplifies the adjoint implementation a lot),
>>>> 
>>>> How does this simplify the adjoint?
>>>> 
>>>>> *   plenty of opportunities to optimize TS when the mass matrix is diagonal or constant or weakly dependent on U (which accounts for almost all use cases in practice),
>>>> 
>>>> But longer critical path,
>>> 
>>>   What do you mean by longer critical path?
>>> 
>>>> more storage required, and more data motion.
>>> 
>>>  The extra storage needed is related to the size of the mass matrix, correct? And the extra data motion is related to the size of the mass matrix, correct?
>>> 
>>>   Is the extra work coming from a needed call to MatAXPY() (to combine the scaled mass matrix with the Jacobian) in Hong's approach? Whereas, in theory, the user can avoid the MatAXPY() in the current code if they have custom code that directly assembles the scaled mass matrix plus Jacobian? But surely most users would not write such custom code; they would instead keep a copy of the (likely constant) mass matrix and use MatAXPY() to combine it with the Jacobian they compute at each timestep/stage. Or am I missing something?
>>> 
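The pattern described would be something like this in user code (a sketch;
FormJacobianG() is a hypothetical helper that assembles dG/dU into B,
app->M is a stored constant mass matrix, and M's nonzero pattern is assumed
to be a subset of B's):

  /* IJacobian for F(t,U,Udot) = M Udot - G(t,U):  B = shift*M - dG/dU */
  PetscErrorCode IJacobian(TS ts,PetscReal t,Vec U,Vec Udot,PetscReal shift,Mat A,Mat B,void *ctx)
  {
    AppCtx         *app = (AppCtx*)ctx;
    PetscErrorCode  ierr;

    ierr = FormJacobianG(t,U,B,app);CHKERRQ(ierr);                       /* B = dG/dU           */
    ierr = MatScale(B,-1.0);CHKERRQ(ierr);                               /* B = -dG/dU          */
    ierr = MatAXPY(B,shift,app->M,SUBSET_NONZERO_PATTERN);CHKERRQ(ierr); /* B = shift*M - dG/dU */
    return 0;
  }
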
>>> I assemble the combined system directly.
>> 
>>   How, two sets of calls to MatSetValues(), one for the scaled mass matrix and one for the Jacobian entries? For a constant mass matrix does this mean you are recomputing the mass matrix entries each call? Or are you storing the mass matrix entries somewhere? Or is your mass matrix diagonal only?
>
>   Or do you build, element by element, the M*shift + J element stiffness matrix and then insert it with a single MatSetValues() call?

It isn't even built separately at the element scale, just summed
contributions at quadrature points.
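
Schematically, for a Galerkin discretization, something like this (a sketch
for a heat-equation-type F = Udot - div(grad u) in 1D; all names are generic
placeholders, not actual code):

  /* element Jacobian with the shift folded into the quadrature loop:
     Ke[i,j] += w_q * ( shift*phi_i*phi_j + dphi_i*dphi_j ) */
  static void ElementIJacobian(PetscInt Nq,PetscInt Nb,const PetscReal wq[],
                               const PetscReal phi[],const PetscReal dphi[],
                               PetscReal shift,PetscScalar Ke[])
  {
    PetscInt q,i,j;
    for (q=0; q<Nq; q++) {
      for (i=0; i<Nb; i++) {
        for (j=0; j<Nb; j++) {
          Ke[i*Nb+j] += wq[q]*(shift*phi[q*Nb+i]*phi[q*Nb+j]   /* shift times the mass term */
                               + dphi[q*Nb+i]*dphi[q*Nb+j]);   /* stiffness term from dG/dU */
        }
      }
    }
  }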

