TS: high-order with Galerkin spatial discretization, semi-explicit index-1 DAE

Jed Brown jed at 59A2.org
Tue Mar 17 21:15:22 CDT 2009


I have a use for high-order implicit strong stability preserving
integrators and plan to write a TS implementation based on some known
optimal methods (cf. Gottlieb, Ketcheson, Shu 2009).  There are two
cases of interest that I don't think TS can currently handle.

Suppose we are using a Galerkin discretization so that the mass matrix
is not the identity, but we still want to use a high-order temporal
discretization.  In particular, for high-order spatial discretizations,
lumping the mass matrix is usually not acceptable, so we must work with
the consistent mass matrix, whose inverse is dense.  In these cases,
MatShift is not sufficient [*].  One option is to allow shifts by an
assembled mass matrix, but this is awkward with JFNK and requires
significant storage when the mass matrix is not lumped.
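
For concreteness, here is a minimal sketch of the
shift-by-assembled-mass-matrix option, assuming an assembled Jacobian J
and consistent mass matrix M are available (ShiftByMass is a
hypothetical helper, not existing PETSc API; the duplicated matrix B is
exactly the extra storage mentioned above):

  #include <petscmat.h>

  /* Sketch: form B = shift*M + J, the analogue of MatShift() when the
   * mass matrix M is not the identity. */
  PetscErrorCode ShiftByMass(Mat J,Mat M,PetscScalar shift,Mat *B)
  {
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = MatDuplicate(M,MAT_COPY_VALUES,B);CHKERRQ(ierr); /* B = M       */
    ierr = MatScale(*B,shift);CHKERRQ(ierr);                /* B = shift*M */
    ierr = MatAXPY(*B,1.0,J,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); /* B += J */
    PetscFunctionReturn(0);
  }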

Perhaps a better option is for function evaluation and Jacobian assembly
to observe the time-step (or a_{kk}*dt for Runge-Kutta methods).  This
also works in the more general setting of semi-explicit index-1 DAE,
presented in the form

  V u_t = F(u,t)

where F is the spatial discretization and V is typically block diagonal
with mass matrices for differential variables and zero for algebraic
variables.  To integrate such systems, the user would provide functions
to evaluate

  G(u,u_i,t,dt) = V u - F(dt*u+u_i,t)

(u_i is a known inhomogeneous part) and assemble the
Jacobian/preconditioner G'(u,t,dt) = V - dt F'(dt*u + u_i, t).
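
As a sketch of what the user-side evaluation could look like (the names
here are illustrative only, not a proposed TS calling sequence), given a
MatMult-able V and a routine FEvaluate for the spatial residual:

  #include <petscmat.h>

  /* Illustrative only: g = G(u,u_i,t,dt) = V u - F(dt*u + u_i, t).
   * FEvaluate() stands in for the user's spatial residual F(u,t). */
  extern PetscErrorCode FEvaluate(Vec u,PetscReal t,Vec f);

  PetscErrorCode GFunction(Mat V,Vec u,Vec ui,PetscReal t,PetscReal dt,
                           Vec work,Vec g)
  {
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = VecWAXPY(work,dt,u,ui);CHKERRQ(ierr); /* work = dt*u + u_i */
    ierr = FEvaluate(work,t,g);CHKERRQ(ierr);    /* g = F(work,t)     */
    ierr = VecScale(g,-1.0);CHKERRQ(ierr);       /* g = -F            */
    ierr = MatMultAdd(V,u,g,g);CHKERRQ(ierr);    /* g = V u - F       */
    PetscFunctionReturn(0);
  }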
Such a pair (G, G') is sufficient for multistep and Diagonally Implicit
Runge-Kutta (DIRK) methods.  For example, in the case of DIRK, stage k
is computed by solving

  G(u^k, u_0 + dt \sum_{l=0}^{k-1} a_{kl} u^l, t^k, dt a_{kk}) = 0

for u^k.  For singly diagonally implicit Runge-Kutta (SDIRK, constant
a_{kk}) methods, lagging the preconditioner is perfectly acceptable
since the shift dt*a_{kk} changes only when the time-step does, which is
infrequent.  For JFNK with non-SDIRK methods and with high-order
multistep methods, the preconditioner would often have to be refactored
to accommodate each new shift, hence the inability to effectively lag
the preconditioner is an unavoidable weakness.  Of course, it is common
for assembly to be more expensive than PC setup, in which case
assembling F' and shifting by the mass matrix is appealing (assuming
that storage for an assembled mass matrix is available).
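
To make the stage recurrence concrete, here is a minimal sketch of one
DIRK step in terms of the proposed G (again illustrative names only;
SolveG stands in for a SNES solve of G(u,u_i,t,dt_eff) = 0 using the
user's G and G', and a is the s-by-s Butcher matrix stored row-major):

  #include <petscvec.h>

  /* Illustrative only, not a proposed TS interface.  u[k] holds the
   * stage "derivative" u^k, so the stage value is ui + dt*a_{kk}*u^k. */
  extern PetscErrorCode SolveG(Vec u,Vec ui,PetscReal t,PetscReal dteff);

  PetscErrorCode DIRKStep(PetscInt s,const PetscReal a[],
                          const PetscReal b[],const PetscReal c[],
                          PetscReal t,PetscReal dt,
                          Vec u0,Vec u[],Vec ui,Vec u1)
  {
    PetscErrorCode ierr;
    PetscInt       k,l;

    PetscFunctionBegin;
    for (k=0; k<s; k++) {
      /* inhomogeneous part: ui = u0 + dt \sum_{l<k} a_{kl} u^l */
      ierr = VecCopy(u0,ui);CHKERRQ(ierr);
      for (l=0; l<k; l++) {ierr = VecAXPY(ui,dt*a[k*s+l],u[l]);CHKERRQ(ierr);}
      /* solve V u^k - F(dt a_{kk} u^k + ui, t^k) = 0 for u^k */
      ierr = SolveG(u[k],ui,t+c[k]*dt,dt*a[k*s+k]);CHKERRQ(ierr);
    }
    /* combine stages: u1 = u0 + dt \sum_k b_k u^k */
    ierr = VecCopy(u0,u1);CHKERRQ(ierr);
    for (k=0; k<s; k++) {ierr = VecAXPY(u1,dt*b[k],u[k]);CHKERRQ(ierr);}
    PetscFunctionReturn(0);
  }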

To summarize, asking the user to provide G,G' has the pros:
  + consistent mass matrices with minimal cost
  + DAE
  + moving meshes
and cons:
  - not compatible with current F,F'
  - shifts requiring reassembly are more expensive

Asking the user to observe the time-step is at odds with the current
interface, where the user only provides F and F'.  Are the
performance/flexibility benefits worth it?  How about a middle ground
where the application can provide a lumped mass matrix (identity by
default) and either the pair F,F' or the pair G,G'?  If G' is provided
and the preconditioning matrix is different from the Jacobian, the
preconditioner would be shifted using the lumped mass matrix.
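
A sketch of what that registration might look like (TSSetLumpedMass,
TSSetGFunction, and TSSetGJacobian are hypothetical and do not exist;
TSSetRHSFunction and TSSetRHSJacobian are the current interface):

  Vec ml;  /* diagonal of the lumped mass matrix, identity by default */
  ierr = TSSetLumpedMass(ts,ml);CHKERRQ(ierr);         /* hypothetical */

  /* either the current interface ... */
  ierr = TSSetRHSFunction(ts,RHSFunction,ctx);CHKERRQ(ierr);
  ierr = TSSetRHSJacobian(ts,J,P,RHSJacobian,ctx);CHKERRQ(ierr);

  /* ... or the proposed one (both hypothetical).  If only G' is given
   * and P differs from the Jacobian, TS would shift P itself using the
   * lumped mass matrix. */
  ierr = TSSetGFunction(ts,GFunction,ctx);CHKERRQ(ierr);
  ierr = TSSetGJacobian(ts,J,P,GJacobian,ctx);CHKERRQ(ierr);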

I'd love to hear comments or suggestions.


Jed


[*] For my purposes, I don't actually need an assembled consistent mass
matrix because my Jacobian is applied via MatShell or MFFD; shifting by
the lumped mass matrix still produces a good preconditioner, so the
generalization of MatShift would be sufficient.  However, some people
use the same matrix for the Jacobian and for preconditioning, hence a
general scheme probably ought to be able to handle this case.