I could not understand the post. We will have to go more slowly. To begin,
I cannot understand why you cannot use TSSetMatrices()

http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/TS/TSSetMatrices.html#TSSetMatrices

  Thanks,

     Matt

On Tue, Mar 17, 2009 at 9:15 PM, Jed Brown <jed@59a2.org> wrote:
I have a use for high-order implicit strong stability preserving
integrators and plan to write a TS implementation based on some known
optimal methods (cf. Gottlieb, Ketcheson, Shu 2009). There are two
cases of interest that I don't think TS can currently handle.

Suppose we are using a Galerkin discretization so that the mass matrix
is not the identity, but we still want to use a high-order temporal
discretization. In particular, for high-order spatial discretizations,
lumping the mass matrix is usually not acceptable, so the mass matrix
would have a dense inverse. In these cases, MatShift is not sufficient
[*]. One option is to allow shifts by an assembled mass matrix, but
this is awkward with JFNK and requires significant storage when the
mass matrix is not lumped.
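
To make this concrete: with an assembled Jacobian J = F' and an
assembled mass matrix M, the generalized shift would be one MatAXPY
instead of a MatShift. A minimal sketch (the shift scalar a is
method-dependent, e.g. something like 1/(a_{kk}*dt)):

  /* MatShift() computes J += a*I, which silently assumes a lumped
     (identity) mass matrix: */
  ierr = MatShift(J,a);CHKERRQ(ierr);

  /* The generalized shift needs the assembled mass matrix M instead: */
  ierr = MatAXPY(J,a,M,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr); /* J += a*M */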

Perhaps a better option is for function evaluation and Jacobian assembly
to observe the time-step (or a_{kk}*dt for Runge-Kutta methods). This
also works in the more general setting of semi-explicit index-1 DAE,
presented in the form

V u_t = F(u,t)

where F is the spatial discretization and V is typically block diagonal
with mass matrices for differential variables and zero for algebraic
variables. To integrate such systems, the user would provide functions
to evaluate

G(u,u_i,t,dt) = V u - F(dt*u + u_i, t)

(u_i is a known inhomogeneous part) and assemble the
Jacobian/preconditioner G'(u,t,dt). This is sufficient for multistep
and Diagonally Implicit Runge-Kutta (DIRK) methods. For example, in the
case of DIRK, stage k is computed by solving

G(u^k, u_0 + dt \sum_{l=0}^{k-1} a_{kl} u^l, t^k, dt a_{kk}) = 0

for u^k (the stage derivative).
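
For illustration, the stage loop of a DIRK implementation built on this
interface might look roughly like the sketch below. The context struct
and array names are hypothetical bookkeeping, not an existing API; only
VecCopy/VecAXPY/SNESSolve are real PETSc calls:

  /* Hypothetical context seen by the SNES callbacks that evaluate
     G(u,u_i,t,dt) = V*u - F(dt*u + u_i, t):
       typedef struct {PetscReal t,dt; Vec ui;} GCtx;
     ustage[k] holds the stage derivative u^k, a[][] and c[] are the
     Butcher coefficients. */
  for (k=0; k<s; k++) {
    ierr = VecCopy(u0,ui);CHKERRQ(ierr);                     /* ui = u_0            */
    for (l=0; l<k; l++) {
      ierr = VecAXPY(ui,dt*a[k][l],ustage[l]);CHKERRQ(ierr); /* ui += dt*a_{kl} u^l */
    }
    ctx->t  = t + c[k]*dt;   /* stage time t^k            */
    ctx->dt = dt*a[k][k];    /* effective shift dt*a_{kk} */
    ctx->ui = ui;
    ierr = SNESSolve(snes,PETSC_NULL,ustage[k]);CHKERRQ(ierr); /* G(u^k,...) = 0 */
  }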

For singly implicit Runge-Kutta (SIRK, a_{kk} constant) methods,
lagging the preconditioner is perfectly acceptable since the time-step
does not change frequently. For JFNK with non-SIRK methods and with
high-order multistep methods, the preconditioner would often have to be
refactored to accommodate the shift, hence the inability to lag the
preconditioner effectively is an unavoidable weakness. Of course, it is
common for assembly to be more expensive than PC setup, in which case
assembling F' and shifting by the mass matrix is appealing (assuming
that storage for an assembled mass matrix is available).
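
If I recall the SNES lagging interface correctly, the SIRK case is
already expressible with existing calls; a sketch, assuming the TS
exposes its inner SNES and SNESSetLagPreconditioner() is available:

  /* With a_{kk} constant the shift is fixed, so the preconditioner
     can be reused across stages and steps (-1 = never rebuild until
     explicitly asked): */
  ierr = TSGetSNES(ts,&snes);CHKERRQ(ierr);
  ierr = SNESSetLagPreconditioner(snes,-1);CHKERRQ(ierr);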

To summarize, asking the user to provide G and G' has the pros:
+ consistent mass matrices with minimal cost
+ DAE
+ moving meshes
and cons:
- not compatible with the current F and F'
- shifts requiring reassembly are more expensive

Asking the user to observe the time-step is at odds with the current
interface, where the user only provides F and F'. Are the
performance/flexibility benefits worth it? How about a middle ground
where the application can provide a lumped mass matrix (identity by
default) and either the functions F,F' or G,G'? (If G' is provided and
the preconditioning matrix is different from the Jacobian, the
preconditioner would be shifted using the lumped mass matrix.)
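
In user code, that middle ground might look like the sketch below.
TSSetRHSFunction() and TSSetRHSJacobian() are the existing interface;
the other names are purely hypothetical, just to fix ideas:

  /* Lumped mass matrix, identity if never set (hypothetical call): */
  ierr = TSSetLumpedMassMatrix(ts,Mlumped);CHKERRQ(ierr);

  /* Either the current interface ... */
  ierr = TSSetRHSFunction(ts,ComputeF,ctx);CHKERRQ(ierr);
  ierr = TSSetRHSJacobian(ts,J,P,ComputeFprime,ctx);CHKERRQ(ierr);

  /* ... or the shifted form, whose callbacks observe t and dt
     (hypothetical calls): */
  ierr = TSSetGFunction(ts,ComputeG,ctx);CHKERRQ(ierr);
  ierr = TSSetGJacobian(ts,J,P,ComputeGprime,ctx);CHKERRQ(ierr);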

I'd love to hear comments or suggestions.


Jed


[*] For my purposes, I don't actually need an assembled consistent mass
matrix because my Jacobian is available via MatShell or MFFD (i.e.,
since shifting by the lumped mass matrix still produces a good
preconditioner, the generalization of MatShift would be sufficient).
However, some people use the same matrix for the Jacobian and for
preconditioning, hence a general scheme probably ought to be able to
handle this.

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener