[petsc-users] Is it still worth switching to PETSc if I can't write a Jacobian for my problem?
Hong Zhang
hongzhang at anl.gov
Thu Dec 10 15:55:18 CST 2015
Hi Brian,
If you can’t provide a Jacobian and do not need to run the code in parallel, I do not think using PETSc would make much difference compared to using scipy.integrate.odeint.
With both of them, you can either use explicit time integration methods with small time steps, or use implicit methods with internally approximated Jacobians (PETSc can compute a finite difference Jacobian for you).
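To give a feel for what an internally approximated Jacobian means, here is a rough dense forward-difference sketch in plain NumPy. This is only an illustration of the idea, not PETSc's actual implementation (which uses matrix coloring to reduce the number of function evaluations):

```python
import numpy as np

def fd_jacobian(f, u, eps=1e-8):
    """Approximate J[i, j] = df_i/du_j by forward differences --
    the same basic idea a solver uses when no analytic Jacobian is given."""
    f0 = np.asarray(f(u), dtype=float)
    n = f0.size
    J = np.zeros((n, n))
    for j in range(n):
        du = np.zeros(n)
        # Scale the perturbation by the magnitude of u[j] to limit roundoff error.
        du[j] = eps * max(1.0, abs(u[j]))
        J[:, j] = (np.asarray(f(u + du), dtype=float) - f0) / du[j]
    return J
```

Note that this costs one extra function evaluation per column, which is why a cheap right hand side matters so much when the Jacobian is approximated this way.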
However, scipy.integrate.odeint implements multistep methods, while PETSc provides a rich set of multistage methods (e.g. Runge-Kutta type methods), which may have better stability properties. An explicit integration method with better stability may allow for larger time steps in your problem, thus leading to a possible performance gain. But the speedup may not be significant.
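The stability point can be made concrete on the scalar test problem u' = λu with z = λΔt: a method is stable for a given step when its amplification factor R(z) satisfies |R(z)| ≤ 1. A small sketch comparing forward Euler with the classical 4-stage Runge-Kutta method:

```python
def euler_amp(z):
    # Amplification factor of forward Euler for u' = lambda*u, z = lambda*dt.
    return 1.0 + z

def rk4_amp(z):
    # Amplification factor of the classical 4th-order Runge-Kutta method.
    return 1.0 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

# At z = -2.5 forward Euler amplifies the solution (|R| = 1.5 > 1, unstable),
# while RK4 still damps it (|R| < 1), i.e. RK4 tolerates the larger time step.
```

This is the sense in which a method with a larger stability region can buy you bigger steps on the same problem.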
If you are using implicit methods with internally approximated Jacobians, high computational cost can be expected. In this case, providing a correct Jacobian might be the best way to improve the performance. And if you decide to do so, PETSc is definitely worth a try because it can help you check if the Jacobian is correct and give you diagnostic information about the convergence of the linear solves, the nonlinear solves and the time stepping.
Apart from that, you may also try the IMEX time integrators in PETSc. When using IMEX, you need to manually split your right hand side function into two parts — stiff and non-stiff — and only provide the Jacobian for the stiff part (usually linear). This requires that users know the source of stiffness in their problems. Some IMEX examples for advection-diffusion-reaction problems and several simple ODEs can be found at http://www.mcs.anl.gov/petsc/petsc-current/src/ts/examples/tutorials/ and http://www.mcs.anl.gov/petsc/petsc-current/src/ts/examples/tutorials/advection-diffusion-reaction/ex3.c.html
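As a minimal sketch of the splitting idea (plain NumPy, not the petsc4py API; the operator A and reaction term g below are made-up stand-ins), a first-order IMEX Euler step for u' = A u + g(u) treats the stiff linear term implicitly and the non-stiff term explicitly:

```python
import numpy as np

def imex_euler_step(u, dt, A, g):
    """One first-order IMEX Euler step for u' = A u + g(u):
    the stiff linear term A u is implicit, g(u) is explicit, so only
    a linear solve with (I - dt*A) is needed -- no nonlinear iteration."""
    n = u.size
    return np.linalg.solve(np.eye(n) - dt * A, u + dt * g(u))
```

For example, with A = [[-100]] (stiff decay) and g(u) = [1], repeated steps converge to the steady state u = 0.01 even with a time step far larger than an explicit method could take for this λ.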
Hong
On Dec 10, 2015, at 1:51 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> Brian,
>
> I see two distinct issues here
>
> 1) being able to apply your right hand side efficiently and
>
> 2) what type of ODE integrators, if any, can work well for your problem with its funky, possibly discontinuous right hand side?
>
> 1) Looking at the simplicity of your data structure and function evaluation, I think you should just write your right hand side functions in C. The code would be no more complicated than it is now, from what you showed me. Even with "just in time" compilation you likely lose at least a factor of 10 by coding in Python, maybe 100? I don't know. It looks to me like an easy translation of your current routines to C.
>
> 2) This is tricky. You say that a) you cannot compute derivatives, and b) the function and its derivatives may not be smooth?
>
> If the function were well behaved you could use finite differences to compute the Jacobian (reasonably efficiently — the cost is just some number of function evaluations), but with a flaky right hand side function the finite differences can produce garbage.
>
> You could use explicit schemes with a small enough timestep to satisfy any stability condition and forget about using implicit schemes. Then it becomes crucial to have a very fast right hand side function (since you will need many time steps). If you are lucky you can use something like a 4th order RK scheme (but again, with a non-smooth right hand side maybe you can't).
>
> I am no expert. Perhaps Emil, who is far more knowledgeable about these things, has questions?
>
> Barry
>
>
>
>> On Dec 10, 2015, at 12:52 PM, Brian Merchant <bhmerchant at gmail.com> wrote:
>>
>> Hi Barry,
>>
>> Here's some non-trivial example code: https://gist.github.com/bmer/2af429f88b0b696648a8
>>
>> I have still made some simplifications by removing some phase variables, expanding on variable names in general, and so on.
>>
>> The rhs function itself is defined on line 578. The functions referred to within it should be all defined above, so you can have a peek at them as necessary.
>>
>> Starting from line 634 I show how I use the rhs function. In particular, note the "disjointed" evaluation of the integral -- I don't just evaluate from 0 to t in one go, but rather evaluate the integral in pieces (call the time between the end of one integral evaluation and the start of the next a "pause"). This is so that if there were multiple amoebas, during the "pause" I can take into account changes in some of the parameters due to contact between one amoeba and another -- a poor man's simplification.
>>
>> Please let me know if this is what you were looking for. I wouldn't be surprised if it wasn't, but instead would be happy to try to rework what I've got so it's more in line with what would be meaningful to you.
>>
>> Kind regards,
>> Brian
>>
>> On Wed, Dec 9, 2015 at 2:18 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>> I prefer the actual code, not the mathematics or the explanation
>>
>>> On Dec 9, 2015, at 3:42 PM, Brian Merchant <bhmerchant at gmail.com> wrote:
>>>
>>> Hi Barry,
>>>
>>>> Could you send an example of your "rhs" function? Not a totally trivial example
>>>
>>> Sure thing! Although, did you check out the example I tried to build up in this stackexchange question, along with a picture: http://scicomp.stackexchange.com/questions/21501/is-it-worth-switching-to-timesteppers-provided-by-petsc-if-i-cant-write-down-a
>>>
>>> I ask because that's probably the best I can do while using as little math as possible.
>>>
>>> Otherwise, what I'll do is take a couple of days to carefully look at my work, and write up a non-trivial example of a difficult-to-differentiate RHS, that still is a simplification of the whole mess -- expect a one or two page PDF?
>>>
>>> Kind regards,
>>> Brian
>>>
>>> On Mon, Dec 7, 2015 at 12:45 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>>
>>> Brian,
>>>
>>> Could you send an example of your "rhs" function? Not a totally trivial example
>>>
>>> Barry
>>>
>>>> On Dec 7, 2015, at 11:21 AM, Brian Merchant <bhmerchant at gmail.com> wrote:
>>>>
>>>> Hi all,
>>>>
>>>> I am considering using petsc4py instead of scipy.integrate.odeint (which is a wrapper for Fortran solvers) for a problem involving the solution of a system of ODEs. The problem has the potential to be stiff. Writing down its Jacobian is very hard.
>>>>
>>>> So far, I have been able to produce reasonable speed gains by writing the RHS functions in "something like C" (using either numba or Cython). I'd like to get even more performance out, hence my consideration of PETSc.
>>>>
>>>> Due to the large number of equations involved, it is already tedious to think about writing down a Jacobian. Even worse, though, is that some of the functions governing a particular interaction do not have neat analytical forms (let alone whether or not their derivatives have neat analytical forms), so we might need a mess of piecewise functions to approximate them if we were still to try to produce a Jacobian...
>>>>
>>>> All the toy examples I see of PETSc time stepping problems have Jacobians defined, so I wonder whether I would even see a speed gain from switching, if one of the reasons for my high computational cost is not being able to provide a Jacobian function.
>>>>
>>>> I described the sort of problem I am working with in more detail in this scicomp.stackexchange question, which is where most of this question is copied from, except it also comes with a toy version of the problem I am dealing with: http://scicomp.stackexchange.com/questions/21501/is-it-worth-switching-to-timesteppers-provided-by-petsc-if-i-cant-write-down-a
>>>>
>>>> All your advice would be most helpful :)
>>>>
>>>> Kind regards,
>>>> Brian
>>>>
>>>
>>>
>>
>>
>