[petsc-users] Strange behavior of TS after setting hand-coded Jacobian
Zhang, Hong
hongzhang at anl.gov
Mon Nov 8 22:59:23 CST 2021
You are essentially solving an index-2 DAE in the form:
dx/dt = f(x,y)
0 = g(x)
Note that the solvers you have tried, including 'ode15s' and 'ode23' in MATLAB and the PETSc TS solvers, are all designed for ODEs or index-1 DAEs; they cannot be applied to index-2 DAEs directly. In practice, you first need a technique to lower the index of the DAE. A textbook approach for your problem is to differentiate the algebraic equation twice.
Taking d/dt for the second equation gives
dg/dx dx/dt = 0 ==> dg/dx f(x,y) = 0
Taking d/dt again (assuming for simplicity that g is linear in x, so that the d^2g/dx^2 term vanishes) gives
dg/dx (df/dx f + df/dy dy/dt) = 0
Assuming dg/dx df/dy is nonsingular, we get
dy/dt = -(dg/dx df/dy)^{-1} dg/dx (df/dx f)
Now you can solve an ODE system instead of the original DAE.
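As a concrete sketch (a linear toy problem for illustration only, not your stagnation-flow equations), take
dx/dt = A x + B y, 0 = C x, with C B nonsingular.
Differentiating the constraint once gives C dx/dt = C (A x + B y) = 0; differentiating again gives C A (A x + B y) + C B dy/dt = 0, hence
dy/dt = -(C B)^{-1} C A (A x + B y).
Together with dx/dt = A x + B y this is a plain ODE system that any standard integrator can handle, provided the initial values are consistent, i.e. C x_0 = 0 and C (A x_0 + B y_0) = 0.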
There may be other approaches for this particular problem, but you would need to look into the literature.
Hong (Mr.)
On Nov 7, 2021, at 2:45 AM, 仓宇 <yhcy1993 at gmail.com> wrote:
Thanks for your patient explanation, and sorry for this late reply.
I do understand the manipulation that you mention, but this kind of manual substitution does not change the numerical difficulty of the problem.
In fact, before turning to PETSc, I had already tried to formulate this problem as an ODE, using well-established MATLAB functions such as 'ode15s' and 'ode23', but they failed to produce physically reasonable results. The key difficulty lies in the treatment of the eigenvalue, denoted \hat{\Lambda}_r in the PDF file; only an implicit DAE formulation fully captures its effect. This is why I finally decided to use the DAE functionality in PETSc.
I've excerpted a description of this problem from a monograph; I hope it clearly demonstrates the numerical requirements for solving it.
<bvp.png>
Kee, R. J., Coltrin, M. E., Glarborg, P., & Zhu, H. (2017). Stagnation Flows. In Chemically Reacting Flow (2nd ed., p. 232).
Zhang, Hong <hongzhang at anl.gov> wrote on Mon, Nov 1, 2021, at 10:21 PM:
You can avoid the zero diagonal elements in your case: just remove the boundary points and \hat{\Lambda}_r from the solution vector Y (eq. (17) in your PDF file) and compute them separately in your RHS function (when computing the residual vector for 2 <= i <= N-1). It is not necessary to form a DAE; you can actually form an ODE instead.
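A minimal sketch of what such an RHS callback could look like; the AppCtx fields and the helpers ComputeLambda and InteriorRHS are hypothetical placeholders whose bodies would have to come from your own discretization:

#include <petscts.h>

typedef struct {
  PetscInt    Ninterior;  /* number of interior unknowns, i = 2..N-1 */
  PetscScalar bc_left;    /* boundary value at the left end          */
  PetscScalar bc_right;   /* boundary value at the right end         */
} AppCtx;

/* Hypothetical helpers: their bodies would implement the algebraic relation
   defining \hat{\Lambda}_r and the interior stencils of the discretization. */
static PetscScalar ComputeLambda(const PetscScalar *y, AppCtx *user);
static PetscScalar InteriorRHS(PetscInt i, const PetscScalar *y, PetscScalar lambda, AppCtx *user);

/* RHS of the reduced ODE: the boundary values and the eigenvalue are
   recovered inside the callback instead of being carried as unknowns in Y. */
static PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec Y, Vec F, void *ctx)
{
  AppCtx            *user = (AppCtx *)ctx;
  const PetscScalar *y;
  PetscScalar       *f, lambda;
  PetscInt           i;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = VecGetArrayRead(Y, &y);CHKERRQ(ierr);
  ierr = VecGetArray(F, &f);CHKERRQ(ierr);

  lambda = ComputeLambda(y, user);          /* eigenvalue from the current state  */
  for (i = 0; i < user->Ninterior; i++)     /* interior points only               */
    f[i] = InteriorRHS(i, y, lambda, user); /* boundary values enter the stencils */

  ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
  ierr = VecRestoreArrayRead(Y, &y);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The callback is then registered with TSSetRHSFunction(ts, NULL, RHSFunction, &user), and the problematic zero diagonal entry no longer arises, since \hat{\Lambda}_r is not a degree of freedom anymore.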
Hong (Mr.)
> On Oct 31, 2021, at 5:38 AM, 仓宇 <yhcy1993 at gmail.com> wrote:
>
> Thanks for your suggestions and for sharing your experience.
>
> But it seems that you have missed my point: the kind of zero diagonal
> element that I described earlier is unavoidable here.
> If you are interested in this problem, you may refer to the PDF file
> attached to the previous post.
>
> Regards
>
> Yu
>
> Zhang, Hong <hongzhang at anl.gov> wrote on Fri, Oct 29, 2021, at 10:05 PM:
>>
>> One way to avoid the zero element in the Jacobian is to exclude the boundary point from the solution vector. I often do this for Dirichlet boundary conditions, since the value at the boundary is given directly and does not need to be treated as a degree of freedom.
>>
>> Hong (Mr.)
>>
>> On Oct 28, 2021, at 9:49 PM, 仓宇 <yhcy1993 at gmail.com> wrote:
>>
>> Thanks for your careful inspection and thoughtful suggestions.
>>
>>> finite differencing may put a small non-zero value in that location due to numerical round-off
>>
>> I think your explanation is reasonable; this numerical round-off may be what avoids the pivot issue.
>>
>> The structure of my Jacobian matrix looks like this (generated with '-mat_view draw'):
>> <jac_view.png>
>> Analytically, the first diagonal element of the Jacobian is indeed 0, since its corresponding residual function is determined solely by the boundary condition of another variable. This seems a little weird but is mathematically well-posed. For more on the background physics, please refer to the attached PDF file, which gives a more detailed explanation of the discretization and boundary conditions.
>>
>> Actually, the Jacobian matrix is not singular, but I do believe this numerical difficulty is caused by the zero element on the diagonal.
>> In this regard, I've done some trial and error, and several methods seem to work around this pivot issue:
>> a) With '-pc_type svd', PETSc no longer fails with my hand-coded Jacobian, and a converged solution is obtained. Efficiency is also preserved.
>> b) With '-pc_type none', a converged solution is also obtained, but it takes too many KSP iterations per SNES step (usually hundreds) to converge, making the overall solution procedure very slow.
>>
>> Do you think these methods really solve this kind of pivot issue, rather than working by chance like the numerical round-off in the finite-difference Jacobian did earlier?
>>
>> Regards
>>
>> Yu Cang
>>
>> Barry Smith <bsmith at petsc.dev> wrote on Wed, Oct 27, 2021, at 9:43 PM:
>>>
>>>
>>> You can run with -ksp_error_if_not_converged to make it stop as soon as a linear solve fails, which helps track down the exact breaking point.
>>>
>>>> The problem under consideration contains an eigenvalue to be solved for,
>>>> which makes the first diagonal element of the Jacobian matrix zero.
>>>> From these outputs, it seems that the PC failed to factorize, which is
>>>> caused by this zero diagonal element. But I'm wondering why it works
>>>> with the Jacobian matrix generated by finite differencing?
>>>
>>> Presumably your "exact" Jacobian puts a zero on the diagonal while the finite differencing may put a small non-zero value in that location due to numerical round-off. In that case even if the factorization succeeds it may produce an inaccurate solution if the value on the diagonal is very small.
>>>
>>> If your matrix is singular or cannot be factored with LU then you need to use a different solver for the linear system that will be robust to the zero on the diagonal. What is the structure of your Jacobian? (The analytic form).
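>>>
>>> For example (standard options one could experiment with, not a guaranteed fix for this particular matrix), a dense SVD-based preconditioner, or LU with the zero pivot worked around:
>>>
>>>   -pc_type svd
>>>   -pc_factor_nonzeros_along_diagonal
>>>   -pc_factor_shift_type nonzero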
>>>
>>> Barry
>>>
>>>
>>>> On Oct 27, 2021, at 1:47 AM, 仓宇 <yhcy1993 at gmail.com> wrote:
>>>>
>>>> Thanks for your kind reply.
>>>>
>>>> Several comparison tests have been performed. The execution output
>>>> files are attached; the corresponding descriptions are below.
>>>>
>>>> good.txt -- Run without hand-coded jacobian, solution converged, with
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason
>>>> -ksp_monitor_true_residual -ksp_converged_reason';
>>>> jac1.txt -- Run with hand-coded jacobian, does not converge, with
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason
>>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian';
>>>> jac2.txt -- Run with hand-coded jacobian, does not converge, with
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason
>>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian
>>>> -ksp_view';
>>>> jac3.txt -- Run with hand-coded jacobian, does not converge, with
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason
>>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian
>>>> -ksp_view -ts_max_snes_failures -1 ';
>>>>
>>>> The problem under consideration contains an eigenvalue to be solved for,
>>>> which makes the first diagonal element of the Jacobian matrix zero.
>>>> From these outputs, it seems that the PC failed to factorize, which is
>>>> caused by this zero diagonal element. But I'm wondering why it works
>>>> with the Jacobian matrix generated by finite differencing? Would employing
>>>> DMDA for the discretization be helpful?
>>>>
>>>> Regards
>>>>
>>>> Yu Cang
>>>>
>>>> Barry Smith <bsmith at petsc.dev> wrote on Mon, Oct 25, 2021, at 10:50 PM:
>>>>>
>>>>>
>>>>> It is definitely unexpected that -snes_test_jacobian verifies the Jacobian as matching but the solve process is completely different.
>>>>>
>>>>> Please run with -snes_monitor -snes_converged_reason -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian and send all the output
>>>>>
>>>>> Barry
>>>>>
>>>>>
>>>>>> On Oct 25, 2021, at 9:53 AM, 仓宇 <yhcy1993 at gmail.com> wrote:
>>>>>>
>>>>>> I'm using TS to solve a set of DAEs that originate from a
>>>>>> one-dimensional problem. The grid points are uniformly distributed.
>>>>>> For simplicity, DMDA is not employed for the discretization.
>>>>>>
>>>>>> At first, only the residual function was prescribed through
>>>>>> 'TSSetIFunction', and PETSc produced converged results. However, after
>>>>>> providing a hand-coded Jacobian through 'TSSetIJacobian', the internal
>>>>>> SNES object fails (the residual norm does not change), and TS reports
>>>>>> 'DIVERGED_STEP_REJECTED'.
>>>>>>
>>>>>> I have tried adding the option '-snes_test_jacobian' to see if the
>>>>>> hand-coded Jacobian is wrong somewhere, but it reports
>>>>>> '||J - Jfd||_F/||J||_F = 1.07488e-10, ||J - Jfd||_F = 2.14458e-07',
>>>>>> indicating that the hand-coded Jacobian is correct.
>>>>>>
>>>>>> Then I added a monitor for the internal SNES object through
>>>>>> 'SNESMonitorSet', which displays the solution vector at each
>>>>>> iteration. Interestingly, if the Jacobian is not provided, meaning
>>>>>> finite differencing is used internally for the Jacobian evaluation,
>>>>>> the solution vector converges to the steady solution and the SNES
>>>>>> residual norm decreases continuously. However, as soon as the
>>>>>> Jacobian is provided, the solution vector NEVER changes, so the
>>>>>> solution procedure gets stuck!
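>>>>>>
>>>>>> For reference, the monitor is essentially the following sketch (the
>>>>>> function name MonitorSolution is just illustrative):
>>>>>>
>>>>>> static PetscErrorCode MonitorSolution(SNES snes, PetscInt it, PetscReal fnorm, void *ctx)
>>>>>> {
>>>>>>   Vec            x;
>>>>>>   PetscErrorCode ierr;
>>>>>>
>>>>>>   PetscFunctionBeginUser;
>>>>>>   /* print the residual norm and dump the current solution vector */
>>>>>>   ierr = PetscPrintf(PETSC_COMM_SELF, "SNES it %D, ||F|| = %g\n", it, (double)fnorm);CHKERRQ(ierr);
>>>>>>   ierr = SNESGetSolution(snes, &x);CHKERRQ(ierr);
>>>>>>   ierr = VecView(x, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
>>>>>>   PetscFunctionReturn(0);
>>>>>> }
>>>>>>
>>>>>> It is registered, after TSGetSNES(ts, &snes), with
>>>>>> SNESMonitorSet(snes, MonitorSolution, NULL, NULL);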
>>>>>>
>>>>>> This is quite strange! I hope to get some advice.
>>>>>> PETSc version 3.14.6; the program is run in serial mode.
>>>>>>
>>>>>> Regards
>>>>>>
>>>>>> Yu Cang
>>>>>
>>>> <jac3.txt><jac2.txt><jac1.txt><good.txt>
>>>
>> <TFM.pdf>
>>
>>