<div dir="ltr">Thanks for your patient explanation, and sorry for the late reply.<br><br>I do understand the manipulation techniques you mentioned, but this kind of manual substitution will not change the numerical difficulty of the problem.<br><br>In fact, before turning to PETSc, I had already tried to formulate this problem as an ODE, using well-established MATLAB solvers such as 'ode15s' and 'ode23'. But they failed to produce physically reasonable results. The key flaw lies in the treatment of the eigenvalue, denoted \hat{\Lambda}_r in the PDF file: only an implicit DAE formulation can fully capture its effect. This is why I finally decided to use the DAE functionality in PETSc.<div><br></div><div>I've excerpted a description of this problem from a monograph; I hope it clearly demonstrates the numerical requirements for solving it.</div><div><img src="cid:ii_kvozbbap0" alt="bvp.png" width="562" height="247"><br></div><div><p style="margin:0px 0px 0px 36px;font-stretch:normal;font-size:12px;line-height:normal;font-family:Helvetica;color:rgb(0,0,0)">Kee, R. J., Coltrin, M. E., Glarborg, P., & Zhu, H. (2017). Stagnation Flows. In <i>Chemically Reacting Flow</i> (2nd ed., p. 232).</p></div><div><br></div></div><br><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">Zhang, Hong <<a href="mailto:hongzhang@anl.gov">hongzhang@anl.gov</a>> wrote on Mon, Nov 1, 2021 at 10:21 PM:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">You can avoid the zero-diagonal elements in your case. Just remove the boundary points and \hat{\Lambda}_r from the solution vector Y (eq. (17) in your PDF file). You can calculate them separately in your RHS function (when computing the residual vector for 2 <= i <= N-1). It is not necessary to form a DAE. You can actually form an ODE instead.<br>
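The elimination described above — computing the algebraic quantities (the boundary values and \hat{\Lambda}_r) inside the RHS so that only genuine ODE unknowns remain in Y — can be sketched on a toy index-1 DAE. This is purely illustrative: the equations are made up, and scipy's solve_ivp stands in for TS.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy index-1 DAE:  y' = -lam * y,   0 = y + lam - 1.
# Instead of carrying lam as an extra DAE unknown (which would put a
# zero on the Jacobian diagonal), solve the algebraic constraint
# inside the RHS; what remains is a plain ODE in y alone.
def rhs(t, y):
    lam = 1.0 - y[0]          # eliminate the algebraic variable locally
    return [-lam * y[0]]

sol = solve_ivp(rhs, (0.0, 5.0), [0.5], rtol=1e-8, atol=1e-10)
y_end = sol.y[0, -1]
lam_end = 1.0 - y_end         # recover the eliminated variable afterwards
```

Whether such a local elimination is possible for \hat{\Lambda}_r is exactly the point under discussion; if the constraint cannot be solved explicitly inside the RHS, an implicit DAE formulation remains necessary.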
<br>
Hong (Mr.)<br>
<br>
> On Oct 31, 2021, at 5:38 AM, 仓宇 <<a href="mailto:yhcy1993@gmail.com" target="_blank">yhcy1993@gmail.com</a>> wrote:<br>
> <br>
> Thanks for your suggestions and experience sharing.<br>
> <br>
> But it seems that you have missed my point: the kind of zero-diagonal<br>
> element I described earlier is essentially unavoidable.<br>
> If you are interested in this problem, you may refer to the PDF file<br>
> attached to my previous post.<br>
> <br>
> Regards<br>
> <br>
> Yu<br>
> <br>
> Zhang, Hong <<a href="mailto:hongzhang@anl.gov" target="_blank">hongzhang@anl.gov</a>> wrote on Fri, Oct 29, 2021 at 10:05 PM:<br>
>> <br>
>> One way to avoid the zero element in Jacobian is to exclude the boundary point from the solution vector. I often do this for Dirichlet boundary conditions since the value at the boundary is given directly and does not need to be taken as a degree of freedom.<br>
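As a minimal sketch of this idea (a hypothetical 1D Laplace problem with Dirichlet data, not the poster's system): the boundary values are held out of the unknown vector and re-attached only when evaluating the residual, so they never contribute a trivial row to the Jacobian.

```python
import numpy as np

# 1D Laplace problem u'' = 0 on (0,1), Dirichlet u(0)=0, u(1)=1.
# The boundary values are NOT unknowns: only the N interior points
# enter the solution vector; the known Dirichlet data appears in the
# residual directly.
N = 11                      # number of interior points
h = 1.0 / (N + 1)
ua, ub = 0.0, 1.0           # Dirichlet data, held out of the unknowns

def residual(u):
    full = np.concatenate(([ua], u, [ub]))  # re-attach boundary values
    r = np.empty(N)
    for i in range(N):
        # standard second-order central difference for u''
        r[i] = (full[i] - 2.0 * full[i + 1] + full[i + 2]) / h**2
    return r

# Solve residual(u) = 0; the problem is linear, so one Newton-like
# step with the (tridiagonal) Jacobian suffices.
A = np.diag([-2.0] * N) + np.diag([1.0] * (N - 1), 1) + np.diag([1.0] * (N - 1), -1)
A /= h**2
b = -residual(np.zeros(N))
u = np.linalg.solve(A, b)   # interior solution; exact answer is u(x) = x
```

The same bookkeeping carries over to a PETSc residual function: the boundary data is applied while assembling the interior residual entries rather than stored as degrees of freedom.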
>> <br>
>> Hong (Mr.)<br>
>> <br>
>> On Oct 28, 2021, at 9:49 PM, 仓宇 <<a href="mailto:yhcy1993@gmail.com" target="_blank">yhcy1993@gmail.com</a>> wrote:<br>
>> <br>
>> Thanks for your careful inspection and thoughtful suggestions.<br>
>> <br>
>>> finite differencing may put a small non-zero value in that location due to numerical round-off<br>
>> <br>
>> I think your explanation is reasonable: this numerical round-off may be what masks the pivot issue.<br>
>> <br>
>> The structure of my Jacobian matrix looks like this (generated by '-mat_view draw'):<br>
>> <jac_view.png><br>
>> Analytically, the first diagonal element of the Jacobian is indeed 0, as its corresponding residual function is determined solely by the boundary condition of another variable. This seems a little weird but is mathematically well-posed. For more on the background physics, please refer to the attached PDF file, which explains the discretization and boundary conditions in detail.<br>
>> <br>
>> Actually, the Jacobian matrix is not singular, but I do believe this numerical difficulty is caused by the zero element on the diagonal.<br>
>> In this regard, I've done some trial and error, and several methods seem to work around this pivot issue:<br>
>> a) With '-pc_type svd', PETSc no longer fails with my hand-coded Jacobian, and a converged solution is obtained. Efficiency is also preserved.<br>
>> b) With '-pc_type none', a converged solution is also obtained, but it takes too many KSP iterations (usually hundreds) to converge per SNES step, making the overall solution procedure very slow.<br>
>> <br>
>> Do you think these methods genuinely resolve this kind of pivot issue, rather than succeeding by chance like the numerical round-off in the finite-difference case?<br>
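The contrast in (a) can be reproduced on a tiny made-up system: LU without pivoting breaks down on the exact zero pivot, while an SVD-based solve (conceptually what '-pc_type svd' does) is unaffected. Illustrative only, using numpy's pinv rather than PETSc.

```python
import numpy as np

# Nonsingular 2x2 system whose first diagonal entry is exactly zero,
# mimicking the Jacobian structure described above.
A = np.array([[0.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 3.0])

def lu_no_pivot(A):
    """Naive LU factorization without row pivoting: dies on a zero pivot."""
    U = A.astype(float).copy()
    n = U.shape[0]
    for k in range(n - 1):
        if U[k, k] == 0.0:
            raise ZeroDivisionError("zero pivot encountered")
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]
            U[i, k + 1:] -= m * U[k, k + 1:]
    return U

try:
    lu_no_pivot(A)
    lu_failed = False
except ZeroDivisionError:
    lu_failed = True   # factorization fails, like an un-pivoted PC would

# An SVD-based solve does not care about the zero diagonal entry:
x = np.linalg.pinv(A) @ b   # here equals the exact solution [1, 1]
```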
>> <br>
>> Regards<br>
>> <br>
>> Yu Cang<br>
>> <br>
>> Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>> wrote on Wed, Oct 27, 2021 at 9:43 PM:<br>
>>> <br>
>>> <br>
>>> You can run with -ksp_error_if_not_converged to get it to stop as soon as a linear solve fails to help track down the exact breaking point.<br>
>>> <br>
>>>> The problem under consideration contains an eigenvalue to be solved for,<br>
>>>> making the first diagonal element of the Jacobian matrix zero.<br>
>>>> From these outputs, it seems that the PC failed to factorize, which is<br>
>>>> caused by this 0 diagonal element. But I'm wondering why it works<br>
>>>> with the Jacobian matrix generated by finite differences?<br>
>>> <br>
>>> Presumably your "exact" Jacobian puts a zero on the diagonal while the finite differencing may put a small non-zero value in that location due to numerical round-off. In that case even if the factorization succeeds it may produce an inaccurate solution if the value on the diagonal is very small.<br>
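This can be seen in the classic 2x2 pivoting example (numbers made up for illustration): with a tiny round-off-sized value in place of the exact zero, elimination without pivoting "succeeds" but returns a badly inaccurate solution.

```python
import numpy as np

# 2x2 system whose exact-zero diagonal entry has been replaced by a
# tiny value, as finite-difference round-off might produce.
eps = 1e-20
A = np.array([[eps, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 3.0])

# Gaussian elimination WITHOUT pivoting now "succeeds"...
m = A[1, 0] / A[0, 0]          # huge multiplier, 1e20
u22 = A[1, 1] - m * A[0, 1]    # the 2 is swallowed by round-off
b2 = b[1] - m * b[0]
x2 = b2 / u22
x1 = (b[0] - A[0, 1] * x2) / A[0, 0]
x_no_pivot = np.array([x1, x2])   # badly wrong in the first component

# ...while a properly pivoted solve recovers the answer, close to [1, 1]:
x_pivoted = np.linalg.solve(A, b)
```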
>>> <br>
>>> If your matrix is singular or cannot be factored with LU then you need to use a different solver for the linear system that will be robust to the zero on the diagonal. What is the structure of your Jacobian? (The analytic form).<br>
>>> <br>
>>> Barry<br>
>>> <br>
>>> <br>
>>>> On Oct 27, 2021, at 1:47 AM, 仓宇 <<a href="mailto:yhcy1993@gmail.com" target="_blank">yhcy1993@gmail.com</a>> wrote:<br>
>>>> <br>
>>>> Thanks for your kind reply.<br>
>>>> <br>
>>>> Several comparison tests have been performed. Attached are execution<br>
>>>> output files. Below are corresponding descriptions.<br>
>>>> <br>
>>>> good.txt -- Run without hand-coded jacobian, solution converged, with<br>
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason<br>
>>>> -ksp_monitor_true_residual -ksp_converged_reason';<br>
>>>> jac1.txt -- Run with hand-coded jacobian, does not converge, with<br>
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason<br>
>>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian';<br>
>>>> jac2.txt -- Run with hand-coded jacobian, does not converge, with<br>
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason<br>
>>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian<br>
>>>> -ksp_view';<br>
>>>> jac3.txt -- Run with hand-coded jacobian, does not converge, with<br>
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason<br>
>>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian<br>
>>>> -ksp_view -ts_max_snes_failures -1 ';<br>
>>>> <br>
>>>> The problem under consideration contains an eigenvalue to be solved for,<br>
>>>> making the first diagonal element of the Jacobian matrix zero.<br>
>>>> From these outputs, it seems that the PC failed to factorize, which is<br>
>>>> caused by this 0 diagonal element. But I'm wondering why it works<br>
>>>> with the Jacobian matrix generated by finite differences? Would employing<br>
>>>> DMDA for discretization be helpful?<br>
>>>> <br>
>>>> Regards<br>
>>>> <br>
>>>> Yu Cang<br>
>>>> <br>
>>>> Barry Smith <<a href="mailto:bsmith@petsc.dev" target="_blank">bsmith@petsc.dev</a>> wrote on Mon, Oct 25, 2021 at 10:50 PM:<br>
>>>>> <br>
>>>>> <br>
>>>>> It is definitely unexpected that -snes_test_jacobian verifies the Jacobian as matching but the solve process is completely different.<br>
>>>>> <br>
>>>>> Please run with -snes_monitor -snes_converged_reason -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian and send all the output<br>
>>>>> <br>
>>>>> Barry<br>
>>>>> <br>
>>>>> <br>
>>>>>> On Oct 25, 2021, at 9:53 AM, 仓宇 <<a href="mailto:yhcy1993@gmail.com" target="_blank">yhcy1993@gmail.com</a>> wrote:<br>
>>>>>> <br>
>>>>>> I'm using TS to solve a set of DAE, which originates from a<br>
>>>>>> one-dimensional problem. The grid points are uniformly distributed.<br>
>>>>>> For simplicity, the DMDA is not employed for discretization.<br>
>>>>>> <br>
>>>>>> At first, only the residual function is prescribed through<br>
>>>>>> 'TSSetIFunction', and PETSc produces converged results. However, after<br>
>>>>>> providing hand-coded Jacobian through 'TSSetIJacobian', the internal<br>
>>>>>> SNES object fails (residual norm does not change), and TS reports<br>
>>>>>> 'DIVERGED_STEP_REJECTED'.<br>
>>>>>> <br>
>>>>>> I have tried adding the option '-snes_test_jacobian' to see if the<br>
>>>>>> hand-coded Jacobian is wrong somewhere, but it shows '||J -<br>
>>>>>> Jfd||_F/||J||_F = 1.07488e-10, ||J - Jfd||_F = 2.14458e-07',<br>
>>>>>> indicating that the hand-coded Jacobian is correct.<br>
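For reference, the check that '-snes_test_jacobian' performs can be mimicked in a few lines: build a finite-difference Jacobian column by column and compare Frobenius norms. The residual F below is a made-up toy, not the actual system.

```python
import numpy as np

# Toy residual and its hand-coded Jacobian (hypothetical example).
def F(x):
    return np.array([x[0]**2 + x[1], np.sin(x[0]) - x[1]**2])

def J_exact(x):
    return np.array([[2.0 * x[0], 1.0],
                     [np.cos(x[0]), -2.0 * x[1]]])

def J_fd(x, h=1e-7):
    """Forward-difference Jacobian, one perturbed column at a time."""
    n = len(x)
    J = np.zeros((n, n))
    f0 = F(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - f0) / h
    return J

x = np.array([0.7, -0.3])
diff = np.linalg.norm(J_exact(x) - J_fd(x))      # ||J - Jfd||_F
rel = diff / np.linalg.norm(J_exact(x))          # ||J - Jfd||_F / ||J||_F
```

A small relative norm like the one quoted above means the two Jacobians agree to within finite-difference truncation and round-off error.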
>>>>>> <br>
>>>>>> Then, I added a monitor for the internal SNES object through<br>
>>>>>> 'SNESMonitorSet', in which the solution vector will be displayed at<br>
>>>>>> each iteration. It is interesting to find that, if the Jacobian is not<br>
>>>>>> provided, meaning finite differencing is used internally for the Jacobian<br>
>>>>>> evaluation, the solution vector converges to the steady<br>
>>>>>> solution and the SNES residual norm decreases continuously. However,<br>
>>>>>> as long as the Jacobian is provided, the solution<br>
>>>>>> vector will NEVER change, so the solution procedure gets stuck!<br>
>>>>>> <br>
>>>>>> This is quite strange; I hope to get some advice.<br>
>>>>>> PETSc version 3.14.6, program run in serial mode.<br>
>>>>>> <br>
>>>>>> Regards<br>
>>>>>> <br>
>>>>>> Yu Cang<br>
>>>>> <br>
>>>> <jac3.txt><jac2.txt><jac1.txt><good.txt><br>
>>> <br>
>> <TFM.pdf><br>
>> <br>
>> <br>
<br>
</blockquote></div>