I think you can use the following approach. Instead of imposing the boundary condition as, for example,

   V_1(t) = 0

you impose the equivalent

   V_1(t=0) = 0   and

   d V_1(t)
   -------- = 0
      dt

Now if you apply backward Euler to this you get

   V_1^{n+1} - V_1^{n}
   ------------------- = 0
            Dt

or, in your residual form,

             V_1^{n+1} - V_1^{n}
   R_{V_1} = -------------------
                      Dt

The Jacobian of this puts a 1/Dt on the diagonal for this row (and similarly for the other problematic rows).

Now your Jacobian is nonsingular and you can solve with it with no difficulties.

Note that you get the same result if you think of the problem as a DAE, because then the residual equation is R_{V_1} = V_1^{n+1} and its derivative with respect to V_1^{n+1} is 1 (not zero). So you still end up with a nonzero on the diagonal. Either way, you can keep these rows in the ODE right-hand side and Jacobian, or you can eliminate them (analytically, as Hong suggested) so the ODE solver never sees them; in both cases the results are identical.

  Barry
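(As a rough illustration only, not code from this thread: the row Barry describes might look like the following in a TSIFunction/TSIJacobian pair, assuming V_1 is stored at index 0 and using the PETSc 3.14-era error handling. The shift 'a' that TS passes to the IJacobian is the 1/Dt-like factor that lands on the diagonal for this row.)

  #include <petscts.h>

  /* Sketch only: boundary row recast as dV_1/dt = 0, assuming V_1 sits at index 0. */
  PetscErrorCode IFunction(TS ts, PetscReal t, Vec U, Vec Udot, Vec F, void *ctx)
  {
    PetscErrorCode    ierr;
    const PetscScalar *udot;
    PetscScalar       *f;

    ierr = VecGetArrayRead(Udot, &udot);CHKERRQ(ierr);
    ierr = VecGetArray(F, &f);CHKERRQ(ierr);
    f[0] = udot[0];            /* R_{V_1} = dV_1/dt instead of the algebraic row V_1 = 0 */
    /* ... residuals for the interior points go here ... */
    ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
    ierr = VecRestoreArrayRead(Udot, &udot);CHKERRQ(ierr);
    return 0;
  }

  PetscErrorCode IJacobian(TS ts, PetscReal t, Vec U, Vec Udot, PetscReal a, Mat J, Mat P, void *ctx)
  {
    PetscErrorCode ierr;

    /* dR_{V_1}/dV_1 = a, the shift TS passes in (roughly 1/Dt for backward Euler),
       so this row has a nonzero diagonal entry. */
    ierr = MatSetValue(P, 0, 0, a, INSERT_VALUES);CHKERRQ(ierr);
    /* ... entries for the interior rows go here ... */
    ierr = MatAssemblyBegin(P, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(P, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    if (J != P) {
      ierr = MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
      ierr = MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    }
    return 0;
  }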
On Nov 7, 2021, at 3:45 AM, 仓宇 <yhcy1993@gmail.com> wrote:

Thanks for your patient explanation, and sorry for this late reply.

I do understand the manipulations that you mentioned, but this kind of manual substitution will not change the numerical difficulty of the problem.

In fact, before I turned to PETSc, I had already tried to formulate this problem as an ODE, using well-established MATLAB functions such as 'ode15s' and 'ode23'. But that failed to produce physically reasonable results. The key flaw lies in the treatment of the eigenvalue, namely \hat{\Lambda}_r as denoted in the PDF file. Only an implicit DAE formulation can fully embody its effect. This is why I finally decided to turn to the DAE functionality in PETSc.

I've excerpted some description of this problem from a monograph; I hope it clearly demonstrates the numerical requirements for solving it.
<bvp.png>
    Kee, R. J., Coltrin, M. E., Glarborg, P., & Zhu, H. (2017). Stagnation Flows. In Chemically Reacting Flow (2nd ed., p. 232).

Zhang, Hong <hongzhang@anl.gov> wrote on Monday, November 1, 2021, at 10:21 PM:

You can avoid the zero-diagonal elements in your case. Just remove the boundary points and the \hat{\Lambda}_r from the solution vector Y (eq. (17) in your PDF file). You can calculate them separately in your RHS function (when computing the residual vector for 2 <= i <= N-1). It is not necessary to form a DAE. You can actually form an ODE instead.

Hong (Mr.)

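(A minimal sketch, with assumed names and an assumed layout, of what Hong's suggestion might look like: only the interior unknowns live in the solution vector, and the boundary values and \hat{\Lambda}_r are reconstructed inside the RHS function so they never appear in the Jacobian. The placeholder residual and the Lambda_r closure are illustrative, not the actual discretization.)

  #include <petscts.h>

  /* Sketch: U holds only the interior unknowns i = 2..N-1. Boundary values and the
     eigenvalue Lambda_r are computed locally instead of being degrees of freedom. */
  PetscErrorCode RHSFunction(TS ts, PetscReal t, Vec U, Vec F, void *ctx)
  {
    PetscErrorCode    ierr;
    const PetscScalar *u;
    PetscScalar       *f;
    PetscScalar       Vleft = 0.0, Vright = 0.0; /* Dirichlet values, applied directly */
    PetscScalar       Lambda_r = 0.0;            /* placeholder: evaluate the eigenvalue
                                                    closure from u here                */
    PetscInt          i, n;

    ierr = VecGetLocalSize(U, &n);CHKERRQ(ierr);
    ierr = VecGetArrayRead(U, &u);CHKERRQ(ierr);
    ierr = VecGetArray(F, &f);CHKERRQ(ierr);
    for (i = 0; i < n; i++) {
      PetscScalar left  = (i == 0)     ? Vleft  : u[i-1];
      PetscScalar right = (i == n - 1) ? Vright : u[i+1];
      /* placeholder residual: replace with the actual discretized equations,
         which also involve Lambda_r */
      f[i] = left - 2.0*u[i] + right + Lambda_r;
    }
    ierr = VecRestoreArray(F, &f);CHKERRQ(ierr);
    ierr = VecRestoreArrayRead(U, &u);CHKERRQ(ierr);
    return 0;
  }

Such a function would be registered with TSSetRHSFunction(ts, NULL, RHSFunction, &user), giving a plain ODE with no zero diagonal.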
> On Oct 31, 2021, at 5:38 AM, 仓宇 <yhcy1993@gmail.com> wrote:
> 
> Thanks for your suggestions and experience sharing.
> 
> But it seems that you have missed my point. The kind of zero-diagonal
> element that I described earlier is essentially inevitable.
> If you are interested in this problem, you may refer to the PDF file that was
> attached to the previous post.
> 
> Regards
> 
> Yu
> 
> Zhang, Hong <hongzhang@anl.gov> wrote on Friday, October 29, 2021, at 10:05 PM:
>> 
>> One way to avoid the zero element in the Jacobian is to exclude the boundary point from the solution vector. I often do this for Dirichlet boundary conditions, since the value at the boundary is given directly and does not need to be taken as a degree of freedom.
>> 
>> Hong (Mr.)
>> 
>> On Oct 28, 2021, at 9:49 PM, 仓宇 <yhcy1993@gmail.com> wrote:
>> 
>> Thanks for your careful inspection and thoughtful suggestions.
>> 
>>> finite differencing may put a small non-zero value in that location due to numerical round-off
>> 
>> I think your explanation is reasonable. This numerical round-off may have helped to avoid the pivot issue.
>> 
>> The structure of my Jacobian matrix looks like this (generated by '-mat_view draw'):
>> <jac_view.png>
>> Analytically, the first diagonal element of the Jacobian is indeed 0, as its corresponding residual function is determined solely by the boundary condition of another variable. This seems a little weird but is mathematically well-posed. For more on the background physics, please refer to the attached PDF file, which explains the discretization and boundary conditions in detail.
>> 
>> Actually, the Jacobian matrix is not singular, but I do believe this numerical difficulty is caused by the zero element on the diagonal.
>> In this regard, I've run some trials, and several methods seem to work around this pivot issue:
>> a) By setting '-pc_type svd', PETSc no longer fails with my hand-coded Jacobian, and a converged solution is obtained. Efficiency is also preserved.
>> b) By setting '-pc_type none', a converged solution is also obtained, but it takes too many KSP iterations per SNES step (usually hundreds), making the overall solution procedure very slow.
>> 
>> Do you think these methods really solve this kind of pivot issue, rather than working by chance like the numerical round-off in the finite-difference Jacobian did earlier?
>> 
>> Regards
>> 
>> Yu Cang
>> 
>> Barry Smith <bsmith@petsc.dev> wrote on Wednesday, October 27, 2021, at 9:43 PM:
>>> 
>>> 
>>> You can run with -ksp_error_if_not_converged to get it to stop as soon as a linear solve fails, to help track down the exact breaking point.
>>> 
>>>> The problem under consideration contains an eigenvalue to be solved for,
>>>> making the first diagonal element of the Jacobian matrix zero.
>>>> From these outputs, it seems that the PC failed to factorize, which is
>>>> caused by this 0 diagonal element. But I'm wondering why it works
>>>> with the Jacobian matrix generated by finite differences?
>>> 
>>> Presumably your "exact" Jacobian puts a zero on the diagonal, while finite differencing may put a small non-zero value in that location due to numerical round-off. In that case, even if the factorization succeeds, it may produce an inaccurate solution if the value on the diagonal is very small.
>>> 
>>> If your matrix is singular or cannot be factored with LU, then you need to use a different solver for the linear system, one that is robust to the zero on the diagonal. What is the structure of your Jacobian (the analytic form)?
>>> 
>>> Barry
>>> 
>>> 
>>>> On Oct 27, 2021, at 1:47 AM, 仓宇 <yhcy1993@gmail.com> wrote:
>>>> 
>>>> Thanks for your kind reply.
>>>> 
>>>> Several comparison tests have been performed. Attached are execution
>>>> output files. Below are corresponding descriptions.
>>>> 
>>>> good.txt -- Run without hand-coded jacobian, solution converged, with
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason
>>>> -ksp_monitor_true_residual -ksp_converged_reason';
>>>> jac1.txt -- Run with hand-coded jacobian, does not converge, with
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason
>>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian';
>>>> jac2.txt -- Run with hand-coded jacobian, does not converge, with
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason
>>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian
>>>> -ksp_view';
>>>> jac3.txt -- Run with hand-coded jacobian, does not converge, with
>>>> option '-ts_monitor -snes_monitor -snes_converged_reason
>>>> -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian
>>>> -ksp_view -ts_max_snes_failures -1 ';
>>>> 
>>>> The problem under consideration contains an eigenvalue to be solved for,
>>>> making the first diagonal element of the Jacobian matrix zero.
>>>> From these outputs, it seems that the PC failed to factorize, which is
>>>> caused by this 0 diagonal element. But I'm wondering why it works
>>>> with the Jacobian matrix generated by finite differences? Would employing
>>>> DMDA for the discretization be helpful?
>>>> 
>>>> Regards
>>>> 
>>>> Yu Cang
>>>> 
>>>> Barry Smith <bsmith@petsc.dev> wrote on Monday, October 25, 2021, at 10:50 PM:
>>>>> 
>>>>> 
>>>>> It is definitely unexpected that -snes_test_jacobian verifies the Jacobian as matching but the solve process is completely different.
>>>>> 
>>>>> Please run with -snes_monitor -snes_converged_reason -ksp_monitor_true_residual -ksp_converged_reason -snes_test_jacobian and send all the output.
>>>>> 
>>>>> Barry
>>>>> 
>>>>> 
>>>>>> On Oct 25, 2021, at 9:53 AM, 仓宇 <yhcy1993@gmail.com> wrote:
>>>>>> 
>>>>>> I'm using TS to solve a set of DAEs, which originates from a
>>>>>> one-dimensional problem. The grid points are uniformly distributed.
>>>>>> For simplicity, DMDA is not employed for the discretization.
>>>>>> 
>>>>>> At first, only the residual function is prescribed through
>>>>>> 'TSSetIFunction', and PETSc produces converged results. However, after
>>>>>> providing a hand-coded Jacobian through 'TSSetIJacobian', the internal
>>>>>> SNES object fails (the residual norm does not change), and TS reports
>>>>>> 'DIVERGED_STEP_REJECTED'.
>>>>>> 
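(For reference, the setup being described corresponds roughly to the following two calls; 'ts', 'J', 'user', and the callback names here are assumptions, not the actual code.)

  TSSetIFunction(ts, NULL, IFunction, &user);  /* residual only: TS finite-differences the Jacobian */
  TSSetIJacobian(ts, J, J, IJacobian, &user);  /* adding this switches to the hand-coded Jacobian    */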
>>>>>> I have tried adding the option '-snes_test_jacobian' to see if the
>>>>>> hand-coded Jacobian is somewhere wrong, but it shows '||J -
>>>>>> Jfd||_F/||J||_F = 1.07488e-10, ||J - Jfd||_F = 2.14458e-07',
>>>>>> indicating that the hand-coded Jacobian is correct.
>>>>>> 
>>>>>> Then, I added a monitor for the internal SNES object through
>>>>>> 'SNESMonitorSet', in which the solution vector is displayed at
>>>>>> each iteration. Interestingly, if the Jacobian is not
>>>>>> provided, meaning finite differencing is used for Jacobian
>>>>>> evaluation internally, the solution vector converges to the steady
>>>>>> solution and the SNES residual norm is reduced continuously. However,
>>>>>> as soon as the Jacobian is provided, the solution
>>>>>> vector NEVER changes, so the solution procedure gets stuck!
>>>>>> 
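(A minimal sketch of such a monitor, with assumed names and assuming 'ts' is the existing TS object; this is not the poster's actual code.)

  #include <petscts.h>

  /* Prints the current solution at every SNES iteration. */
  PetscErrorCode SolutionMonitor(SNES snes, PetscInt it, PetscReal fnorm, void *ctx)
  {
    PetscErrorCode ierr;
    Vec            x;

    ierr = SNESGetSolution(snes, &x);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_SELF, "SNES iteration %D, ||F|| = %g\n", it, (double)fnorm);CHKERRQ(ierr);
    ierr = VecView(x, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
    return 0;
  }

  /* Attached to the SNES inside TS: */
  SNES snes;
  TSGetSNES(ts, &snes);
  SNESMonitorSet(snes, SolutionMonitor, NULL, NULL);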
>>>>>> This is quite strange! I hope to get some advice.
>>>>>> PETSc version 3.14.6; the program runs in serial mode.
>>>>>> 
>>>>>> Regards
>>>>>> 
>>>>>> Yu Cang
>>>>> 
>>>> <jac3.txt><jac2.txt><jac1.txt><good.txt>
>>> 
>> <TFM.pdf>
>> 
>> 