<div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><br></div>Thanks for your reply.<br><div><div class="gmail_quote"><div dir="ltr">Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> 于2018年12月27日周四 上午2:12写道:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><br>
If I understand correctly, you are able to provide the right-hand-side function but only the block-diagonal part of the Jacobian (with one block per field). Is this correct? <br></blockquote><div>Yes, it is. <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
<br>
When you use the block diagonal part of the Jacobian as the preconditioner for the matrix-free application of the Jacobian, you get very slow convergence of KSP? <br></blockquote><div>Yes. When I use a low-order ILU (e.g. ILU(2)), the KSP residual norm drops to a certain level and then oscillates. The log is provided below. The converged result is also provided below (mpcpthpboutput).<br></div>
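<div>For reference, the options for this test look roughly like the following (an approximation of my setup, not the exact command line):</div><div><pre>
# matrix-free Newton with the assembled block-diagonal matrix as preconditioner
-snes_mf_operator -snes_monitor -ksp_monitor -ksp_converged_reason
# low-order ILU, which is where the residual stalls and oscillates
-pc_type ilu -pc_factor_levels 2
# the linear tolerance I try to reach (ILU(20) is needed in practice)
-ksp_rtol 1e-2
</pre></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">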
<br>
And you are trying to use the block diagonal part of the Jacobian as the operator A(x) for the Picard iteration? This will not in general work because Picard requires an accurate application of the operator A(); using just the block diagonal part of A() doesn't even provide you with a consistent nonlinear system to solve. <br></blockquote><div>So in my Picard program, I moved all the terms that do not contain the current physical field's variable to the right-hand side (for the temperature field, all terms that do not contain temperature), and these constitute the right-hand side b(x). <br></div>
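<div>Concretely, the registration of this split looks roughly like the sketch below; FormPicardRHS and FormPicardMatrix are placeholder names for my own routines that compute b(x) and assemble the block-diagonal A(x):</div><div><pre>
#include <petscsnes.h>

/* user callbacks registered through SNESSetPicard():
   FormPicardRHS    computes the right-hand side b(x) (all terms moved to the right side)
   FormPicardMatrix assembles the block-diagonal A(x) at the current iterate */
extern PetscErrorCode FormPicardRHS(SNES, Vec, Vec, void *);
extern PetscErrorCode FormPicardMatrix(SNES, Vec, Mat, Mat, void *);

PetscErrorCode SetupPicard(SNES snes, Vec r, Mat A, void *user)
{
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  /* By default SNES then performs the Picard iteration A(x^n) x^{n+1} = b(x^n);
     with -snes_mf_operator it instead runs Newton matrix-free and uses A(x^n)
     only to build the preconditioner. */
  ierr = SNESSetPicard(snes, r, FormPicardRHS, A, A, FormPicardMatrix, user);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}
</pre></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">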
<br>
I think that your fundamental problem is that the block diagonal part of the Jacobian is simply a very poor approximation to the true Jacobian and hence any preconditioner built from it is a poor preconditioner. I think you need to bite the bullet and compute the full Jacobian, including the terms that couple the different fields; in this way you get a good approximation to the true Jacobian and can solve each KSP linear system efficiently. How difficult would it be to compute these other terms in the Jacobian?<br></blockquote><div>In my opinion, the difficulty in constructing my Jacobian matrix is the complex coefficients (e.g. the thermal conductivity λ and the density ρ). <br></div><div>For example, in the temperature equation (T):</div><div> ∇(λ∇T) - ∇(ρ Cp u) + Q = 0</div><div>λ is the thermal conductivity, ρ is the density, Cp is the specific heat, u is the velocity, Q is the source.</div><div>λ = 1.9*(1.0e-3)*pow(T+273.0-150.0,1.29), a function of T<br></div><div>ρ = (0.4814*P/1.0e3/(T+273.15))/(1.0+0.446*(1.0e-2)*P/1.0e3/(pow(T+273.15,1.2))), a function of T and P</div><div><br></div><div>Since the coefficients themselves depend on the unknowns, the entries of the Jacobian are complicated to derive.</div><div>In my snes_mf_operator method, I treat λ and ρ as constants when assembling the preconditioning matrix. At every nonlinear step, I use the current solution to update λ and ρ and thus update the preconditioning matrix. At each residual function call (in SNESFormFunction), I also update the coefficients to ensure the residual function is evaluated correctly.</div><div><br></div>
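<div>To make this concrete, below is a minimal sketch of how this preconditioner update can look in the Jacobian routine used together with -snes_mf_operator. AssembleTemperatureBlock, AssemblePressureBlock and AppCtx are placeholder names (not my actual routines, and not PETSc functions); note that the closed-form derivative dλ/dT is also available, so the coupling terms Barry mentions could in principle be added in the same place.</div><div><pre>
#include <petscsnes.h>

typedef struct {
  /* application data: mesh sizes, material tables, ... */
} AppCtx;

/* closed-form coefficient and its derivative with respect to T */
PetscReal Lambda(PetscReal T)     { return 1.9e-3*PetscPowReal(T + 273.0 - 150.0, 1.29); }
PetscReal dLambda_dT(PetscReal T) { return 1.9e-3*1.29*PetscPowReal(T + 273.0 - 150.0, 0.29); }

/* Jacobian routine used with -snes_mf_operator: J is applied matrix-free by SNES,
   Pmat holds only the block-diagonal (within-field) entries, rebuilt here from
   the current iterate X so that the preconditioner follows the nonlinear step. */
PetscErrorCode FormJacobian(SNES snes, Vec X, Mat J, Mat Pmat, void *ctx)
{
  AppCtx            *user = (AppCtx *)ctx;
  const PetscScalar *x;
  PetscErrorCode     ierr;

  PetscFunctionBeginUser;
  ierr = VecGetArrayRead(X, &x);CHKERRQ(ierr);
  /* evaluate lambda(T), rho(T,P), ... at the current iterate and insert the
     within-field entries of each field block into Pmat, e.g.
       AssembleTemperatureBlock(user, x, Pmat);
       AssemblePressureBlock(user, x, Pmat);
     the inter-field coupling entries (for instance terms built from dLambda_dT())
     could be added here as well to move Pmat closer to the true Jacobian */
  ierr = VecRestoreArrayRead(X, &x);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(Pmat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(Pmat, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  if (J != Pmat) {
    ierr = MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}
</pre></div><div>Even with this update the preconditioner contains no inter-field coupling, which may be why such a high ILU level is needed.</div><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">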
<br>
Barry<br>
<br>
> On Dec 26, 2018, at 9:24 PM, Yingjie Wu <<a href="mailto:yjwu16@gmail.com" target="_blank">yjwu16@gmail.com</a>> wrote:<br>
> <br>
> Thank you very much for your previous reply.<br>
> I pay attention to the Picard and NonlinearGS methods because I use the snes_mf method with a preconditioning matrix to solve my problem. The quality of the preconditioning matrix and of the initial guess seriously affects the convergence of my problem. Because the preconditioning matrix is of poor quality and I cannot give a reasonable initial guess, the convergence is not good. A high-order ILU (in my case ILU(20)) is needed to ensure that the linear residual can be reduced to ksp_rtol 1e-2. In the future, the program may grow more complex and the convergence may get worse. However, Picard and NonlinearGS are two methods that ensure convergence. Although their convergence rate is very slow, the results of these two methods can be used to provide a reasonable initial guess for my program. So I need a solver similar to the Picard or Gauss-Seidel iterative methods, which would be very useful for providing an initial guess or for comparing with the original program.<br>
> I tried the Picard method, which splits the original residual function SNESComputeFunction F(x) = A(x)x - b(x) into two parts: the right-hand side b(x) goes to SNESPicardComputeFunction, and A(x) goes to SNESPicardComputeJacobian (in snes_mf, I use A(x) as the preconditioning matrix). Because A(x) only includes the relations within a single physical field, it does not account for the coupling between physical fields, so it is equivalent to the block-diagonal part of the Jacobian matrix J.<br>
> I really need some advice on my question.<br>
> <br>
> Thanks,<br>
> Yingjie<br>
> <br>
> Smith, Barry F. <<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>> 于2018年12月26日周三 下午4:23写道:<br>
> <br>
> Try with -snes_mf_operator; as the manual page of SNESSetPicard() indicates, this runs Newton's method using A(x^{n}) to construct the preconditioner.<br>
> <br>
> > On Dec 26, 2018, at 7:48 AM, Yingjie Wu via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov" target="_blank">petsc-users@mcs.anl.gov</a>> wrote:<br>
> > <br>
> > Dear Petsc developers:<br>
> > Hi,<br>
> > 1. I tried to use the Picard solver in Petsc, but the program didn't converge. My program is still a thermal program that contains multiple physical fields and is a PDE problem. The error message is as follows. The reason I use Picard is that it can guarantee convergence (though slow and expensive). I followed ex15.c, but I don't use a DM to organize the solution vector, so I tried SNESSetPicard().<br>
> > 0 SNES Function norm 2.91302e+08 <br>
> > 0 KSP Residual norm 5.79907e+08 <br>
> > 1 KSP Residual norm 1.46843e-05 <br>
> > Linear solve converged due to CONVERGED_RTOL iterations 1<br>
> > 1 SNES Function norm 2.891e+08 <br>
> > 0 KSP Residual norm 5.5989e+08 <br>
> > 1 KSP Residual norm 4.21314e-06 <br>
> > Linear solve converged due to CONVERGED_RTOL iterations 1<br>
> > 2 SNES Function norm 2.78289e+08 <br>
> > 0 KSP Residual norm 5.53553e+08 <br>
> > 1 KSP Residual norm 2.04076e-05 <br>
> > Linear solve converged due to CONVERGED_RTOL iterations 1<br>
> > 3 SNES Function norm 2.77833e+08 <br>
> > 0 KSP Residual norm 5.52907e+08 <br>
> > 1 KSP Residual norm 2.09919e-05 <br>
> > Linear solve converged due to CONVERGED_RTOL iterations 1<br>
> > 4 SNES Function norm 2.77821e+08 <br>
> > 0 KSP Residual norm 5.52708e+08 <br>
> > 1 KSP Residual norm 2.08677e-05 <br>
> > Linear solve converged due to CONVERGED_RTOL iterations 1<br>
> > Nonlinear solve did not converge due to DIVERGED_LINE_SEARCH iterations 4<br>
> > SNES Object: 1 MPI processes<br>
> > type: newtonls<br>
> > maximum iterations=50, maximum function evaluations=10000<br>
> > tolerances: relative=1e-08, absolute=1e-50, solution=1e-08<br>
> > total number of linear solver iterations=5<br>
> > total number of function evaluations=34<br>
> > norm schedule ALWAYS<br>
> > SNESLineSearch Object: 1 MPI processes<br>
> > type: bt<br>
> > interpolation: cubic<br>
> > alpha=1.000000e-04<br>
> > maxstep=1.000000e+08, minlambda=1.000000e-12<br>
> > tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08<br>
> > maximum iterations=40<br>
> > KSP Object: 1 MPI processes<br>
> > type: gmres<br>
> > restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement<br>
> > happy breakdown tolerance 1e-30<br>
> > maximum iterations=10000, initial guess is zero<br>
> > tolerances: relative=1e-05, absolute=1e-50, divergence=10000.<br>
> > left preconditioning<br>
> > using PRECONDITIONED norm type for convergence test<br>
> > PC Object: 1 MPI processes<br>
> > type: lu<br>
> > out-of-place factorization<br>
> > tolerance for zero pivot 2.22045e-14<br>
> > matrix ordering: nd<br>
> > factor fill ratio given 5., needed 5.48356<br>
> > Factored matrix follows:<br>
> > Mat Object: 1 MPI processes<br>
> > type: seqaij<br>
> > rows=11368, cols=11368<br>
> > package used to perform factorization: petsc<br>
> > total: nonzeros=234554, allocated nonzeros=234554<br>
> > total number of mallocs used during MatSetValues calls =0<br>
> > not using I-node routines<br>
> > linear system matrix = precond matrix:<br>
> > Mat Object: 1 MPI processes<br>
> > type: seqaij<br>
> > rows=11368, cols=11368<br>
> > total: nonzeros=42774, allocated nonzeros=56840<br>
> > total number of mallocs used during MatSetValues calls =0<br>
> > not using I-node routines<br>
> > Are there any other examples of Picard methods? I'm very interested in this method.<br>
> > <br>
> > 2. I found that ex15.c and ex19.c use NonlinearGS. I know it's an iterative method, but I don't know how to use it in the above examples. As far as I know, NonlinearGS is an iterative method parallel to subspace methods, so NonlinearGS should not be required if subspace methods are used. <br>
> > <br>
> > Thanks,<br>
> > Yingjie<br>
> > <br>
> > <br>
> <br>
<br>
</blockquote></div></div></div></div></div>