<div dir="ltr">well, it has the same risk that Newton direction is not good due to the simplification. However, it worth my trying.</div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Oct 7, 2015 at 8:54 AM, Zou (Non-US), Ling <span dir="ltr"><<a href="mailto:ling.zou@inl.gov" target="_blank">ling.zou@inl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thank you Barry.<div><br></div><div>The background I am asking this question is that I want to reduce (or you can say optimize) the cost of my finite difference Jacobian evaluation, which is used for preconditioning purpose. The concept is based on my understanding of the problem I am solving, but I am not sure if it will work, thus I want to do some test.</div><div><br></div><div>Here is the concept, assume that my residual reads,</div><div><br></div><div><p style="margin:0px;font-size:12px;font-family:Monaco">F(\vec{U}) = F[\vec{U}, g(\vec{U})]</p><p style="margin:0px;font-size:12px;font-family:Monaco"><br></p><p style="margin:0px;font-size:12px;font-family:Monaco">in which, g(\vec{U}) is a quite complicated and thus expensive function evaluation. This function, however, is not very sensitive to \vec{U}, i.e., \partial{g(\vec{U})}/\partial{g(\vec{U})} is not that important.</p><p style="margin:0px;font-size:12px;font-family:Monaco"><br></p><p style="margin:0px;font-size:12px;font-family:Monaco"><br></p><p style="margin:0px;font-size:12px;font-family:Monaco">Normally, a finite difference Jacobian is evaluated as (as discussed in PETSc manual),</p><p style="margin:0px;font-size:12px;font-family:Monaco"><br></p><p style="margin:0px;font-size:12px;font-family:Monaco">J(\vec{u}) \approx \frac{F(\vec{U}+\epsilon \vec{v}) - F(\vec{U})} {\epsilon}</p><p style="margin:0px;font-size:12px;font-family:Monaco"><br></p><p style="margin:0px;font-size:12px;font-family:Monaco">In my case, it reads,</p><p style="margin:0px;font-size:12px;font-family:Monaco"><br></p><p style="margin:0px;font-size:12px;font-family:Monaco">J(\vec{u}) \approx \frac{F[(\vec{U}+\epsilon \vec{v}), g(\vec{U}+\epsilon \vec{v})] - F[(\vec{U}), g(\vec{U})]} {\epsilon}</p><p style="margin:0px;font-size:12px;font-family:Monaco"><br></p><p style="margin:0px;font-size:12px;font-family:Monaco">Because \partial{g(\vec{U})}/\partial{g(\vec{U})} is not important, the simplification I want to make is, when finite difference Jacobian (as preconditioner) is evaluated, it can be further simplified as,</p><p style="margin:0px;font-size:12px;font-family:Monaco"><br></p><p style="margin:0px;font-size:12px;font-family:Monaco">J(\vec{u}) \approx \frac{F[(\vec{U}+\epsilon \vec{v}), g(\vec{U})] - F[(\vec{U}), g(\vec{U})]} {\epsilon}</p><p style="margin:0px;font-size:12px;font-family:Monaco"><br></p><p style="margin:0px;font-size:12px;font-family:Monaco">Thus, the re-evaluation on g(\vec{U}+\epsilon \vec{v}) is removed. It seems to me that I need some kind of signal from PETSc so I can tell the code not to update g(\vec{U}). 
However, I have never tested this, and I don't know whether anybody has done something similar before.

Thanks,

Ling

On Tue, Oct 6, 2015 at 7:09 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
> On Oct 6, 2015, at 4:22 PM, Zou (Non-US), Ling <ling.zou@inl.gov> wrote:
>
> On Tue, Oct 6, 2015 at 2:38 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>
> > On Oct 6, 2015, at 3:29 PM, Zou (Non-US), Ling <ling.zou@inl.gov> wrote:
> >
> > Hi All,
> >
> > If the non-zero pattern of a finite-difference Jacobian needs 20 colors to color it (20 comes from MatFDColoringView; the non-zero pattern is pre-determined from mesh connectivity), is it true that PETSc needs 40 function evaluations to fill the full Jacobian matrix? This is because each color's perturbation needs two function evaluations according to the PETSc manual (ver 3.6, page 123, equations shown in the middle of the page).
> > But I only see 20 function evaluations. I probably have some misunderstanding somewhere. Any suggestions?
>
> PETSc uses forward differencing to compute the derivatives, hence it needs a single function evaluation at the given point (which has almost always been previously computed in Newton's method)
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Is it a potential problem if the user chooses to use a different (e.g. simplified) residual function as the function for MatFDColoringSetFunction?

Yes, you can do that. But this may result in a "Newton" direction that is not a descent direction, hence Newton stalls. If you have 20 colors I doubt that it would be a good idea to use a cheaper function there. If you have several hundred colors then you can use a simpler function PLUS -snes_mf_operator to ensure that the Newton direction is correct.
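Roughly, the wiring might look like the sketch below (untested; it assumes snes, the preconditioner matrix J, the residual vector r, and a user context already exist, and FormFunctionSimple/FormFunctionFull stand for your simplified and full residuals):

ISColoring    iscoloring;
MatFDColoring fdcoloring;
MatColoring   mc;

ierr = MatColoringCreate(J,&mc);CHKERRQ(ierr);
ierr = MatColoringSetType(mc,MATCOLORINGSL);CHKERRQ(ierr);
ierr = MatColoringApply(mc,&iscoloring);CHKERRQ(ierr);
ierr = MatColoringDestroy(&mc);CHKERRQ(ierr);

ierr = MatFDColoringCreate(J,iscoloring,&fdcoloring);CHKERRQ(ierr);
ierr = MatFDColoringSetFunction(fdcoloring,(PetscErrorCode (*)(void))FormFunctionSimple,&user);CHKERRQ(ierr);
ierr = MatFDColoringSetFromOptions(fdcoloring);CHKERRQ(ierr);
ierr = MatFDColoringSetUp(J,iscoloring,fdcoloring);CHKERRQ(ierr);
ierr = ISColoringDestroy(&iscoloring);CHKERRQ(ierr);

/* the full residual still defines the nonlinear system */
ierr = SNESSetFunction(snes,r,FormFunctionFull,&user);CHKERRQ(ierr);
/* colored finite differences of the simple function fill the preconditioner matrix */
ierr = SNESSetJacobian(snes,J,J,SNESComputeJacobianDefaultColor,fdcoloring);CHKERRQ(ierr);

Then run with -snes_mf_operator so the operator the Krylov method sees is the true Jacobian applied matrix-free from the full residual, while the matrix built from the simple function is used only for the preconditioner.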

Barry

>
> and then one function evaluation for each color. This is why it reports 20 function evaluations.
>
> Barry
>
> >
> > Ling
>