<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Tue, Mar 22, 2016 at 9:23 AM, Norihiro Watanabe <span dir="ltr"><<a href="mailto:norihiro.w@gmail.com" target="_blank">norihiro.w@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Can't I use KSPGetResidualNorm()? I mean if I'm interested only in the<br>
last residual.<br></blockquote><div><br></div><div>Yes, definitely.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
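A minimal sketch of that approach, for reference (assumes `ksp` has already been solved; the function name is illustrative and error handling is abbreviated):

```c
#include <petscksp.h>

/* Sketch: after KSPSolve() returns, query the last residual norm that the
   convergence test saw (with left preconditioning this is the norm of the
   preconditioned residual). */
PetscErrorCode ReportFinalResidual(KSP ksp)
{
  PetscErrorCode ierr;
  PetscReal      rnorm;
  PetscInt       its;

  ierr = KSPGetResidualNorm(ksp, &rnorm);CHKERRQ(ierr);
  ierr = KSPGetIterationNumber(ksp, &its);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,
                     "converged in %D iterations, final residual norm %g\n",
                     its, (double)rnorm);CHKERRQ(ierr);
  return 0;
}
```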
On Tue, Mar 22, 2016 at 3:20 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br>
> On Tue, Mar 22, 2016 at 9:08 AM, Norihiro Watanabe <<a href="mailto:norihiro.w@gmail.com">norihiro.w@gmail.com</a>><br>
> wrote:<br>
>><br>
>> Unfortunately -ksp_converged_reason prints the number of iterations<br>
>> but no information about final errors.<br>
><br>
><br>
> If you want the actual residuals (not errors), you could use<br>
><br>
><br>
> <a href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPGetResidualHistory.html" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPGetResidualHistory.html</a><br>
><br>
> Matt<br>
><br>
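A sketch of using the residual history (the buffer size and function name are illustrative; note KSPSetResidualHistory() must be called before the solve):

```c
#include <petscksp.h>

#define MAX_HIST 1000

/* Sketch: record every residual norm during the solve, then read the last
   recorded entry afterwards. */
PetscErrorCode SolveWithHistory(KSP ksp, Vec b, Vec x)
{
  PetscErrorCode ierr;
  PetscReal      hist[MAX_HIST];
  PetscInt       nhist;

  ierr = KSPSetResidualHistory(ksp, hist, MAX_HIST, PETSC_TRUE);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPGetResidualHistory(ksp, NULL, &nhist);CHKERRQ(ierr);
  if (nhist > 0) {
    ierr = PetscPrintf(PETSC_COMM_WORLD,
                       "last of %D recorded residual norms: %g\n",
                       nhist, (double)hist[nhist-1]);CHKERRQ(ierr);
  }
  return 0;
}
```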
>><br>
>> Thanks,<br>
>> Nori<br>
>><br>
>> On Tue, Mar 22, 2016 at 3:06 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>><br>
>> wrote:<br>
>> > On Tue, Mar 22, 2016 at 9:02 AM, Norihiro Watanabe<br>
>> > <<a href="mailto:norihiro.w@gmail.com">norihiro.w@gmail.com</a>><br>
>> > wrote:<br>
>> >><br>
>> >> What I wanted to do is display the final converged errors without<br>
>> >> using -ksp_monitor. Because my problem includes a lot of time steps<br>
>> >> and nonlinear iterations, the log output from -ksp_monitor for each<br>
>> >> linear solve is sometimes too much. But you are right: it doesn't<br>
>> >> make sense to call the expensive function just for the log output.<br>
>> ><br>
>> ><br>
>> > Maybe something like -ksp_converged_reason?<br>
>> ><br>
>> > Thanks,<br>
>> ><br>
>> > Matt<br>
>> ><br>
>> >><br>
>> >> Thanks,<br>
>> >> Nori<br>
>> >><br>
>> >> On Tue, Mar 22, 2016 at 2:45 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>><br>
>> >> wrote:<br>
>> >> > On Tue, Mar 22, 2016 at 8:25 AM, Norihiro Watanabe<br>
>> >> > <<a href="mailto:norihiro.w@gmail.com">norihiro.w@gmail.com</a>><br>
>> >> > wrote:<br>
>> >> >><br>
>> >> >> Thank you Matt!<br>
>> >> >><br>
>> >> >> Actually I don't want to change the norm type used in the<br>
>> >> >> convergence check. I just want to output the relative error which<br>
>> >> >> PETSc actually used for the convergence check (for log output in my<br>
>> >> >> program, without -ksp_*), and thought I needed the norm of the<br>
>> >> >> preconditioned RHS to compute it myself. Or is there any function<br>
>> >> >> available in PETSc which returns the relative error, or the<br>
>> >> >> tolerance multiplied by the norm of the preconditioned RHS? I<br>
>> >> >> couldn't find one.<br>
>> >> ><br>
>> >> ><br>
>> >> > If you want the action of the preconditioner, you can pull it out<br>
>> >> ><br>
>> >> > KSPGetPC()<br>
>> >> ><br>
>> >> > and apply it<br>
>> >> ><br>
>> >> > PCApply()<br>
>> >> ><br>
>> >> > but I still do not understand why you want this. Do you want to check<br>
>> >> > the<br>
>> >> > norms<br>
>> >> > yourself? The PCApply() could be expensive to calculate again.<br>
>> >> ><br>
>> >> > Thanks,<br>
>> >> ><br>
>> >> > Matt<br>
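For completeness, a sketch of computing the preconditioned RHS norm this way (assumes left preconditioning; as noted above, it costs one extra preconditioner application; the function name is illustrative):

```c
#include <petscksp.h>

/* Sketch: form P^{-1} b explicitly and take its norm, mirroring what the
   convergence test uses with left preconditioning. */
PetscErrorCode PreconditionedRHSNorm(KSP ksp, Vec b, PetscReal *norm)
{
  PetscErrorCode ierr;
  PC             pc;
  Vec            pb;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = VecDuplicate(b, &pb);CHKERRQ(ierr);
  ierr = PCApply(pc, b, pb);CHKERRQ(ierr);        /* pb = P^{-1} b */
  ierr = VecNorm(pb, NORM_2, norm);CHKERRQ(ierr);
  ierr = VecDestroy(&pb);CHKERRQ(ierr);
  return 0;
}
```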
>> >> ><br>
>> >> >><br>
>> >> >><br>
>> >> >> Best,<br>
>> >> >> Nori<br>
>> >> >><br>
>> >> >><br>
>> >> >> On Tue, Mar 22, 2016 at 1:51 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>><br>
>> >> >> wrote:<br>
>> >> >> > On Tue, Mar 22, 2016 at 2:50 AM, Norihiro Watanabe<br>
>> >> >> > <<a href="mailto:norihiro.w@gmail.com">norihiro.w@gmail.com</a>><br>
>> >> >> > wrote:<br>
>> >> >> >><br>
>> >> >> >> Hi,<br>
>> >> >> >><br>
>> >> >> >> Is it correct that the norm of the preconditioned RHS vector is<br>
>> >> >> >> used to compute the relative error in BCGS?<br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> > Yes, but you can verify this using -ksp_view<br>
>> >> >> ><br>
>> >> >> >><br>
>> >> >> >> I'm testing BCGS + BoomerAMG. With "-info", PETSc says "initial<br>
>> >> >> >> right hand side norm" is 2.223619476717e+10 (see below), but the<br>
>> >> >> >> actual norm of the RHS I passed is 4.059007e-02. If so, is there<br>
>> >> >> >> any way to get the norm of the preconditioned RHS?<br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> > Do you mean unpreconditioned? You can try<br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> > <a href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetNormType.html" rel="noreferrer" target="_blank">http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPSetNormType.html</a><br>
>> >> >> ><br>
>> >> >> > or use<br>
>> >> >> ><br>
>> >> >> > -ksp_monitor_true_residual<br>
>> >> >> ><br>
>> >> >> > Thanks,<br>
>> >> >> ><br>
>> >> >> > Matt<br>
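A sketch of switching the norm used by the convergence test (support for KSP_NORM_UNPRECONDITIONED depends on the KSP type; command-line equivalents noted in the comment):

```c
#include <petscksp.h>

/* Sketch: run the convergence test on the true (unpreconditioned) residual
   norm instead of the preconditioned one.  Roughly equivalent options:
     -ksp_norm_type unpreconditioned
     -ksp_monitor_true_residual   (monitoring only, does not change the test) */
PetscErrorCode UseTrueResidualNorm(KSP ksp)
{
  PetscErrorCode ierr;

  ierr = KSPSetNormType(ksp, KSP_NORM_UNPRECONDITIONED);CHKERRQ(ierr);
  return 0;
}
```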
>> >> >> ><br>
>> >> >> >><br>
>> >> >> >> [0] KSPConvergedDefault(): Linear solver has converged. Residual<br>
>> >> >> >> norm<br>
>> >> >> >> 2.036064453512e-02 is less than relative tolerance<br>
>> >> >> >> 9.999999960042e-13<br>
>> >> >> >> times initial right hand side norm 2.223619476717e+10 at<br>
>> >> >> >> iteration 6<br>
>> >> >> >><br>
>> >> >> >><br>
>> >> >> >> Regards,<br>
>> >> >> >> Nori<br>
>> >> >> >><br>
>> >> >> >> --<br>
>> >> >> >> Norihiro Watanabe<br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> ><br>
>> >> >> > --<br>
>> >> >> > What most experimenters take for granted before they begin their<br>
>> >> >> > experiments<br>
>> >> >> > is infinitely more interesting than any results to which their<br>
>> >> >> > experiments<br>
>> >> >> > lead.<br>
>> >> >> > -- Norbert Wiener<br>
>> >> >><br>
>> >> >><br>
>> >> >><br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >> ><br>
>> >><br>
>> >><br>
>> >><br>
>> ><br>
>> ><br>
>> ><br>
>> ><br>
>><br>
>><br>
>><br>
><br>
><br>
><br>
<span class="HOEnZb"><font color="#888888">><br>
<br>
<br>
<br>
</font></span></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div>
</div></div>