Methods 1 and 2 are unlikely to work.

It sounds like your matrix is (in exact arithmetic) singular, but using 128-bit floats allows a "stable" factorization to go through, giving you a descent direction for Newton.

I think you really need to fix the singularity at the modeling level; it is not robust to fix it at the numerical-algorithm level. If you know the exact form of the null spaces you can use MatSetNullSpace(), but then you cannot use a direct solver for the system, since the factorization will still see the singular matrix.
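For reference, a minimal sketch of attaching a known null space to the Jacobian with MatSetNullSpace(). The constant-vector null space used here is only an assumption; the vectors that actually span the floating-region null space would have to be supplied instead.

```c
#include <petscmat.h>

/* Sketch: attach a known null space to the Jacobian so Krylov methods can
 * project it out.  ASSUMPTION: the null space is the constant vector; pass
 * the real spanning vectors in place of (PETSC_TRUE, 0, NULL) otherwise. */
PetscErrorCode AttachNullSpace(Mat J)
{
  MatNullSpace nsp;

  PetscFunctionBeginUser;
  PetscCall(MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp));
  PetscCall(MatSetNullSpace(J, nsp));
  PetscCall(MatNullSpaceDestroy(&nsp));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

With the null space attached, an iterative (Krylov) solve can remove it during the iteration, which is why this route precludes a plain direct solve of the singular system.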
Barry

On Sep 14, 2023, at 12:30 PM, Gong Ding <gongding@cn.cogenda.com> wrote:

The physical problem itself is ill-conditioned, since there are floating regions in the simulation domain.

I use MUMPS as the 64-bit LU solver, and a modified SuperLU as the 128-bit LU solver (https://github.com/cogenda/superlu, with float128 support added).

Although the 128-bit solver works, it is 10x slower.

I'd like to try whether the Jacobian can be handled in 64-bit precision while keeping the Newton iteration convergent.

Method 1:
Use a block inversion of the main diagonal of the Jacobian (or an ILU?) as the preconditioner M, then factorize M*J.
Both the preconditioning matrix and the Jacobian matrix are in 64-bit precision.

Method 2:
Do a 64-bit LU factorization of the Jacobian matrix and use the factorization as a preconditioner for a higher-precision Krylov solve (i.e., iterative refinement).
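For reference, Method 2 maps onto standard PETSc options. A minimal sketch follows; MUMPS is assumed for the 64-bit factorization, and GMRES as the outer refinement loop is an assumption, not a recommendation (the KSP would typically come from SNESGetKSP()):

```c
#include <petscksp.h>

/* Sketch of Method 2: keep the 64-bit LU factorization but use it only as a
 * preconditioner for an outer Krylov loop, which acts like iterative
 * refinement.  ASSUMPTIONS: MUMPS as the factorization package, GMRES as the
 * outer method, and the tolerance value. */
static PetscErrorCode SetupRefinedLU(KSP ksp)
{
  PC pc;

  PetscFunctionBeginUser;
  PetscCall(KSPSetType(ksp, KSPGMRES));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCLU));
  PetscCall(PCFactorSetMatSolverType(pc, MATSOLVERMUMPS));
  PetscCall(KSPSetTolerances(ksp, 1e-12, PETSC_DEFAULT, PETSC_DEFAULT, PETSC_DEFAULT));
  PetscFunctionReturn(PETSC_SUCCESS);
}
```

The same setup can be selected on the command line with -ksp_type gmres -pc_type lu -pc_factor_mat_solver_type mumps.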
none;"><br></div><div style="margin-top: 0px; margin-bottom: 0px; caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 18px; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none;"><br></div><div style="margin-top: 0px; margin-bottom: 0px; caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 18px; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none;"><br></div><div class="moz-cite-prefix" style="caret-color: rgb(0, 0, 0); font-family: Helvetica; font-size: 18px; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none;">On 2023/9/14 23:05, Zhang, Hong wrote:<br></div><blockquote type="cite" cite="mid:SA1PR09MB8607F977CB20282B0D9A12C688F7A@SA1PR09MB8607.namprd09.prod.outlook.com" style="font-family: Helvetica; font-size: 18px; font-style: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: auto; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit-text-stroke-width: 0px; text-decoration: none;"><div class="elementToProof" style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt;"><span class="ContentPasted0" style="color: rgb(36, 36, 36); background-color: rgb(255, 255, 255);">Gong Ding,</span><br></div><div class="elementToProof" style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt;"><span class="ContentPasted0" style="color: rgb(36, 36, 36); background-color: rgb(255, 255, 255);">When you use a LU solver, the preconditioner M = inv(LU) = inv (J) on theory. I suspect your jacobian evaluation by<span class="ContentPasted1" style="background-color: rgb(255, 255, 255); display: inline !important;"> 64bit might be inaccurate. What LU solver did you use? 
Run your code with the options '-snes_view -snes_monitor -ksp_monitor' and compare the output.
Hong

From: petsc-users <petsc-users-bounces@mcs.anl.gov> on behalf of Mark Adams <mfadams@lbl.gov>
Sent: Thursday, September 14, 2023 5:35 AM
To: Gong Ding <gongding@cn.cogenda.com>
Cc: petsc-users@mcs.anl.gov <petsc-users@mcs.anl.gov>
Subject: Re: [petsc-users] Is precondition works for ill-conditioned jacobian matrix

I would first verify that you are happy with the solution that works.

Next, I would worry about losing accuracy in computing M*J, but you could try it and search for any related work. There may be some tricks.

And MUMPS is good at high accuracy; you might try that, and if it fails, look at the MUMPS docs for flags related to high accuracy.

Good luck,
Mark
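On the MUMPS-accuracy point, a hedged sketch of requesting extra accuracy from the factorization through PETSc; the particular ICNTL/CNTL values here are assumptions, and the MUMPS documentation is the authority on what each control does:

```c
#include <petscksp.h>

#if defined(PETSC_HAVE_MUMPS)
/* Sketch: select MUMPS and tighten its accuracy-related controls.
 * ASSUMPTIONS: 5 steps of MUMPS iterative refinement (ICNTL(10)) and a
 * pivoting threshold of 0.1 (CNTL(1)) are illustrative values only. */
static PetscErrorCode TightenMumps(PC pc)
{
  Mat F;

  PetscFunctionBeginUser;
  PetscCall(PCSetType(pc, PCLU));
  PetscCall(PCFactorSetMatSolverType(pc, MATSOLVERMUMPS));
  PetscCall(PCFactorSetUpMatSolverType(pc));   /* create the MUMPS factor matrix */
  PetscCall(PCFactorGetMatrix(pc, &F));
  PetscCall(MatMumpsSetIcntl(F, 10, 5));       /* ICNTL(10): steps of iterative refinement */
  PetscCall(MatMumpsSetCntl(F, 1, 0.1));       /* CNTL(1): relative pivoting threshold */
  PetscFunctionReturn(PETSC_SUCCESS);
}
#endif
```

The command-line equivalents are -mat_mumps_icntl_10 5 and -mat_mumps_cntl_1 0.1.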
then I solve the matrix A=M*J by a LU solver.<br><br>Can I expect that solve A=M*J has a better precision result that help<span class="Apple-converted-space"> </span><br>the convergence of Newton iteration?<br><br>Gong Ding</blockquote></div></div></blockquote></div></blockquote></div><br></div></body></html>