<html><head><meta http-equiv="Content-Type" content="text/html; charset=us-ascii"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On Sep 30, 2021, at 6:16 PM, Amneet Bhalla <<a href="mailto:mail2amneet@gmail.com" class="">mail2amneet@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class=""><div class=""> </div><div class="">>> If you want to solve systems accurately, you should non-dimensionalize the system prior to discretization. This would mean that</div><div class="">your C and b have elements in the [1, D] range, where D is the dynamic range of your problem, say 1e4, rather than these huge</div><div class="">numbers you have now.</div><div class=""><br class=""></div><div class="">@Matt: We have done the non-dimensionalization, and the diagonal matrix entries now range from 1 to 1e4. Still, it takes 4-5 iterations to converge for the non-dimensional diagonal matrix. 
The convergence trend is looking much better now, though:</div><div class=""><br class=""></div><div class=""><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class="">Residual norms for temperature_ solve.</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>0 KSP preconditioned resid norm 4.724547545716e-04 true resid norm 2.529423250889e+00 ||r(i)||/||b|| 4.397759655853e-05</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>1 KSP preconditioned resid norm 6.504853596318e-06 true resid norm 2.197130494439e-02 ||r(i)||/||b|| 3.820021755431e-07</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>2 KSP preconditioned resid norm 7.733420341215e-08 true resid norm 3.539290481432e-04 ||r(i)||/||b|| 6.153556501117e-09</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>3 KSP preconditioned resid norm 6.419092250844e-10 true resid norm 5.220398494466e-06 ||r(i)||/||b|| 9.076400273607e-11</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>4 KSP preconditioned resid norm 5.095955157158e-12 true resid norm 2.484163999489e-08 ||r(i)||/||b|| 4.319070053474e-13</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" 
class=""><span class="gmail-Apple-converted-space"> </span>5 KSP preconditioned resid norm 6.828200916501e-14 true resid norm 2.499229854610e-10 ||r(i)||/||b|| 4.345264170970e-15</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class="">Linear temperature_ solve converged due to CONVERGED_RTOL iterations 5</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><br class=""></div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><br class=""></div></div><div class="">Convergence is achieved in a single iteration only when all the equations are scaled individually. In the above, all equations are scaled using the same non-dimensional parameter. Do you think this is reasonable, or do you expect the diagonal system to converge in a single iteration irrespective of the range of diagonal entries? </div></div></div></blockquote><div><br class=""></div> For a diagonal system with this modest range of values, Jacobi should converge in a single iteration. </div><div><br class=""></div><div> The output below is confusing: it is a system with 1 variable and should definitely converge in one iteration. 
</div><div><br class=""></div><div> I am concerned we may be talking apples and oranges here and your test may not be as simple as you think it is (with regard to the diagonal).</div><div><br class=""><blockquote type="cite" class=""><div class=""><div dir="ltr" class=""><div class=""><br class=""></div><div class="">@Barry: <br class=""></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word;line-break:after-white-space" class=""><div class=""><blockquote type="cite" class=""><div class=""><div dir="ltr" class=""><div class=""><br class=""></div></div></div></blockquote><div class=""><br class=""></div> What is the result of -ksp_view on the solve? </div></div></blockquote><div class=""><br class=""></div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class="">KSP Object: (temperature_) 1 MPI processes</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>type: gmres</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with one step of iterative refinement when needed</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>happy breakdown tolerance 1e-30</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span 
class="gmail-Apple-converted-space"> </span>maximum iterations=1000, nonzero initial guess</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>tolerances:<span class="gmail-Apple-converted-space"> </span>relative=1e-12, absolute=1e-50, divergence=10000.</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>left preconditioning</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>using PRECONDITIONED norm type for convergence test</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class="">PC Object: (temperature_) 1 MPI processes</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>type: shell</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>IEPSemiImplicitHierarchyIntegrator::helmholtz_precond::Temperature</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>linear system matrix = precond matrix:</div><div style="margin: 0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>Mat Object: 1 MPI processes</div><div style="margin: 
0px; font-stretch: normal; font-size: 13px; line-height: normal; font-family: Menlo; color: rgb(252, 33, 37);" class=""><span class="gmail-Apple-converted-space"> </span>type: shell</div>
<div class=""><span class="gmail-Apple-converted-space" style="color:rgb(252,33,37);font-family:Menlo;font-size:13px"> </span><span style="color:rgb(252,33,37);font-family:Menlo;font-size:13px" class="">rows=1, cols=1</span> </div><div class=""> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word;line-break:after-white-space" class=""><div class=""><br class=""></div><div class=""> The way you describe your implementation, it does not sound like standard PETSc practice. </div></div></blockquote><div class=""><br class=""></div><div class="">Yes, we do it differently in IBAMR. Succinctly, the main solver is a matrix-free one, whereas the preconditioner is a FAC multigrid solver with its bottom solver formed on the coarsest level of the AMR grid using PETSc (matrix-based KSP). </div><div class=""><br class=""></div><div class="">In the above -ksp_view output, <span style="color:rgb(252,33,37);font-family:Menlo;font-size:13px" class="">temperature_ is the matrix-free KSP solver and </span><span style="color:rgb(252,33,37);font-family:Menlo;font-size:13px" class="">IEPSemiImplicitHierarchyIntegrator::helmholtz_precond is the FAC preconditioner.</span></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word;line-break:after-white-space" class=""><div class=""><br class=""></div><div class="">With PETSc, using a matrix-free operator mA and a matrix A from which KSP will build the preconditioner, one uses KSPSetOperators(ksp,mA,A); and then just selects the preconditioner with -pc_type xxx. For example, to use Jacobi preconditioning one uses -pc_type jacobi (note that this only uses the diagonal of A; the rest of A is never used).</div></div></blockquote><div class=""><br class=""></div><div 
class="">We run -pc_type jacobi on the bottom solver of the FAC preconditioner. </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word;line-break:after-white-space" class=""><div class=""><br class=""></div><div class="">If you wish to precondition mA by fully solving with the matrix A, one can use -ksp_monitor_true_residual -pc_type ksp -ksp_ksp_type yyy -ksp_pc_type xxx -ksp_ksp_monitor_true_residual with, for example, yyy of richardson and xxx of jacobi.</div></div></blockquote><div class=""><br class=""></div><div class="">Yes, this is what we do. </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div style="word-wrap:break-word;line-break:after-white-space" class=""><div class=""><br class=""></div><div class=""> Barry</div><div class=""><br class=""></div><div class=""><br class=""></div><div class=""><br class=""></div><div class=""><br class=""><blockquote type="cite" class=""><div class=""><div dir="ltr" class=""><div class="">To verify that I am indeed solving a diagonal system, I printed the PETSc matrix from the preconditioner and viewed it in MATLAB. It indeed shows it to be a diagonal system. Attached is the plot of the spy command on the printed matrix. The matrix in binary form is also attached. </div><div class=""><br class=""></div><div class="">My understanding is that because the C coefficient varies over 4 orders of magnitude, i.e., Max(C)/Min(C) ~ 10^4, the matrix is poorly scaled. When I rescale my matrix by 1/C, the system converges in 1 iteration as expected. 
Is my understanding correct, and should scaling by 1/C be done even for a diagonal system?</div><div class=""><br class=""></div><div class="">When D is non-zero, scaling by 1/C seems very inconvenient, as D is stored as side-centered data for the matrix-free solver. </div><div class=""><br class=""></div><div class="">In the case that I do not scale my equations by 1/C, is there some solver setting that improves the convergence rate? (With D non-zero, I have also tried gmres as the KSP solver in the matrix-based preconditioner to get better performance, but it didn't matter much.)</div><div class=""><br class=""></div><div class=""><br class=""></div><div class="">Thanks,</div><div class="">Ramakrishnan Thirumalaisamy</div><div class="">San Diego State University.</div></div>
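The one-iteration behaviour discussed in this thread can be checked with a minimal sketch (plain Python with hypothetical values; this is not the IBAMR/PETSc code): for a purely diagonal system C x = b, left Jacobi preconditioning is algebraically the same as row-scaling each equation by 1/C, so the preconditioned operator is the identity and a single Richardson iteration is exact regardless of Max(C)/Min(C).

```python
# Minimal sketch (hypothetical data): a diagonal system C x = b whose
# diagonal entries span four orders of magnitude, as in the discussion.
# Left Jacobi preconditioning, M^{-1} = diag(1/C), is the same as scaling
# every row by 1/C: the preconditioned operator is the identity, so one
# Richardson iteration is exact no matter how large Max(C)/Min(C) is.

import random

random.seed(0)
n = 100
C = [10.0 ** random.uniform(0.0, 4.0) for _ in range(n)]   # entries in [1, 1e4]
x_exact = [random.uniform(-1.0, 1.0) for _ in range(n)]
b = [C[i] * x_exact[i] for i in range(n)]

# One Richardson iteration from x0 = 0 with the Jacobi preconditioner:
#   x1 = x0 + M^{-1} (b - C x0) = b / C
x = [b[i] / C[i] for i in range(n)]

# The true residual is already at machine precision after one iteration.
residual = max(abs(b[i] - C[i] * x[i]) for i in range(n))
print(residual)
```

Under this sketch's assumptions, any extra iterations seen in practice would point to the operator not being exactly diagonal, or to the preconditioner not applying the exact inverse of the diagonal.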
<span id="gmail-m_-7898380077486584962cid:f_ku60x9u71" class=""><Temperature_fill.pdf></span><span id="gmail-m_-7898380077486584962cid:f_ku60x9tq0" class=""><matrix_temperature></span></div></blockquote></div><br class=""></div></blockquote></div><br clear="all" class=""><div class=""><br class=""></div>-- <br class=""><div dir="ltr" class="gmail_signature"><div dir="ltr" class=""><div class="">--Amneet <br class=""><br class=""></div><div class=""><br class=""></div><div class=""><br class=""></div></div></div></div>
</div></blockquote></div><br class=""></body></html>