<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Jul 3, 2017 at 10:06 AM, Jason Lefley <span dir="ltr"><<a href="mailto:jason.lefley@aclectic.com" target="_blank">jason.lefley@aclectic.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div dir="auto" style="word-wrap:break-word"><div dir="auto" style="word-wrap:break-word"><div dir="auto" style="word-wrap:break-word"><br><div><blockquote type="cite"><div>On Jun 26, 2017, at 7:52 PM, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:</div><br class="m_-4488310209958990110Apple-interchange-newline"><div><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Jun 26, 2017 at 8:37 PM, Jason Lefley <span dir="ltr"><<a href="mailto:jason.lefley@aclectic.com" target="_blank">jason.lefley@aclectic.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div dir="auto" style="word-wrap:break-word"><div><blockquote type="cite"><div dir="ltr" style="font-family:Helvetica;font-size:12px;font-style:normal;font-variant-caps:normal;font-weight:normal;letter-spacing:normal;text-align:start;text-indent:0px;text-transform:none;white-space:normal;word-spacing:0px"><div class="gmail_extra"><div class="gmail_quote"><div>Okay, when you say a Poisson problem, I assumed you meant</div><div><br></div><div> div grad phi = f</div><div><br></div><div>However, now it appears that you have</div><div><br></div><div> div D grad phi = f</div><div><br></div><div>Is this true? It would explain your results. Your coarse operator is inaccurate. AMG makes the coarse operator directly</div><div>from the matrix, so it incorporates coefficient variation. 
Galerkin projection makes the coarse operator using R A P</div><div>from your original operator A, and this is accurate enough to get good convergence. So your coefficient representation</div><div>on the coarse levels is really bad. If you want to use GMG, you need to figure out how to represent the coefficient on</div><div>coarser levels, which is sometimes called "renormalization".</div><div><br></div><div> Matt</div></div></div></div></blockquote><div><br></div><div>I believe we are solving the first one. The discretized form we are using is equation 13 in this document: <a href="https://www.rsmas.miami.edu/users/miskandarani/Courses/MSC321/Projects/prjpoisson.pdf" target="_blank">https://www.rsmas.miami.edu/users/miskandarani/Courses/MSC321/Projects/prjpoisson.pdf</a> Would you clarify why you think we are solving the second equation?</div></div></div></div></blockquote><div><br></div><div> Something is wrong. The writeup is just showing the FD Laplacian. Can you take a look at SNES ex5, and let</div><div>me know how your problem differs from that one? There we use GMG and it converges in a few (5-6) iterates,</div><div>and if you use FMG you converge in 1 iterate. In fact, that is in my class notes on the CAAM 519 website. 
It's possible</div><div>that you have badly scaled boundary values, which can cause convergence to deteriorate.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div></div></div></div></div></blockquote><br></div><div>I went through ex5 and some of the other Poisson/multigrid examples again and noticed that they arrange the coefficients in a particular way.</div><div><div><br></div><div>Our original attempt (solver_test.c) and some related codes that solve similar problems use an arrangement like this:</div><div><br></div><div><font face="Consolas"><br></font></div><div><font face="Consolas">(u(i-1,j,k) - 2u(i,j,k) + u(i+1,j,k))/dx^2 + (u(i,j-1,k) - 2u(i,j,k) + u(i,j+1,k))/dy^2 + (u(i,j,k-1) - 2u(i,j,k) + u(i,j,k+1))/dz^2 = f</font></div><div><br></div><div>That results in the coefficient matrix containing -2 * (1/dx^2 + 1/dy^2 + 1/dz^2) on the diagonal and 1/dx^2, 1/dy^2 and 1/dz^2 on the off-diagonals. 
I’ve also looked at some codes that assume h = dx = dy = dz, multiply f by h^2 and then use -6 and 1 for the coefficients in the matrix.</div><div><br></div><div>It looks like snes ex5, ksp ex32, and ksp ex34 rearrange the terms like this:</div><div><br></div><div><br></div><div><div><font face="Consolas">dy dz (u(i-1,j,k) - 2u(i,j,k) + u(i+1,j,k))/dx + dx dz (u(i,j-1,k) - 2u(i,j,k) + u(i,j+1,k))/dy + dx dy (u(i,j,k-1) - 2u(i,j,k) + u(i,j,k+1))/dz = f dx dy dz</font></div></div><div><br></div><div><br></div><div>I changed our code to use this approach and we observe much better convergence with the mg pre-conditioner. Is this renormalization?</div></div></div></div></div></div></blockquote><div><br></div><div>No, this is proper scaling of the boundary conditions, I believe.</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div style="word-wrap:break-word"><div dir="auto" style="word-wrap:break-word"><div dir="auto" style="word-wrap:break-word"><div dir="auto" style="word-wrap:break-word"><div><div> Can anyone explain why this change has such an impact on convergence with geometric multigrid as a pre-conditioner? It does not appear that the arrangement of coefficients affects convergence when using conjugate gradient without a pre-conditioner. 
Here’s output from some runs with the coefficients and right hand side modified as described above:</div><div><br></div><div><br></div><div><div>$ mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128 -ksp_monitor_true_residual -pc_type mg -ksp_type cg -pc_mg_levels 5 -mg_levels_ksp_type richardson -mg_levels_ksp_richardson_self_scale -mg_levels_ksp_max_it 5</div><div>right hand side 2 norm: 0.000244141</div><div>right hand side infinity norm: 4.76406e-07</div><div> 0 KSP preconditioned resid norm 3.578255383614e+00 true resid norm 2.441406250000e-04 ||r(i)||/||b|| 1.000000000000e+00</div><div> 1 KSP preconditioned resid norm 1.385321366208e-01 true resid norm 4.207234652404e-05 ||r(i)||/||b|| 1.723283313625e-01</div><div> 2 KSP preconditioned resid norm 4.459925861922e-03 true resid norm 1.480495515589e-06 ||r(i)||/||b|| 6.064109631854e-03</div><div> 3 KSP preconditioned resid norm 4.311025848794e-04 true resid norm 1.021041953365e-07 ||r(i)||/||b|| 4.182187840984e-04</div><div> 4 KSP preconditioned resid norm 1.619865162873e-05 true resid norm 5.438265013849e-09 ||r(i)||/||b|| 2.227513349673e-05</div><div>Linear solve converged due to CONVERGED_RTOL iterations 4</div><div>KSP final norm of residual 5.43827e-09</div><div>Residual 2 norm 5.43827e-09</div><div>Residual infinity norm 6.25328e-11</div></div><div><br></div><div><br></div><div><div>$ mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128 -ksp_monitor_true_residual -pc_type mg -ksp_type cg -pc_mg_levels 5 -mg_levels_ksp_type richardson -mg_levels_ksp_richardson_self_scale -mg_levels_ksp_max_it 5 -pc_mg_type full</div><div> 0 KSP preconditioned resid norm 3.459879233358e+00 true resid norm 2.441406250000e-04 ||r(i)||/||b|| 1.000000000000e+00</div><div> 1 KSP preconditioned resid norm 1.169574216505e-02 true resid norm 4.856676267753e-06 ||r(i)||/||b|| 1.989294599272e-02</div><div> 2 KSP preconditioned resid norm 1.158728408668e-04 true resid norm 
1.603345697667e-08 ||r(i)||/||b|| 6.567303977645e-05</div><div> 3 KSP preconditioned resid norm 6.035498575583e-07 true resid norm 1.613378731540e-10 ||r(i)||/||b|| 6.608399284389e-07</div><div>Linear solve converged due to CONVERGED_RTOL iterations 3</div><div>KSP final norm of residual 1.61338e-10</div><div>Residual 2 norm 1.61338e-10</div><div>Residual infinity norm 1.95499e-12</div></div><div><br></div><div><br></div><div><div>$ mpirun -n 64 ./solver_test -da_grid_x 512 -da_grid_y 512 -da_grid_z 512 -ksp_monitor_true_residual -pc_type mg -ksp_type cg -pc_mg_levels 8 -mg_levels_ksp_type richardson -mg_levels_ksp_richardson_self_scale -mg_levels_ksp_max_it 5 -pc_mg_type full -bc_type neumann</div><div>right hand side 2 norm: 3.05176e-05</div><div>right hand side infinity norm: 7.45016e-09</div><div> 0 KSP preconditioned resid norm 5.330711358065e+01 true resid norm 3.051757812500e-05 ||r(i)||/||b|| 1.000000000000e+00</div><div> 1 KSP preconditioned resid norm 4.687628546610e-04 true resid norm 2.452752396888e-08 ||r(i)||/||b|| 8.037179054124e-04</div><div>Linear solve converged due to CONVERGED_RTOL iterations 1</div><div>KSP final norm of residual 2.45275e-08</div><div>Residual 2 norm 2.45275e-08</div><div>Residual infinity norm 8.41572e-10</div></div></div></div></div></div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.caam.rice.edu/~mk51/" target="_blank">http://www.caam.rice.edu/~mk51/</a><br></div></div></div>
</div></div>