> Okay, when you say a Poisson problem, I assumed you meant
>
>     div grad phi = f
>
> However, now it appears that you have
>
>     div D grad phi = f
>
> Is this true? It would explain your results. Your coarse operator is inaccurate. AMG makes the coarse operator
> directly from the matrix, so it incorporates coefficient variation. Galerkin projection makes the coarse operator
> using R A P from your original operator A, and this is accurate enough to get good convergence. So your coefficient
> representation on the coarse levels is really bad. If you want to use GMG, you need to figure out how to represent
> the coefficient on coarser levels, which is sometimes called "renormalization".
>
>    Matt

I believe we are solving the first one. The discretized form we are using is equation 13 in this document: https://www.rsmas.miami.edu/users/miskandarani/Courses/MSC321/Projects/prjpoisson.pdf Would you clarify why you think we are solving the second equation?

I looked at some other code that uses geometric multigrid to solve the same problem; the authors assumed a uniform cell width, factored the cell width out of the operator, and moved it to the right-hand side.
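To spell out what that rescaling looks like (assuming the standard second-order 7-point stencil with uniform spacing h in all three directions; the notation below is mine, not taken from the linked notes):

\[
\frac{1}{h^{2}}\Bigl(\sum_{\mathrm{nbr}} \phi_{\mathrm{nbr}} - 6\,\phi_{i,j,k}\Bigr) = f_{i,j,k}
\qquad\Longleftrightarrow\qquad
\sum_{\mathrm{nbr}} \phi_{\mathrm{nbr}} - 6\,\phi_{i,j,k} = h^{2}\, f_{i,j,k},
\]

where the sum runs over the six face neighbours of cell (i,j,k). The assembled matrix then contains only the entries -6 and +1, and the factor h^2 multiplies f on the right-hand side instead.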
I did that in our code (using scale = 1 rather than 1/h^2) and we see better convergence with the mg preconditioner:

$ mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128 -ksp_monitor_true_residual -pc_type mg -ksp_type cg -pc_mg_levels 5 -mg_levels_ksp_type richardson -mg_levels_ksp_richardson_self_scale -mg_levels_ksp_max_it 5
right hand side 2 norm: 512.
right hand side infinity norm: 0.999097
  0 KSP preconditioned resid norm 3.434682678336e+05 true resid norm 5.120000000000e+02 ||r(i)||/||b|| 1.000000000000e+00
  1 KSP preconditioned resid norm 7.305460801248e+04 true resid norm 1.633308409885e+02 ||r(i)||/||b|| 3.190055488056e-01
  2 KSP preconditioned resid norm 6.656705346262e+03 true resid norm 7.942064132554e+01 ||r(i)||/||b|| 1.551184400890e-01
  3 KSP preconditioned resid norm 3.523570204496e+03 true resid norm 4.046777759638e+01 ||r(i)||/||b|| 7.903862811794e-02
  4 KSP preconditioned resid norm 9.123967354323e+03 true resid norm 2.445432977350e+01 ||r(i)||/||b|| 4.776236283887e-02
  5 KSP preconditioned resid norm 9.811672802436e+02 true resid norm 1.556881108574e+01 ||r(i)||/||b|| 3.040783415183e-02
  6 KSP preconditioned resid norm 1.106193887270e+03 true resid norm 8.752912569969e+00 ||r(i)||/||b|| 1.709553236322e-02
  7 KSP preconditioned resid norm 3.411263151853e+02 true resid norm 4.817172861959e+00 ||r(i)||/||b|| 9.408540746014e-03
  8 KSP preconditioned resid norm 1.129663122476e+02 true resid norm 2.051711481120e+00 ||r(i)||/||b|| 4.007248986563e-03
  9 KSP preconditioned resid norm 7.776030229135e+01 true resid norm 1.092336734730e+00 ||r(i)||/||b|| 2.133470185019e-03
 10 KSP preconditioned resid norm 3.900236414632e+01 true resid norm 4.662658178376e-01 ||r(i)||/||b|| 9.106754254641e-04
 11 KSP preconditioned resid norm 2.884248061867e+01 true resid norm 2.584775749590e-01 ||r(i)||/||b|| 5.048390135919e-04
 12 KSP preconditioned resid norm 1.275086146987e+01 true resid norm 1.183721340034e-01 ||r(i)||/||b|| 2.311955742253e-04
 13 KSP preconditioned resid norm 3.378721119782e+00 true resid norm 5.841425568821e-02 ||r(i)||/||b|| 1.140903431410e-04
Linear solve converged due to CONVERGED_RTOL iterations 13
KSP final norm of residual 0.0584143
Residual 2 norm 0.0584143
Residual infinity norm 0.000458905

While this looks much better than our previous attempts, I think we will probably end up using the algebraic approach for generating the intermediate operators (either gamg, or mg with -pc_mg_galerkin; an example invocation is at the end of this message). We want to support boundary conditions specified at locations other than just the extents of the domain, and it is not clear how to handle the coarsening when boundary cells and non-boundary cells are combined into a single cell on the coarser grid.

I'd still like to understand the "renormalization" you mentioned. Do you know of any resources that discuss it?

Thanks
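P.S. For reference, the Galerkin variant of the run above would be invoked along these lines. I have not tried this exact command; every option except -pc_mg_galerkin is copied from the run shown earlier:

$ mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128 -ksp_monitor_true_residual -pc_type mg -pc_mg_galerkin -ksp_type cg -pc_mg_levels 5 -mg_levels_ksp_type richardson -mg_levels_ksp_richardson_self_scale -mg_levels_ksp_max_it 5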