[petsc-users] Issue using multi-grid as a pre-conditioner with KSP for a Poisson problem
Barry Smith
bsmith at mcs.anl.gov
Mon Jul 3 15:21:09 CDT 2017
> On Jul 3, 2017, at 12:19 PM, Jed Brown <jed at jedbrown.org> wrote:
>
> Scaling by the volume element causes the rediscretized coarse grid
> problem to be scaled like a Galerkin coarse operator. This is done
> automatically when you use finite element methods.
A Galerkin coarse grid operator means that the coarse grid correction P A_c^-1 R projects the error onto the coarse grid space. If A_c has "the wrong scaling" then P A_c^-1 R still projects the error but then scales the result incorrectly, so the final correction computed is either much too large or much too small. The option -pc_mg_galerkin computes A_c as R A P, which automatically picks the right scaling.
This is definitely not the concept of renormalization; that is much more complex and not related to Poisson problems.
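For reference, the same thing can be set from code rather than the options database; a minimal sketch (assuming ksp is your already-created KSP, and noting that PCMGSetGalerkin() takes a PetscBool in PETSc 3.7 and earlier but a PCMGGalerkinType enum in later releases):

   PC pc;
   ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
   ierr = PCSetType(pc,PCMG);CHKERRQ(ierr);
   ierr = PCMGSetLevels(pc,5,NULL);CHKERRQ(ierr);        /* same as -pc_mg_levels 5 */
   ierr = PCMGSetGalerkin(pc,PETSC_TRUE);CHKERRQ(ierr);  /* same as -pc_mg_galerkin */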
Barry
>
> Jason Lefley <jason.lefley at aclectic.com> writes:
>
>>> On Jun 26, 2017, at 7:52 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>>
>>> On Mon, Jun 26, 2017 at 8:37 PM, Jason Lefley <jason.lefley at aclectic.com> wrote:
>>>> Okay, when you say a Poisson problem, I assumed you meant
>>>>
>>>> div grad phi = f
>>>>
>>>> However, now it appears that you have
>>>>
>>>> div D grad phi = f
>>>>
>>>> Is this true? It would explain your results. Your coarse operator is inaccurate. AMG makes the coarse operator directly
>>>> from the matrix, so it incorporates coefficient variation. Galerkin projection makes the coarse operator using R A P
>>>> from your original operator A, and this is accurate enough to get good convergence. So your coefficient representation
>>>> on the coarse levels is really bad. If you want to use GMG, you need to figure out how to represent the coefficient on
>>>> coarser levels, which is sometimes called "renormalization".
>>>>
>>>> Matt
>>>
>>> I believe we are solving the first one. The discretized form we are using is equation 13 in this document: https://www.rsmas.miami.edu/users/miskandarani/Courses/MSC321/Projects/prjpoisson.pdf Would you clarify why you think we are solving the second equation?
>>>
>>> Something is wrong. The writeup is just showing the FD Laplacian. Can you take a look at SNES ex5, and let
>>> me know how your problem differs from that one? There we use GMG and converge in a few (5-6) iterates,
>>> and if you use FMG you converge in 1 iterate. In fact, that is in my class notes on the CAAM 519 website. It's possible
>>> that you have badly scaled boundary values, which can cause convergence to deteriorate.
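>>>
>>> For example, something along these lines should show it (a sketch; exact option names may vary with the PETSc version):
>>>
>>>   $ ./ex5 -da_refine 5 -snes_monitor -ksp_monitor_true_residual -pc_type mg -pc_mg_type full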
>>>
>>> Thanks,
>>>
>>> Matt
>>>
>>
>> I went through ex5 and some of the other Poisson/multigrid examples again and noticed that they arrange the coefficients in a particular way.
>>
>> Our original attempt (solver_test.c) and some related codes that solve similar problems use an arrangement like this:
>>
>>
>> (u(i-1,j,k) - 2u(i,j,k) + u(i+1,j,k))/dx^2 + (u(i,j-1,k) - 2u(i,j,k) + u(i,j+1,k))/dy^2 + (u(i,j,k-1) - 2u(i,j,k) + u(i,j,k+1))/dz^2 = f
>>
>> That results in the coefficient matrix containing -2 * (1/dx^2 + 1/dy^2 + 1/dz^2) on the diagonal and 1/dx^2, 1/dy^2 and 1/dz^2 on the off-diagonals. I’ve also looked at some codes that assume h = dx = dy = dz, multiply f by h^2 and then use -6 and 1 for the coefficients in the matrix.
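>>
>> In code, the interior-point assembly looks roughly like this (a sketch with hypothetical variable names; the surrounding loop over the local DMDA indices and the boundary rows are omitted, and the actual solver_test.c may differ):
>>
>>   MatStencil  row, col[7];
>>   PetscScalar v[7];
>>   PetscScalar idx2 = 1.0/(dx*dx), idy2 = 1.0/(dy*dy), idz2 = 1.0/(dz*dz);
>>
>>   row.i = i; row.j = j; row.k = k;
>>   v[0] = idz2;  col[0].i = i;   col[0].j = j;   col[0].k = k-1;
>>   v[1] = idy2;  col[1].i = i;   col[1].j = j-1; col[1].k = k;
>>   v[2] = idx2;  col[2].i = i-1; col[2].j = j;   col[2].k = k;
>>   v[3] = -2.0*(idx2 + idy2 + idz2); col[3] = row;   /* diagonal */
>>   v[4] = idx2;  col[4].i = i+1; col[4].j = j;   col[4].k = k;
>>   v[5] = idy2;  col[5].i = i;   col[5].j = j+1; col[5].k = k;
>>   v[6] = idz2;  col[6].i = i;   col[6].j = j;   col[6].k = k+1;
>>   ierr = MatSetValuesStencil(A,1,&row,7,col,v,INSERT_VALUES);CHKERRQ(ierr);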
>>
>> It looks like snes ex5, ksp ex32, and ksp ex34 rearrange the terms like this:
>>
>>
>> (dy dz/dx)(u(i-1,j,k) - 2u(i,j,k) + u(i+1,j,k)) + (dx dz/dy)(u(i,j-1,k) - 2u(i,j,k) + u(i,j+1,k)) + (dx dy/dz)(u(i,j,k-1) - 2u(i,j,k) + u(i,j,k+1)) = f dx dy dz
>>
>>
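>> The corresponding assembly only changes the stencil values and the right hand side entry; a sketch following the same MatSetValuesStencil pattern as above (again with hypothetical variable names):
>>
>>   PetscScalar HyHzdHx = dy*dz/dx, HxHzdHy = dx*dz/dy, HxHydHz = dx*dy/dz;
>>
>>   v[0] = HxHydHz;                                /* (i,j,k-1) */
>>   v[1] = HxHzdHy;                                /* (i,j-1,k) */
>>   v[2] = HyHzdHx;                                /* (i-1,j,k) */
>>   v[3] = -2.0*(HyHzdHx + HxHzdHy + HxHydHz);     /* (i,j,k)   */
>>   v[4] = HyHzdHx;                                /* (i+1,j,k) */
>>   v[5] = HxHzdHy;                                /* (i,j+1,k) */
>>   v[6] = HxHydHz;                                /* (i,j,k+1) */
>>   /* and the right hand side entry becomes f(i,j,k)*dx*dy*dz */
>>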
>> I changed our code to use this approach and we observe much better convergence with the mg pre-conditioner. Is this renormalization? Can anyone explain why this change has such an impact on convergence with geometric multigrid as a pre-conditioner? It does not appear that the arrangement of coefficients affects convergence when using conjugate gradient without a pre-conditioner. Here’s output from some runs with the coefficients and right hand side modified as described above:
>>
>>
>> $ mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128 -ksp_monitor_true_residual -pc_type mg -ksp_type cg -pc_mg_levels 5 -mg_levels_ksp_type richardson -mg_levels_ksp_richardson_self_scale -mg_levels_ksp_max_it 5
>> right hand side 2 norm: 0.000244141
>> right hand side infinity norm: 4.76406e-07
>> 0 KSP preconditioned resid norm 3.578255383614e+00 true resid norm 2.441406250000e-04 ||r(i)||/||b|| 1.000000000000e+00
>> 1 KSP preconditioned resid norm 1.385321366208e-01 true resid norm 4.207234652404e-05 ||r(i)||/||b|| 1.723283313625e-01
>> 2 KSP preconditioned resid norm 4.459925861922e-03 true resid norm 1.480495515589e-06 ||r(i)||/||b|| 6.064109631854e-03
>> 3 KSP preconditioned resid norm 4.311025848794e-04 true resid norm 1.021041953365e-07 ||r(i)||/||b|| 4.182187840984e-04
>> 4 KSP preconditioned resid norm 1.619865162873e-05 true resid norm 5.438265013849e-09 ||r(i)||/||b|| 2.227513349673e-05
>> Linear solve converged due to CONVERGED_RTOL iterations 4
>> KSP final norm of residual 5.43827e-09
>> Residual 2 norm 5.43827e-09
>> Residual infinity norm 6.25328e-11
>>
>>
>> $ mpirun -n 16 ./solver_test -da_grid_x 128 -da_grid_y 128 -da_grid_z 128 -ksp_monitor_true_residual -pc_type mg -ksp_type cg -pc_mg_levels 5 -mg_levels_ksp_type richardson -mg_levels_ksp_richardson_self_scale -mg_levels_ksp_max_it 5 -pc_mg_type full
>> 0 KSP preconditioned resid norm 3.459879233358e+00 true resid norm 2.441406250000e-04 ||r(i)||/||b|| 1.000000000000e+00
>> 1 KSP preconditioned resid norm 1.169574216505e-02 true resid norm 4.856676267753e-06 ||r(i)||/||b|| 1.989294599272e-02
>> 2 KSP preconditioned resid norm 1.158728408668e-04 true resid norm 1.603345697667e-08 ||r(i)||/||b|| 6.567303977645e-05
>> 3 KSP preconditioned resid norm 6.035498575583e-07 true resid norm 1.613378731540e-10 ||r(i)||/||b|| 6.608399284389e-07
>> Linear solve converged due to CONVERGED_RTOL iterations 3
>> KSP final norm of residual 1.61338e-10
>> Residual 2 norm 1.61338e-10
>> Residual infinity norm 1.95499e-12
>>
>>
>> $ mpirun -n 64 ./solver_test -da_grid_x 512 -da_grid_y 512 -da_grid_z 512 -ksp_monitor_true_residual -pc_type mg -ksp_type cg -pc_mg_levels 8 -mg_levels_ksp_type richardson -mg_levels_ksp_richardson_self_scale -mg_levels_ksp_max_it 5 -pc_mg_type full -bc_type neumann
>> right hand side 2 norm: 3.05176e-05
>> right hand side infinity norm: 7.45016e-09
>> 0 KSP preconditioned resid norm 5.330711358065e+01 true resid norm 3.051757812500e-05 ||r(i)||/||b|| 1.000000000000e+00
>> 1 KSP preconditioned resid norm 4.687628546610e-04 true resid norm 2.452752396888e-08 ||r(i)||/||b|| 8.037179054124e-04
>> Linear solve converged due to CONVERGED_RTOL iterations 1
>> KSP final norm of residual 2.45275e-08
>> Residual 2 norm 2.45275e-08
>> Residual infinity norm 8.41572e-10