[petsc-users] How to speed up geometric multigrid

Barry Smith bsmith at mcs.anl.gov
Wed Oct 2 13:39:12 CDT 2013


  Something is wrong, you should be getting better convergence. Please answer my other email.


On Oct 2, 2013, at 1:10 PM, Michele Rosso <mrosso at uci.edu> wrote:

> Thank you all for your contribution.
> So far the fastest solution is still the initial one proposed by Jed in an earlier round:
> 
> -ksp_atol 1e-9  -ksp_monitor_true_residual  -ksp_view  -log_summary -mg_coarse_pc_factor_mat_solver_package superlu_dist
> -mg_coarse_pc_type lu    -mg_levels_ksp_max_it 3 -mg_levels_ksp_type richardson  -options_left -pc_mg_galerkin
> -pc_mg_levels 5  -pc_mg_log  -pc_type mg
> 
> where I used  -mg_levels_ksp_max_it 3  as Barry suggested instead of  -mg_levels_ksp_max_it 1.
> I attached the diagnostics for this case. Any further idea?
> Thank you,
> 
> Michele
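
[For repeat experiments, an option set like this can also be kept in a plain options file and loaded with PETSc's -options_file flag; the file and executable names below are only placeholders, and options files accept one or more options per line with '#' comments:]

    # mg.opts -- the option set above, one logical group per line
    -ksp_atol 1e-9
    -pc_type mg -pc_mg_levels 5 -pc_mg_galerkin -pc_mg_log
    -mg_levels_ksp_type richardson -mg_levels_ksp_max_it 3
    -mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_package superlu_dist
    -ksp_monitor_true_residual -ksp_view -log_summary -options_left

    mpiexec -n 4 ./myapp -options_file mg.opts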
> 
> 
> On 10/01/2013 11:44 PM, Barry Smith wrote:
>> On Oct 2, 2013, at 12:28 AM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> 
>>> "Mark F. Adams" <mfadams at lbl.gov> writes:
>>>> run3.txt uses:
>>>> 
>>>> -ksp_type richardson
>>>> 
>>>> This is bad and I doubt anyone recommended it intentionally.
>>    Hell, this is normal multigrid without a Krylov accelerator. Under normal circumstances with geometric multigrid this should be fine, often the best choice.
>> 
>>> I would have expected FGMRES, but Barry likes Krylov smoothers and
>>> Richardson is one of a few methods that can tolerate nonlinear
>>> preconditioners.
>>> 
>>>> You also have, in this file,
>>>> 
>>>> -mg_levels_ksp_type gmres
>>>> 
>>>> did you or the recommenders mean
>>>> 
>>>> -mg_levels_ksp_type richardson  ???
>>>> 
>>>> you are using gmres here, which forces you to use fgmres in the outer solver.  This is a safe thing to use, but if you apply your BCs symmetrically with a low order discretization then
>>>> 
>>>> -ksp_type cg
>>>> -mg_levels_ksp_type richardson
>>>> -mg_levels_pc_type sor
>>>> 
>>>> is what I'd recommend.
>>> I thought that was tried in an earlier round.
>>> 
>>> I don't understand why SOR preconditioning in the Krylov smoother is so
>>> drastically more expensive than BJacobi/ILU and why SOR is called so
>>> many more times relative to the number of outer iterations:
>>> 
>>> bjacobi: PCApply              322 1.0 4.1021e+01 1.0 6.44e+09 1.0 3.0e+07 1.6e+03 4.5e+04 74 86 98 88 92 28160064317351226 20106
>>> bjacobi: KSPSolve              46 1.0 4.6268e+01 1.0 7.52e+09 1.0 3.0e+07 1.8e+03 4.8e+04 83100100 99 99 31670065158291309 20800
>>> 
>>> sor:     PCApply             1132 1.0 1.5532e+02 1.0 2.30e+10 1.0 1.0e+08 1.6e+03 1.6e+05 69 88 99 88 93 21871774317301274 18987
>>> sor:     KSPSolve             201 1.0 1.7101e+02 1.0 2.63e+10 1.0 1.1e+08 1.8e+03 1.7e+05 75100100 99 98 24081775248221352 19652
>> 
> 
> <best.txt>
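
[Combining Mark's CG/Richardson/SOR suggestion with the option set Michele reported, a candidate invocation might look like the sketch below. The executable name ./myapp and the process count are placeholders, and whether CG is admissible depends on the boundary conditions being applied symmetrically, as discussed above:]

    mpiexec -n 4 ./myapp -ksp_type cg -ksp_atol 1e-9 \
        -pc_type mg -pc_mg_levels 5 -pc_mg_galerkin \
        -mg_levels_ksp_type richardson -mg_levels_ksp_max_it 3 -mg_levels_pc_type sor \
        -mg_coarse_pc_type lu -mg_coarse_pc_factor_mat_solver_package superlu_dist \
        -ksp_monitor_true_residual -log_summary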
