[petsc-users] ML Optimization and local preconditioners

Dave May dave.mayhem23 at gmail.com
Fri May 20 01:10:02 CDT 2011


One of the nice aspects of using ML compared to BoomerAMG is that you have much
more control over how the smoother is configured on each level.

ILU and ICC are not directly available in parallel, but you can use a
block variant of each, which applies ILU/ICC to the diagonal block of
the matrix that is local to each process.

To configure this, use these options:
-mg_levels_pc_type bjacobi
-mg_levels_sub_pc_type ilu

or

-mg_levels_pc_type bjacobi
-mg_levels_sub_pc_type icc
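
For a concrete illustration, a complete invocation combining ML with a
block-Jacobi/ILU smoother might look like the sketch below (the executable
name ./poisson3d and the process count are placeholders for your own
application; -pc_type ml, -ksp_type cg, and -ksp_monitor are standard
PETSc options):

```shell
# Hypothetical run of a 3D Poisson solver on 4 MPI processes:
# ML as the multigrid preconditioner, with block Jacobi + ILU(0)
# applied on each process's local diagonal block as the level smoother.
mpiexec -n 4 ./poisson3d \
    -ksp_type cg \
    -pc_type ml \
    -mg_levels_pc_type bjacobi \
    -mg_levels_sub_pc_type ilu \
    -ksp_monitor
```

Running with -ksp_view will show the smoother actually used on each
level, which is a good way to verify the options took effect.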


Dave


On 20 May 2011 02:56, Li, Zhisong (lizs) <lizs at mail.uc.edu> wrote:
> Hi, Petsc Team,
>
> Recently I tested my 3D structured Poisson-style problem with the ML and
> BoomerAMG preconditioners respectively.  In comparison, ML is more efficient
> in preconditioning and RAM usage, but it requires about twice as many
> iterations with the same KSP solver, bringing down the overall efficiency.
> Also, neither PC scales well.  I wonder if there's any specific approach to
> optimizing ML to reduce KSP iterations by setting certain command line
> options.
>
> I also saw in some previous petsc mail archives mentioning the "local
> preconditioner".  As some important PCs like PCILU and PCICC are not
> available for parallel processing, it may be beneficial to apply them as
> local preconditioners.  The question is how to set up a local preconditioner?
>
>
> Thank you very much.
>
>
>
> Zhisong Li
>
