[petsc-users] Null space and preconditioners
Mark Adams
mfadams at lbl.gov
Mon Mar 21 14:33:34 CDT 2022
(I did suggest CG; he just has a pressure solve, which is a Laplacian,
right?)
Ugh, this is pretty bad. The logic might be a bit convoluted, but
if SetOperators is called after SetFromOptions, as is usual I think, it
could check whether it has a left PC when the operator has a null space.
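
Something like this hypothetical check is what I mean (the function name
and warning text are made up; this is not existing PETSc code, just a
sketch):

    #include <petscksp.h>

    /* Warn when the operator carries a null space but the KSP will not use
       left preconditioning, since only left preconditioned solvers remove
       the null space from the Krylov space. */
    static PetscErrorCode CheckNullSpacePCSide(KSP ksp)
    {
      Mat            A;
      MatNullSpace   nsp;
      PCSide         side;
      PetscErrorCode ierr;

      PetscFunctionBeginUser;
      ierr = KSPGetOperators(ksp, &A, NULL);CHKERRQ(ierr);
      ierr = MatGetNullSpace(A, &nsp);CHKERRQ(ierr);
      ierr = KSPGetPCSide(ksp, &side);CHKERRQ(ierr);
      if (nsp && side != PC_LEFT) {
        ierr = PetscPrintf(PetscObjectComm((PetscObject)ksp),
                           "Warning: the null space is removed only with left preconditioning\n");CHKERRQ(ierr);
      }
      PetscFunctionReturn(0);
    }
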
On Mon, Mar 21, 2022 at 2:48 PM Barry Smith <bsmith at petsc.dev> wrote:
>
> Marco,
>
> I have confirmed your results.
>
>    Urgg, it appears we have not documented this well. The removal of
> the null space only works for left preconditioned solvers, and FGMRES
> only works with right preconditioning. Here is the reasoning.
>
>    The Krylov space for left preconditioning is built from [r, BAr,
> (BA)^2 r, ...] and the solution is sought in the space spanned by this
> basis. If A has a null space spanned by n, then the left preconditioned
> Krylov methods simply remove n from the "full" Krylov space after each
> application of the preconditioner B, and the resulting "reduced" Krylov
> space has no component in the n directions; hence the solution built by
> GMRES naturally has no component in n.
>
>    But with right preconditioning the Krylov space is [s, ABs, (AB)^2 s,
> ...]. We would need to remove B^-1 n from the Krylov space so that
> (A B) B^-1 n = 0. In general we don't have any way of applying B^-1 to a
> vector, so we cannot create the appropriate "reduced" Krylov space.
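>
>    For concreteness, the removal in the left preconditioned case is just
> a projection applied after each preconditioner application; a minimal
> sketch, assuming the null space was attached as a MatNullSpace nsp and
> z = B r is the newly preconditioned vector:
>
>     ierr = MatNullSpaceRemove(nsp, z);CHKERRQ(ierr); /* project n out of z */
>
> With right preconditioning there is no such vector available to project.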
>
>    If I run with GMRES (which defaults to left preconditioning) and the
> options ./testPreconditioners -pc_type gamg -ksp_type gmres
> -ksp_monitor_true_residual -ksp_rtol 1.e-12 -ksp_view -mg_coarse_pc_type svd
>
>    Then it handles the null space correctly and the code reports Solution
> mean = 4.51028e-17.
>
>    Is there any reason to use FGMRES instead of GMRES? You just cannot
> use GMRES as the smoother inside GAMG if you use GMRES on the outside, but
> for pressure equations you don't want to use such a strong smoother anyway.
>
> Barry
>
>    I feel we should add some information to the user's manual about
> removal of the null space when using right preconditioning, and maybe even
> add an error check in the code so that people don't fall into this trap.
> But I am not sure exactly what to do. When A and B are both symmetric I
> think special stuff happens that doesn't require providing a null space;
> but I am not sure.
>
>
>
>
>
> On Mar 21, 2022, at 12:41 PM, Marco Cisternino <
> marco.cisternino at optimad.it> wrote:
>
> Thank you, Mark.
> However, doing this with my toy code:
> mpirun -n 1 ./testpreconditioner -pc_type gamg
> -pc_gamg_use_parallel_coarse_grid_solver -mg_coarse_pc_type jacobi
> -mg_coarse_ksp_type cg
>
> I get 16 inf elements. Am I missing anything?
>
> Thanks again
>
> Marco Cisternino
>
>
> *From:* Mark Adams <mfadams at lbl.gov>
> *Sent:* Monday, March 21, 2022 17:31
> *To:* Marco Cisternino <marco.cisternino at optimad.it>
> *Cc:* petsc-users at mcs.anl.gov
> *Subject:* Re: [petsc-users] Null space and preconditioners
>
> And for GAMG you can use:
>
> -pc_gamg_use_parallel_coarse_grid_solver -mg_coarse_pc_type jacobi
> -mg_coarse_ksp_type cg
>
> Note that if you are using more than one MPI process you can use 'lu'
> instead of 'jacobi'.
>
> If GAMG converges fast enough, it can solve before the constant creeps in,
> and it then works without any cleaning in the KSP method.
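>
> If you would rather set these from code than from the command line, here
> is a sketch using the options database (just the programmatic equivalent
> of the flags above; call before KSPSetFromOptions):
>
>     ierr = PetscOptionsSetValue(NULL, "-pc_gamg_use_parallel_coarse_grid_solver", NULL);CHKERRQ(ierr);
>     ierr = PetscOptionsSetValue(NULL, "-mg_coarse_pc_type", "jacobi");CHKERRQ(ierr);
>     ierr = PetscOptionsSetValue(NULL, "-mg_coarse_ksp_type", "cg");CHKERRQ(ierr);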
>
> On Mon, Mar 21, 2022 at 12:06 PM Mark Adams <mfadams at lbl.gov> wrote:
>
> The solution for Neumann problems can "float away" if the constant is not
> controlled in some way because floating point errors can introduce it even
> if your RHS is exactly orthogonal to it.
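>
> A minimal sketch of cleaning the constant out after the solve, assuming
> the computed solution is in a Vec x (this only controls the mean after
> the fact; it does not fix a stalled solve):
>
>     PetscScalar sum;
>     PetscInt    n;
>     ierr = VecSum(x, &sum);CHKERRQ(ierr);     /* sum of all entries of x */
>     ierr = VecGetSize(x, &n);CHKERRQ(ierr);
>     ierr = VecShift(x, -sum/n);CHKERRQ(ierr); /* subtract the mean */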
>
> You should use a special coarse grid solver for GAMG but it seems to be
> working for you.
>
> I have lost track of the simple way to have the KSP solver clean the
> constant out, which is what you want.
>
> Can someone help Marco?
>
> Mark
>
>
>
>
>
> On Mon, Mar 21, 2022 at 8:18 AM Marco Cisternino <
> marco.cisternino at optimad.it> wrote:
>
> Good morning,
> I’m observing an unexpected (to me) behaviour of my code.
> I tried to reduce the problem to a toy code, attached here.
> The toy code archive contains a small main, a matrix and a rhs.
> The toy code solves the linear system and checks the norms and the mean
> of the solution.
> The problem in the matrix and the rhs is the finite volume discretization
> of the pressure equation of an incompressible NS solver.
> It has been cooked as tiny as possible (16 cells!).
> It is important to say that it is an elliptic problem with homogeneous
> Neumann boundary conditions only; for this reason the toy code sets a
> null space containing the constant.
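>
> For reference, a minimal sketch of how such a constant null space is set,
> A being the assembled matrix:
>
>     MatNullSpace nsp;
>     ierr = MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, NULL, &nsp);CHKERRQ(ierr);
>     ierr = MatSetNullSpace(A, nsp);CHKERRQ(ierr);
>     ierr = MatNullSpaceDestroy(&nsp);CHKERRQ(ierr);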
>
> The unexpected (to me) behaviour is evident when launching the code with
> different preconditioners, using -pc_type <pctype>.
> I tested PCNONE (“none”), PCGAMG (“gamg”) and PCILU (“ilu”). The
> default solver is KSPFGMRES.
> With the three PCs I get three different solutions. It seems to me that
> they differ in the mean value, but GAMG is impressive.
> PCNONE gives me the zero-mean solution I expected. What about the others?
>
> Monitoring the residuals, the ratio ||r||/||b|| shows convergence for
> PCNONE and PCILU (~10^-16), but it stalls for PCGAMG (~10^-4).
> I cannot see why. Am I doing something wrong, or am I thinking incorrectly
> about the expected behaviour?
>
> Generalizing to larger meshes, the behaviour is similar.
>
> Thank you for any help.
>
> Marco Cisternino
>
>
>