[petsc-users] Multigrid preconditioning of entire linear systems for discretized coupled multiphysics problems

Matthew Knepley knepley at gmail.com
Thu Mar 5 15:19:18 CST 2015


On Thu, Mar 5, 2015 at 3:12 PM, Fabian Gabel <gabel.fabian at gmail.com> wrote:

> On Di, 2015-03-03 at 13:02 -0600, Barry Smith wrote:
> > > On Mar 2, 2015, at 9:19 PM, Fabian Gabel <gabel.fabian at gmail.com> wrote:
> > >
> > > On Mo, 2015-03-02 at 19:43 -0600, Barry Smith wrote:
> > >>  Do you really want tolerances: relative=1e-90, absolute=1.10423,
> > >> divergence=10000? That is an absolute tolerance of 1.1? Normally that
> > >> would be huge.
> > >
> > > I started using atol as the convergence criterion with -ksp_norm_type
> > > unpreconditioned. The value of atol gets updated every outer iteration.
> >
> >    What is the "outer iteration" and why does it exist?
>
> I used the term "outer iteration" as in Ferziger/Peric "Computational
> Methods for Fluid Dynamics":
>
> "... there are two levels of iterations: inner iterations, within which
> the linear equations are solved, and outer iterations, that deal with
> the non-linearity and coupling of the equations."
>
> I chose the Picard iteration approach to linearize the non-linear terms
> of the Navier-Stokes equations. The solution process is as follows:
>
> 1) Update matrix coefficients for the linear system for (u,v,w,p)
> 2) Calculate the norm of the initial residual r^(n):=||b - Ax^(n) ||_2
> 3) if (r^(n)/r^(0) <= 1e-8) GOTO 7)
> 4) Calculate atol = 0.1 * r^(n)
> 5) Call KSPSetTolerances with rtol=1e-90 and atol as above and solve the
> coupled linear system for (u,v,w,p)
> 6) GOTO 1)
> 7) END
>
> With an absolute tolerance for the norm of the unpreconditioned residual
> I feel that I have control over the convergence of the Picard iteration.
> Using rtol for the norm of the preconditioned residual did not always
> achieve convergence.
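>
> A minimal sketch of steps 2)-5) in PETSc's C API (my actual code is
> Fortran and error checking is omitted; A, b, x and ksp stand for my
> matrix, right-hand side, solution and KSP, and the KSP runs with
> -ksp_norm_type unpreconditioned so that atol is tested against
> ||b - Ax||_2):
>
>     Vec       r;
>     PetscReal r0, rn;
>
>     VecDuplicate(b, &r);
>     MatMult(A, x, r);                 /* r = A x^(0)                    */
>     VecAYPX(r, -1.0, b);              /* r = b - A x^(0)                */
>     VecNorm(r, NORM_2, &r0);          /* r^(0)                          */
>
>     rn = r0;
>     while (rn/r0 > 1.0e-8) {          /* outer (Picard) loop            */
>       /* 1) update the coefficients of A and b here ...                 */
>       MatMult(A, x, r);               /* 2) r^(n) = ||b - A x^(n)||_2   */
>       VecAYPX(r, -1.0, b);
>       VecNorm(r, NORM_2, &rn);
>       if (rn/r0 <= 1.0e-8) break;     /* 3) converged                   */
>       /* 4)+5) shrink atol and solve the coupled system for (u,v,w,p)   */
>       KSPSetTolerances(ksp, 1.0e-90, 0.1*rn, PETSC_DEFAULT, PETSC_DEFAULT);
>       KSPSolve(ksp, b, x);
>     }
>     VecDestroy(&r);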
>
> >
> > >
> > >>  You can provide your matrix with a block size that GAMG will use
> > >> with MatSetBlockSize().
> > >
> > > I think something went wrong. When I set the block size to 4 and solved
> > > for (u,v,w,p), convergence degraded significantly. I attached the results
> > > for a smaller test case, which show the increase in the number of inner
> > > iterations needed when the block size is set via MatSetBlockSize().
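> > >
> > > (As a sketch of the call I use, in C notation rather than my Fortran,
> > > placed before preallocation and assembly:)
> > >
> > >     /* bs = 4 assumes an interlaced ordering (u0,v0,w0,p0, u1,v1,...),
> > >        so that GAMG roughly treats each grid point as one 4x4 block   */
> > >     MatSetBlockSize(A, 4);
> > >     /* ... preallocation, MatSetValues(), assembly follow ...         */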
> >
> >   This is possible.
>
>
> OK, so does this conclude the experiments with out-of-the-box GAMG for the
> fully coupled Navier-Stokes system? Why would GAMG not perform well on the
> linear system resulting from my discretization? What is the difference
> between the algebraic multigrid implemented in PETSc and the method used in
> the paper [1] I mentioned?
>

From reading the abstract, these guys are doing SIMPLE, which is an
approximation of the block systems generated by FieldSplit. This is
explained in

  http://dl.acm.org/citation.cfm?id=1327899

You can use FieldSplit to get SIMPLE by using -ksp_type preonly -pc_type
jacobi in the right places. I try to do this here:

  http://www.cs.uchicago.edu/~knepley/presentations/SIAMCSE13.pdf

Note that here AMG is only applied to the momentum equation.
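
Schematically, the kind of options I mean looks like the following. Treat
this as a sketch rather than a recipe: you would prepend your coupledsolve_
prefix, and the -fieldsplit_1_inner_ / -fieldsplit_1_upper_ knobs that put
"preonly + jacobi" in place of the A00 solves depend on your PETSc version
(see the PCFIELDSPLIT manual page):

  -ksp_type fgmres
  -pc_type fieldsplit
  -pc_fieldsplit_type schur
  -pc_fieldsplit_schur_fact_type full
  -pc_fieldsplit_schur_precondition selfp
  -fieldsplit_0_pc_type gamg
  -fieldsplit_1_inner_ksp_type preonly
  -fieldsplit_1_inner_pc_type jacobi
  -fieldsplit_1_upper_ksp_type preonly
  -fieldsplit_1_upper_pc_type jacobi

Here split 0 is the (u,v,w) momentum block, the only place AMG is applied,
and split 1 is the pressure Schur complement; selfp builds the Schur
preconditioner from A11 - A10 inv(diag(A00)) A01, which is the SIMPLE
approximation.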

  Matt


>
> I am still looking for a scalable solver configuration. So far ILU with
> various levels of fill has the best single-process performance, but
> scalability could be better. I have been experimenting with different
> PCFIELDSPLIT configurations, but the best configuration so far does not
> outperform ILU preconditioning.
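>
> (For concreteness, by "ILU with various levels of fill" I mean options
> roughly of this form, with my coupledsolve_ prefix prepended; in parallel
> this becomes ILU inside block Jacobi, which is probably where the
> scalability is lost:)
>
>     -pc_type ilu -pc_factor_levels 2                            (serial)
>     -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 2   (parallel)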
>
> Fabian
>
> [1] http://dx.doi.org/10.1016/j.jcp.2008.08.027
>
> >
> >   Barry
> >
> > >
> > >>
> > >>  I would use coupledsolve_mg_coarse_sub_pc_type lu; it is weird that
> > >> it is using SOR for 27 points.
> > >>
> > >> So you must have provided a null space since it printed "has attached
> > >> null space"
> > >
> > > The system does indeed have a one-dimensional null space (from the
> > > pressure equation with Neumann boundary conditions). But now that you
> > > mention it: the outer GMRES does not seem to notice that the matrix has
> > > an attached null space. Replacing
> > >
> > >     CALL MatSetNullSpace(CMAT,NULLSP,IERR)
> > >
> > > with
> > >
> > >     CALL KSPSetNullSpace(KRYLOV,NULLSP,IERR)
> > >
> > > solves this. What is wrong with using MatSetNullSpace?
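> > >
> > > (For completeness, a C-style sketch of what these calls do in my code;
> > > nsvec is a placeholder for my null space vector, which is constant on
> > > the pressure unknowns, zero elsewhere, and normalized:)
> > >
> > >     MatNullSpace nullsp;
> > >     MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_FALSE, 1, &nsvec, &nullsp);
> > >     MatSetNullSpace(CMAT, nullsp);    /* the outer GMRES seems to ignore this */
> > >     KSPSetNullSpace(KRYLOV, nullsp);  /* ... while this one works             */
> > >     MatNullSpaceDestroy(&nullsp);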
> > >
> > >
> > > Fabian
> > >
> > >
> > >>
> > >>  Barry
> > >>
> > >>
> > >>
> > >>> On Mar 2, 2015, at 6:39 PM, Fabian Gabel <gabel.fabian at gmail.com> wrote:
> > >>>
> > >>> On Mo, 2015-03-02 at 16:29 -0700, Jed Brown wrote:
> > >>>> Fabian Gabel <gabel.fabian at gmail.com> writes:
> > >>>>
> > >>>>> Dear PETSc Team,
> > >>>>>
> > >>>>> I came across the following paragraph in your publication
> > >>>>> "Composable Linear Solvers for Multiphysics" (2012):
> > >>>>>
> > >>>>> "Rather than splitting the matrix into large blocks and
> > >>>>> forming a preconditioner from solvers (for example, multi-
> > >>>>> grid) on each block, one can perform multigrid on the entire
> > >>>>> system, basing the smoother on solves coming from the tiny
> > >>>>> blocks coupling the degrees of freedom at a single point (or
> > >>>>> small number of points). This approach is also handled in
> > >>>>> PETSc, but we will not elaborate on it here."
> > >>>>>
> > >>>>> How would I use a multigrid preconditioner (GAMG)
> > >>>>
> > >>>> The heuristics in GAMG are not appropriate for indefinite/saddle-point
> > >>>> systems such as arise from Navier-Stokes.  You can use geometric
> > >>>> multigrid and use the fieldsplit techniques described in the paper as a
> > >>>> smoother, for example.
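> > >>>>
> > >>>> (Schematically, and assuming a DM provides the grid hierarchy and
> > >>>> carries the field information down to the levels, that would be
> > >>>> options roughly of this form, as a sketch only:
> > >>>>
> > >>>>   -pc_type mg
> > >>>>   -pc_mg_levels 4
> > >>>>   -pc_mg_galerkin
> > >>>>   -mg_levels_ksp_type richardson
> > >>>>   -mg_levels_pc_type fieldsplit
> > >>>>   -mg_levels_pc_fieldsplit_type multiplicative
> > >>>>
> > >>>> i.e. geometric multigrid on the coupled system with a field-split
> > >>>> smoother on every level.)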
> > >>>
> > >>> I sadly don't have a solid background on multigrid methods, but as
> > >>> mentioned in a previous thread
> > >>>
> > >>>
> > >>> http://lists.mcs.anl.gov/pipermail/petsc-users/2015-February/024219.html
> > >>>
> > >>> AMG has apparently been used (successfully?) for fully-coupled
> > >>> finite-volume discretizations of Navier-Stokes:
> > >>>
> > >>> http://dx.doi.org/10.1080/10407790.2014.894448
> > >>> http://dx.doi.org/10.1016/j.jcp.2008.08.027
> > >>>
> > >>> I was hoping to achieve something similar with the right configuration
> > >>> of the PETSc preconditioners. So far I have only been using GAMG in a
> > >>> straightforward manner, without providing any details on the structure
> > >>> of the linear system. I attached the output of a test run with GAMG.
> > >>>
> > >>>>
> > >>>>> from PETSc on linear systems of the form (after reordering the
> > >>>>> variables):
> > >>>>>
> > >>>>> [A_uu   0     0   A_up  A_uT]
> > >>>>> [0    A_vv    0   A_vp  A_vT]
> > >>>>> [0      0   A_ww  A_wp  A_wT]
> > >>>>> [A_pu A_pv  A_pw  A_pp   0  ]
> > >>>>> [A_Tu A_Tv  A_Tw  A_Tp  A_TT]
> > >>>>>
> > >>>>> where each of the block matrices A_ij, with i,j in {u,v,w,p,T},
> > >>>>> results directly from an FVM discretization of the incompressible
> > >>>>> Navier-Stokes equations and the temperature equation. The fifth row
> > >>>>> and column are optional, depending on the method I choose to couple
> > >>>>> the temperature. The matrix is stored as a single AIJ matrix.
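> > >>>>>
> > >>>>> (If it matters: with this ordering each field occupies a contiguous
> > >>>>> range of rows, so the splits could be described to PCFIELDSPLIT via
> > >>>>> index sets, roughly as in the following C-style sketch, where n_u
> > >>>>> and row_start_u are placeholders for my local numbering:
> > >>>>>
> > >>>>>     IS is_u, is_p;   /* one IS per field; only u and p shown here   */
> > >>>>>     ISCreateStride(PETSC_COMM_WORLD, n_u, row_start_u, 1, &is_u);
> > >>>>>     /* ... likewise for v, w, p (and T) ... */
> > >>>>>     PCFieldSplitSetIS(pc, "u", is_u);   /* named splits give option   */
> > >>>>>     PCFieldSplitSetIS(pc, "p", is_p);   /* prefixes -fieldsplit_u_ etc. */
> > >>>>>
> > >>>>> so that block preconditioners can act on the physical fields.)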
> > >>>>>
> > >>>>> Regards,
> > >>>>> Fabian Gabel
> > >>>
> > >>> <cpld_0128.out.578677>
> > >>
> > >
> > > <log.blocksize4><log.txt>
> >
>
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener