[petsc-users] Multigrid preconditioning of entire linear systems for discretized coupled multiphysics problems

Matthew Knepley knepley at gmail.com
Tue Mar 3 07:14:01 CST 2015


On Mon, Mar 2, 2015 at 9:19 PM, Fabian Gabel <gabel.fabian at gmail.com> wrote:

> On Mo, 2015-03-02 at 19:43 -0600, Barry Smith wrote:
> >   Do you really want tolerances: relative=1e-90, absolute=1.10423,
> > divergence=10000? That is an absolute tolerance of 1.1? Normally that
> > would be huge.
>
> I started using atol as the convergence criterion with -ksp_norm_type
> unpreconditioned. The value of atol gets updated every outer iteration.
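>
> For reference, the update amounts to roughly the following before each
> outer iteration (ATOL_NEW and OUTER_RES_NORM are illustrative names,
> not my exact code):
>
>         ATOL_NEW = 0.1D0 * OUTER_RES_NORM
>         CALL KSPSetTolerances(KRYLOV, 1.0D-90, ATOL_NEW, 1.0D4, &
>                               PETSC_DEFAULT_INTEGER, IERR)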
>
> >   You can provide your matrix with a block size that GAMG will use
> > with MatSetBlockSize().
>
> I think something went wrong. After setting the block size to 4 and
> solving for (u,v,w,p), the convergence degraded significantly. I
> attached the results for a smaller test case; they show the increase in
> the number of inner iterations needed once the block size is set via
> MatSetBlockSize().
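>
> For reference, the only change was this call, placed before the matrix
> preallocation (the block size must be set before preallocating, and it
> assumes the four unknowns are interlaced point by point):
>
>         CALL MatSetBlockSize(CMAT, 4, IERR)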
>
> >
> >   I would use -coupledsolve_mg_coarse_sub_pc_type lu; it is weird
> > that it is using SOR for 27 points.
> >
> >   So you must have provided a null space since it printed "has
> > attached null space"
>
> The system does indeed have a one-dimensional null space (from the
> pressure equation with Neumann boundary conditions). But now that you
> mention it: it seems that the outer GMRES doesn't notice that the
> matrix has an attached null space. Replacing
>
>         CALL MatSetNullSpace(CMAT,NULLSP,IERR)
>
> with
>
>         CALL KSPSetNullSpace(KRYLOV,NULLSP,IERR)
>
> solves this. What is wrong with using MatSetNullSpace?
>

That matrix must not be the one set as the system matrix for the KSP.
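
For MatSetNullSpace() to take effect, the null space has to be attached
to the very Mat that is passed to KSPSetOperators(); a minimal sketch,
using the names from your code:

        CALL MatSetNullSpace(CMAT, NULLSP, IERR)
        CALL KSPSetOperators(KRYLOV, CMAT, CMAT, IERR)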

   Matt


> Fabian
>
>
> >
> >   Barry
> >
> >
> >
> > > On Mar 2, 2015, at 6:39 PM, Fabian Gabel <gabel.fabian at gmail.com>
> > > wrote:
> > >
> > > On Mo, 2015-03-02 at 16:29 -0700, Jed Brown wrote:
> > >> Fabian Gabel <gabel.fabian at gmail.com> writes:
> > >>
> > >>> Dear PETSc Team,
> > >>>
> > >>> I came across the following paragraph in your publication "Composable
> > >>> Linear Solvers for Multiphysics" (2012):
> > >>>
> > >>> "Rather than splitting the matrix into large blocks and
> > >>> forming a preconditioner from solvers (for example, multi-
> > >>> grid) on each block, one can perform multigrid on the entire
> > >>> system, basing the smoother on solves coming from the tiny
> > >>> blocks coupling the degrees of freedom at a single point (or
> > >>> small number of points). This approach is also handled in
> > >>> PETSc, but we will not elaborate on it here."
> > >>>
> > >>> How would I use a multigrid preconditioner (GAMG)
> > >>
> > >> The heuristics in GAMG are not appropriate for indefinite/saddle-point
> > >> systems such as arise from Navier-Stokes.  You can use geometric
> > >> multigrid and use the fieldsplit techniques described in the paper
> > >> as a smoother, for example.
> > >
> > > Sadly, I don't have a solid background in multigrid methods, but as
> > > mentioned in a previous thread
> > >
> > > http://lists.mcs.anl.gov/pipermail/petsc-users/2015-February/024219.html
> > >
> > > AMG has apparently been used (successfully?) for fully-coupled
> > > finite-volume discretizations of Navier-Stokes:
> > >
> > > http://dx.doi.org/10.1080/10407790.2014.894448
> > > http://dx.doi.org/10.1016/j.jcp.2008.08.027
> > >
> > > I was hoping to achieve something similar with the right configuration
> > > of the PETSc preconditioners. So far I have only been using GAMG in a
> > > straightforward manner, without providing any details on the structure
> > > of the linear system. I attached the output of a test run with GAMG.
> > >
> > >>
> > >>> from PETSc on linear systems of the form (after reordering the
> > >>> variables):
> > >>>
> > >>> [A_uu   0     0   A_up  A_uT]
> > >>> [0    A_vv    0   A_vp  A_vT]
> > >>> [0      0   A_ww  A_wp  A_wT]
> > >>> [A_pu A_pv  A_pw  A_pp   0  ]
> > >>> [A_Tu A_Tv  A_Tw  A_Tp  A_TT]
> > >>>
> > >>> where each of the block matrices A_ij, with i,j in {u,v,w,p,T},
> > >>> results directly from a FVM discretization of the incompressible
> > >>> Navier-Stokes equations and the temperature equation. The fifth row
> > >>> and column are optional, depending on the method I choose to couple
> > >>> the temperature. The matrix is stored as one AIJ matrix.
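> > >>>
> > >>> For illustration, with this ordering the five fields could be
> > >>> handed to a PCFIELDSPLIT through index sets; a rough sequential
> > >>> sketch (NP is the number of cells, PRECON the preconditioner, and
> > >>> the variable names are made up):
> > >>>
> > >>>         CALL ISCreateStride(PETSC_COMM_SELF, NP, 0*NP, 1, IS_U, IERR)
> > >>>         CALL ISCreateStride(PETSC_COMM_SELF, NP, 1*NP, 1, IS_V, IERR)
> > >>>         CALL ISCreateStride(PETSC_COMM_SELF, NP, 2*NP, 1, IS_W, IERR)
> > >>>         CALL ISCreateStride(PETSC_COMM_SELF, NP, 3*NP, 1, IS_P, IERR)
> > >>>         CALL ISCreateStride(PETSC_COMM_SELF, NP, 4*NP, 1, IS_T, IERR)
> > >>>         CALL PCFieldSplitSetIS(PRECON, 'u', IS_U, IERR)
> > >>>         CALL PCFieldSplitSetIS(PRECON, 'v', IS_V, IERR)
> > >>>         CALL PCFieldSplitSetIS(PRECON, 'w', IS_W, IERR)
> > >>>         CALL PCFieldSplitSetIS(PRECON, 'p', IS_P, IERR)
> > >>>         CALL PCFieldSplitSetIS(PRECON, 'T', IS_T, IERR)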
> > >>>
> > >>> Regards,
> > >>> Fabian Gabel
> > >
> > > <cpld_0128.out.578677>
> >
>
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener