<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Mar 2, 2015 at 9:19 PM, Fabian Gabel <span dir="ltr"><<a href="mailto:gabel.fabian@gmail.com" target="_blank">gabel.fabian@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Mo, 2015-03-02 at 19:43 -0600, Barry Smith wrote:<br>
> Do you really want tolerances: relative=1e-90, absolute=1.10423, divergence=10000? That is an absolute tolerance of 1.1? Normally that would be huge.<br>
<br>
I started using atol as the convergence criterion with -ksp_norm_type<br>
unpreconditioned. The value of atol gets updated every outer iteration.<br>
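<br>
For illustration, a minimal sketch of how such an update could look with the PETSc Fortran interface; KRYLOV is the KSP name used later in this thread, while ATOL, RHS and SOL are placeholder names:<br>
<br>
! ATOL recomputed from the current outer residual (scheme-specific)<br>
CALL KSPSetTolerances(KRYLOV,1.0D-90,ATOL,PETSC_DEFAULT_REAL,PETSC_DEFAULT_INTEGER,IERR)<br>
CALL KSPSolve(KRYLOV,RHS,SOL,IERR)<br>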
<br>
> You can provide your matrix with a block size that GAMG will use with MatSetBlockSize().<br>
<br>
I think something went wrong. When I set the block size to 4 and solved<br>
for (u,v,w,p), the convergence degraded significantly. I attached the<br>
results for a smaller test case, which show the increase in the number<br>
of inner iterations needed when the block size is set via<br>
MatSetBlockSize().<br>
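<br>
For reference, a minimal sketch of where such a call would go (CMAT is the matrix name used later in this thread, NLOCAL is a placeholder); the block size describes the layout correctly only if the unknowns are interlaced point-wise, i.e. the rows are ordered (u,v,w,p) for each grid point:<br>
<br>
CALL MatCreate(PETSC_COMM_WORLD,CMAT,IERR)<br>
CALL MatSetSizes(CMAT,NLOCAL,NLOCAL,PETSC_DETERMINE,PETSC_DETERMINE,IERR)<br>
CALL MatSetType(CMAT,MATAIJ,IERR)<br>
! 4 unknowns (u,v,w,p) per grid point; set before preallocation<br>
CALL MatSetBlockSize(CMAT,4,IERR)<br>
! ... preallocation and assembly as before ...<br>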
<br>
><br>
> I would use -coupledsolve_mg_coarse_sub_pc_type lu; it is weird that it is using SOR for 27 points.<br>
><br>
> So you must have provided a null space since it printed "has attached null space"<br>
<br>
The system indeed has a one-dimensional null space (from the pressure<br>
equation with Neumann boundary conditions). But now that you mention it:<br>
it seems that the outer GMRES doesn't notice that the matrix has an<br>
attached null space. Replacing<br>
<br>
CALL MatSetNullSpace(CMAT,NULLSP,IERR)<br>
<br>
with<br>
<br>
CALL KSPSetNullSpace(KRYLOV,NULLSP,IERR)<br>
<br>
solves this. What is wrong with using MatSetNullSpace?<br></blockquote><div><br></div><div>Then that matrix must not be the one set as the system matrix for the KSP.</div><div><br></div><div>  Matt</div><div> </div>
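<div>For illustration, a minimal sketch of the pattern this implies (PETSc Fortran interface; KRYLOV, CMAT and NULLSP are the names used above, RHS and SOL are placeholders, and the exact calls may differ between PETSc versions):</div><div><br></div><div>CALL MatSetNullSpace(CMAT,NULLSP,IERR)<br>! the KSP only uses this null space if CMAT is also its system matrix<br>CALL KSPSetOperators(KRYLOV,CMAT,CMAT,IERR)<br>CALL KSPSolve(KRYLOV,RHS,SOL,IERR)</div><div><br></div>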
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Fabian<br>
<br>
<br>
><br>
> Barry<br>
><br>
><br>
><br>
> > On Mar 2, 2015, at 6:39 PM, Fabian Gabel <<a href="mailto:gabel.fabian@gmail.com">gabel.fabian@gmail.com</a>> wrote:<br>
> ><br>
> > On Mo, 2015-03-02 at 16:29 -0700, Jed Brown wrote:<br>
> >> Fabian Gabel <<a href="mailto:gabel.fabian@gmail.com">gabel.fabian@gmail.com</a>> writes:<br>
> >><br>
> >>> Dear PETSc Team,<br>
> >>><br>
> >>> I came across the following paragraph in your publication "Composable<br>
> >>> Linear Solvers for Multiphysics" (2012):<br>
> >>><br>
> >>> "Rather than splitting the matrix into large blocks and<br>
> >>> forming a preconditioner from solvers (for example, multi-<br>
> >>> grid) on each block, one can perform multigrid on the entire<br>
> >>> system, basing the smoother on solves coming from the tiny<br>
> >>> blocks coupling the degrees of freedom at a single point (or<br>
> >>> small number of points). This approach is also handled in<br>
> >>> PETSc, but we will not elaborate on it here."<br>
> >>><br>
> >>> How would I use a multigrid preconditioner (GAMG)<br>
> >><br>
> >> The heuristics in GAMG are not appropriate for indefinite/saddle-point<br>
> >> systems such as arise from Navier-Stokes. You can use geometric<br>
> >> multigrid and use the fieldsplit techniques described in the paper as a<br>
> >> smoother, for example.<br>
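<br>
For illustration, a hypothetical options sketch of this "geometric multigrid with a fieldsplit smoother" setup, using the coupledsolve_ prefix seen elsewhere in this thread; the number of levels, the grid hierarchy (e.g. from a DM) and the definition of the velocity/pressure fields still have to be supplied separately:<br>
<br>
-coupledsolve_pc_type mg<br>
-coupledsolve_pc_mg_levels 4<br>
-coupledsolve_mg_levels_ksp_type richardson<br>
-coupledsolve_mg_levels_pc_type fieldsplit<br>
-coupledsolve_mg_levels_pc_fieldsplit_type schur<br>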
> ><br>
> > I sadly don't have a solid background on multigrid methods, but as<br>
> > mentioned in a previous thread<br>
> ><br>
> > <a href="http://lists.mcs.anl.gov/pipermail/petsc-users/2015-February/024219.html" target="_blank">http://lists.mcs.anl.gov/pipermail/petsc-users/2015-February/024219.html</a><br>
> ><br>
> > AMG has apparently been used (successfully?) for fully-coupled<br>
> > finite-volume discretizations of Navier-Stokes:<br>
> ><br>
> > <a href="http://dx.doi.org/10.1080/10407790.2014.894448" target="_blank">http://dx.doi.org/10.1080/10407790.2014.894448</a><br>
> > <a href="http://dx.doi.org/10.1016/j.jcp.2008.08.027" target="_blank">http://dx.doi.org/10.1016/j.jcp.2008.08.027</a><br>
> ><br>
> > I was hoping to achieve something similar with the right configuration<br>
> > of the PETSc preconditioners. So far I have only been using GAMG in a<br>
> > straightforward manner, without providing any details on the structure<br>
> > of the linear system. I attached the output of a test run with GAMG.<br>
> ><br>
> >><br>
> >>> from PETSc on linear systems of the form (after reordering the<br>
> >>> variables):<br>
> >>><br>
> >>> [A_uu 0 0 A_up A_uT]<br>
> >>> [0 A_vv 0 A_vp A_vT]<br>
> >>> [0 0 A_ww A_wp A_wT]<br>
> >>> [A_pu A_pv A_pw A_pp 0 ]<br>
> >>> [A_Tu A_Tv A_Tw A_Tp A_TT]<br>
> >>><br>
> >>> where each of the block matrices A_ij, with i,j in {u,v,w,p,T}, results<br>
> >>> directly from an FVM discretization of the incompressible Navier-Stokes<br>
> >>> equations and the temperature equation. The fifth row and column are<br>
> >>> optional, depending on the method I choose to couple the temperature.<br>
> >>> The matrix is stored as one AIJ matrix.<br>
> >>><br>
> >>> Regards,<br>
> >>> Fabian Gabel<br>
> ><br>
> > <cpld_0128.out.578677><br>
><br>
<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div>
</div></div>