[petsc-users] Scalable Solver for Incompressible Flow
Mark Adams
mfadams at lbl.gov
Mon May 8 06:27:32 CDT 2023
On Mon, May 8, 2023 at 2:32 AM Sebastian Blauth <sebastian.blauth at itwm.fraunhofer.de> wrote:
> Hello everyone,
>
> I wanted to briefly follow up on my question (see my last reply).
> Does anyone know or have an idea why the LSC preconditioner in PETSc does
> not seem to scale well with the problem size? (The number of iterations of
> the outer fgmres solver I am using grows nearly linearly with the problem
> size in my example.)
> I have also already tried using -ksp_diagonal_scale but the results are
> identical.
> Any help is really appreciated.
>
I would start by finding results in the literature that you like, that is,
results on a problem similar to yours whose performance you would be happy
with, and reproducing them with your code.
If you can do that, then you have a research problem: figuring out how to get
your own problem to work equally well.
If your solver does not reproduce the published results, then we might be
able to provide some advice.
Mark
>
> Thanks a lot,
> Sebastian
>
> On 03.05.2023 09:07, Sebastian Blauth wrote:
> > First of all, yes, you are correct that I am trying to solve the
> > stationary incompressible Navier-Stokes equations.
> >
> > On 02.05.2023 21:33, Matthew Knepley wrote:
> >> On Tue, May 2, 2023 at 2:29 PM Jed Brown <jed at jedbrown.org> wrote:
> >>
> >> Sebastian Blauth <sebastian.blauth at itwm.fraunhofer.de> writes:
> >>
> >> > I agree with your comment for the Stokes equations - for these, I have
> >> > already tried and used the pressure mass matrix as part of an (additive)
> >> > block preconditioner and it gave mesh-independent results.
> >> >
> >> > However, for the Navier-Stokes equations, is the Schur complement really
> >> > spectrally equivalent to the pressure mass matrix?
> >>
> >> No, it's not. You'd want something like PCD (better, but not
> >> algebraic) or LSC.
> >>
> >
> > I would like to take a closer look at the LSC preconditioner. With it, I
> > have also not achieved mesh-independent results so far. Again, I am using a
> > simple 2D channel problem for testing purposes, with the following options
> > (I know that the tolerances are stricter than necessary at the moment, but
> > they should just illustrate the behavior w.r.t. mesh refinement):
> >
> > -ksp_type fgmres
> > -ksp_gmres_restart 100
> > -ksp_gmres_cgs_refinement_type refine_ifneeded
> > -ksp_max_it 1000
> > -ksp_rtol 1e-10
> > -ksp_atol 1e-30
> > -pc_type fieldsplit
> > -pc_fieldsplit_type schur
> > -pc_fieldsplit_schur_fact_type full
> > -pc_fieldsplit_schur_precondition self
> > -fieldsplit_0_ksp_type preonly
> > -fieldsplit_0_pc_type lu
> > -fieldsplit_1_ksp_type gmres
> > -fieldsplit_1_ksp_pc_side right
> > -fieldsplit_1_ksp_gmres_restart 100
> > -fieldsplit_1_ksp_gmres_cgs_refinement_type refine_ifneeded
> > -fieldsplit_1_ksp_max_it 1000
> > -fieldsplit_1_ksp_rtol 1e-10
> > -fieldsplit_1_ksp_atol 1e-30
> > -fieldsplit_1_pc_type lsc
> > -fieldsplit_1_lsc_ksp_type preonly
> > -fieldsplit_1_lsc_pc_type lu
> > -fieldsplit_1_ksp_converged_reason
> >
> > Again, direct solvers are used for the sub-blocks so that only the
> > influence of the LSC preconditioner is seen. I have suitable
> > (boomeramg-based) preconditioners available for all of these solves.
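> >
> > Just as a rough sketch of the algebraic variant I have in mind (assuming
> > PETSc is configured with hypre), I would later replace the direct solves by
> > something like
> >
> > -fieldsplit_0_pc_type hypre
> > -fieldsplit_0_pc_hypre_type boomeramg
> > -fieldsplit_1_lsc_pc_type hypre
> > -fieldsplit_1_lsc_pc_hypre_type boomeramg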
> >
> > At the bottom, I attach the output for different discretizations. As you
> > can see there, the number of iterations increases nearly linearly with
> > the problem size.
> >
> > I think that the problem could be due to a wrong scaling. In your docs,
> > https://petsc.org/release/manualpages/PC/PCLSC/, you write that the LSC
> > preconditioner is implemented as
> >
> > inv(S) \approx inv(A10 A01) A10 A00 A01 inv(A10 A01)
> >
> > However, in the book of Elman, Silvester and Wathen (Finite Elements and
> > Fast Iterative Solvers), the LSC preconditioner is defined as
> >
> > inv(S) \approx inv(A10 inv(T) A01) A10 inv(T) A00 inv(T) A01
> > inv(A10 inv(T) A01)
> >
> > where T = diag(Q) and Q is the velocity mass matrix.
> >
> > There is an option -pc_lsc_scale_diag, which states that the diagonal of A
> > is used for scaling. I suppose this means that the diagonal of the A00
> > block is used - however, this is not the right scaling, is it? Even in the
> > source code of the LSC preconditioner, /src/ksp/pc/impls/lsc/lsc.c, it is
> > mentioned that a mass matrix should be used...
> > Is there any way to implement this in PETSc? Maybe by supplying the mass
> > matrix as Pmat?
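> >
> > For illustration, here is roughly what I have in mind on the code side.
> > This is only a sketch of how I guess it could be wired up (error checking
> > omitted); Qp, L and Lp would be matrices assembled by my own code, and I am
> > not sure this is the intended usage:
> >
> >     KSP ksp;        /* outer solver, fieldsplit PC already configured */
> >     PC  pc;
> >     Mat Qp, L, Lp;  /* user-assembled pressure mass matrix and
> >                        Laplacian-type matrices (my own assumption) */
> >
> >     KSPGetPC(ksp, &pc);
> >
> >     /* Variant 1: precondition the Schur complement with a user-provided
> >        matrix (e.g. the pressure mass matrix) instead of "self"/"selfp". */
> >     PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_USER, Qp);
> >
> >     /* Variant 2: keep -fieldsplit_1_pc_type lsc and compose the matrices
> >        that PCLSC looks up by name on the Schur complement operator
> >        (this has to happen after the fieldsplit PC has been set up). */
> >     PetscInt n;
> >     KSP     *subksp;
> >     Mat      S;
> >     PCFieldSplitGetSubKSP(pc, &n, &subksp);   /* subksp[1] is the Schur solve */
> >     KSPGetOperators(subksp[1], &S, NULL);
> >     PetscObjectCompose((PetscObject)S, "LSC_L",  (PetscObject)L);
> >     PetscObjectCompose((PetscObject)S, "LSC_Lp", (PetscObject)Lp);
> >     PetscFree(subksp);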
> >
> > Thanks a lot in advance,
> > Sebastian
> >
> >>
> >> I think you can do a better job than that using something like
> >>
> >> https://arxiv.org/abs/1810.03315
> >>
> >> Basically, you use an augmented Lagrangian thing to make the Schur
> >> complement well-conditioned,
> >> and then use a special smoother to handle that perturbation.
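> >>
> >> (Roughly, as I understand that approach: the velocity block is augmented,
> >> A00 -> A00 + gamma A01 inv(W) A10 with a weight matrix W such as the
> >> pressure mass matrix, and the Schur complement of the augmented system can
> >> then be preconditioned well by a scaled pressure mass matrix, roughly
> >> -(nu + gamma) inv(Mp). The larger gamma, the nicer the Schur part becomes;
> >> the price is that the augmented velocity block needs a special multigrid
> >> smoother.)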
> >>
> >> > And even if it is, the convergence is only good for small Reynolds
> >> > numbers; for moderately high ones the convergence really deteriorates.
> >> > This is why I am trying to make fieldsplit_schur_precondition selfp work
> >> > better (this is, if I understand it correctly, a SIMPLE-type
> >> > preconditioner).
> >>
> >> SIMPLE is for short time steps (not too far from resolving CFL) and
> >> bad for steady. This taxonomy is useful, though the problems are
> >> super academic and they don't use high aspect ratio.
> >>
> >
> > Okay, I get that I cannot expect the SIMPLE-type preconditioning
> > (schur_precondition selfp) to work efficiently. I guess the reason it works
> > for small time steps (for the unsteady equations) is that the velocity
> > block becomes diagonally dominant in this case, so that diag(A00) is a good
> > approximation of A00.
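> >
> > To spell that guess out (just a back-of-the-envelope sketch, writing Q_u
> > for the velocity mass matrix): for an implicit time step of size dt, the
> > velocity block is roughly
> >
> >     A00 \approx (1/dt) Q_u + K + N(u),
> >
> > with K the viscous and N(u) the linearized convective part. For small dt
> > the mass term dominates, so diag(A00) \approx (1/dt) diag(Q_u), and the
> > selfp matrix
> >
> >     Sp = A11 - A10 inv(diag(A00)) A01 \approx - dt A10 inv(diag(Q_u)) A01
> >
> > then behaves like a (scaled) pressure Laplacian, which would explain why
> > this works for small time steps but not for the stationary problem.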
> >
> >
> >> https://doi.org/10.1016/j.jcp.2007.09.026
> >>
> >>
> >> Thanks,
> >>
> >> Matt
> >>
> >> --
> >> What most experimenters take for granted before they begin their
> >> experiments is infinitely more interesting than any results to which
> >> their experiments lead.
> >> -- Norbert Wiener
> >>
> >> https://www.cse.buffalo.edu/~knepley/
> >
> >
> > And here is the output of my scaling tests
> >
> > 8x8 discretization
> >
> > Newton solver:  iter,  abs. residual (abs. tol),  rel. residual (rel. tol)
> >
> > Newton solver:     0,  1.023e+03 (1.00e-30),  1.000e+00 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 38
> > Newton solver:     1,  1.313e+03 (1.00e-30),  1.283e+00 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 76
> > Newton solver:     2,  1.198e+02 (1.00e-30),  1.171e-01 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 74
> > Newton solver:     3,  7.249e-01 (1.00e-30),  7.084e-04 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 74
> > Newton solver:     4,  3.883e-05 (1.00e-30),  3.795e-08 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 74
> > Newton solver:     5,  2.778e-12 (1.00e-30),  2.714e-15 (1.00e-10)
> >
> >
> > 16x16 discretization
> >
> > Newton solver:  iter,  abs. residual (abs. tol),  rel. residual (rel. tol)
> >
> > Newton solver:     0,  1.113e+03 (1.00e-30),  1.000e+00 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 62
> > Newton solver:     1,  8.316e+02 (1.00e-30),  7.475e-01 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 141
> > Newton solver:     2,  5.806e+01 (1.00e-30),  5.218e-02 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 119
> > Newton solver:     3,  3.309e-01 (1.00e-30),  2.974e-04 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 118
> > Newton solver:     4,  9.085e-06 (1.00e-30),  8.166e-09 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 120
> > Newton solver:     5,  3.475e-12 (1.00e-30),  3.124e-15 (1.00e-10)
> >
> >
> > 32x32 discretization
> >
> > Newton solver:  iter,  abs. residual (abs. tol),  rel. residual (rel. tol)
> >
> > Newton solver:     0,  1.330e+03 (1.00e-30),  1.000e+00 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 98
> > Newton solver:     1,  5.913e+02 (1.00e-30),  4.445e-01 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 183
> > Newton solver:     2,  3.214e+01 (1.00e-30),  2.416e-02 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 152
> > Newton solver:     3,  2.059e-01 (1.00e-30),  1.547e-04 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 151
> > Newton solver:     4,  6.949e-06 (1.00e-30),  5.223e-09 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 149
> > Newton solver:     5,  5.300e-12 (1.00e-30),  3.983e-15 (1.00e-10)
> >
> >
> > 64x64 discretization
> >
> > Newton solver:  iter,  abs. residual (abs. tol),  rel. residual (rel. tol)
> >
> > Newton solver:     0,  1.707e+03 (1.00e-30),  1.000e+00 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 198
> > Newton solver:     1,  4.259e+02 (1.00e-30),  2.494e-01 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 357
> > Newton solver:     2,  1.706e+01 (1.00e-30),  9.993e-03 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 266
> > Newton solver:     3,  1.134e-01 (1.00e-30),  6.639e-05 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 261
> > Newton solver:     4,  4.285e-06 (1.00e-30),  2.510e-09 (1.00e-10)
> > Linear fieldsplit_1_ solve converged due to CONVERGED_RTOL iterations 263
> > Newton solver:     5,  9.650e-12 (1.00e-30),  5.652e-15 (1.00e-10)
> >
>
> --
> Dr. Sebastian Blauth
> Fraunhofer-Institut für
> Techno- und Wirtschaftsmathematik ITWM
> Abteilung Transportvorgänge
> Fraunhofer-Platz 1, 67663 Kaiserslautern
> Telefon: +49 631 31600-4968
> sebastian.blauth at itwm.fraunhofer.de
> www.itwm.fraunhofer.de
>