[petsc-users] Iterative solver and condition number from FDM + fill-in

Matthew Knepley knepley at gmail.com
Fri Sep 21 05:30:00 CDT 2018


On Fri, Sep 21, 2018 at 5:17 AM Appel, Thibaut <t.appel17 at imperial.ac.uk>
wrote:

> Hi Jed,
>
> - It’s incompressible flow, but the equations are not singular; we’re
> using a Poisson equation for the pressure.
>

So the only solve is a Poisson solve? This sounds like you have implemented
one of the family of Schur complement solvers for Stokes-like equations. We
have support (PCFIELDSPLIT) for trying a great range of them, and I suggest
that this is your best bet. The idea is to create an index set of the
velocity unknowns and one of the pressure unknowns and give them to the PC,
so that it can split the system into two parts and do intelligent things
with each. This is described in many of the tutorials online.
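
For example, a minimal sketch in C (assuming ksp is your KSP, and is_u and
is_p are IS objects you have already built, e.g. with ISCreateGeneral(),
holding the global indices of the velocity and pressure unknowns):

  PC pc;
  /* Pull the preconditioner out of the Krylov solver */
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCFIELDSPLIT);
  /* Tell the PC which unknowns belong to each field */
  PCFieldSplitSetIS(pc, "u", is_u);
  PCFieldSplitSetIS(pc, "p", is_p);
  /* Use a Schur complement factorization of the split system */
  PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);

Once the splits are set in code, you can experiment from the command line
with -pc_type fieldsplit -pc_fieldsplit_type schur and the related
-fieldsplit_* options.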

Do you mind writing the equations briefly so we can see them?

  Thanks,

    Matt


> - It’s a centered/collocated grid.
> - Complex because we’re seeking solutions in wave form with complex
> wavenumbers in the exponential; the right-hand side is also complex.
>
> Thibaut
>
> > On 21 Sep 2018, at 04:13, Jed Brown <jed at jedbrown.org> wrote:
> >
> > "Appel, Thibaut" <t.appel17 at imperial.ac.uk> writes:
> >
> >> Dear users,
> >>
> >> I’m having trouble finding a PC/KSP pair that works for my problem in
> parallel.
> >> I’m solving linearized Navier-Stokes PDEs discretized with a finite
> difference method in 2D or 3D on a logically rectangular grid, in complex
> arithmetic.
> >
> > Compressible or incompressible?  Staggered or centered grid?  Why
> complex arithmetic?
> >
> >> It obviously works fine with a direct solver, and also with GMRES +
> ILU(3) in serial.
> >>
> >> I tried different combinations such as
> >> -ksp_type gmres -pc_type asm -sub_pc_type ilu
> >> -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu
> >>
> >> but cannot get the relative residuals below 10^(-2) after 2,000
> iterations, even when increasing the number of ILU fill-in levels (up to
> 5), the number of GMRES restarts (300 to 1000), or trying options such as
> -ksp_initial_guess_nonzero or -ksp_gmres_cgs_refinement_type refine_always.
> -ksp_monitor_true_residual does not seem to give more information.
> >> Maybe there’s room for more experimentation, but could you suggest a
> way to get a better diagnostic?
> >>
> >> With the different equation sets I’m working with, the condition
> numbers estimated with the PETSc FAQ method vary between 10^3 and 10^7.
> >> On top of that I have ridiculous fill-in and have to set
> -pc_factor_fill to 14, up to 35 (!) sometimes.
> >>
> >> For our application we need a lot of discretization points in one
> spatial direction, and I read that for FD methods the condition number
> scales with the square of the number of grid points per direction (i.e.,
> the inverse square of the grid spacing). But is there a way to reduce it
> in my case?
> >> I’m also aware that fill-in is inevitable when you have a sparse
> matrix with a banded structure arising from an FDM. But I was wondering
> if there’s something more I can do on the numerical side to reduce
> fill-in and/or help the iterative solver converge faster?
> >>
> >> I know my discretized PDEs + boundary conditions are scaled
> consistently with regard to the matrix entries.
> >> I’m using natural ordering (if my unknowns are a_ij, b_ij, the unknown
> vector starts with a_00 b_00 a_10 b_10 a_20 b_20 and ends with a_nxny
> b_nxny…), but I do not think this has any impact?
> >>
> >> Thanks for your support,
> >>
> >>
> >> Thibaut
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

