[petsc-users] Iterative solver and condition number from FDM + fill-in

Appel, Thibaut t.appel17 at imperial.ac.uk
Fri Sep 21 04:16:51 CDT 2018


Hi Jed,

- It’s incompressible flow, but the equations are not singular: we use a Poisson equation for the pressure.
- It’s a centered/collocated grid.
- Complex arithmetic because we seek solutions in wave form, with complex wavenumbers in the exponential (see the sketch below); the right-hand side is also complex.
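
(To make the wave form concrete, here is a sketch with purely illustrative notation, assuming a normal-mode ansatz; the variable names and the choice of y as the resolved direction are only an example:

    q(x,y,z,t) = \hat{q}(y) \exp[ i( \alpha x + \beta z - \omega t ) ],

with complex wavenumbers \alpha and \beta.)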

Thibaut

> On 21 Sep 2018, at 04:13, Jed Brown <jed at jedbrown.org> wrote:
> 
> "Appel, Thibaut" <t.appel17 at imperial.ac.uk> writes:
> 
>> Dear users,
>> 
>> I’m having trouble finding a PC/KSP pair that works for my problem in parallel.
>> I’m solving linearized Navier-Stokes PDEs discretized with a finite difference method, in 2D or 3D, on a logically rectangular grid, in complex arithmetic.
> 
> Compressible or incompressible?  Staggered or centered grid?  Why complex arithmetic?
> 
>> It works fine with a direct solver, as expected, and also with GMRES + ILU(3) in serial.
>> 
>> I tried different combinations such as
>> -ksp_type gmres -pc_type asm -sub_pc_type ilu
>> -ksp_type gmres -pc_type bjacobi -sub_pc_type ilu
>>    
>> but cannot get the relative residual below 10^(-2) after 2,000 iterations, even after increasing the number of ILU fill-in levels (up to 5) and the GMRES restart length (from 300 to 1000), and trying options such as -ksp_initial_guess_nonzero or -ksp_gmres_cgs_refinement_type refine_always. -ksp_monitor_true_residual does not seem to give more information either.
>> Maybe there’s room for more experimentation, but could you suggest a way to get a better diagnostic?
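>> 
>> For reference, a minimal set of extra monitoring flags I could run with (standard PETSc options, listed here just as a sketch rather than my full command line) would be:
>> 
>>   -ksp_converged_reason -ksp_monitor_true_residual -ksp_view
>> 
>> and I’m happy to send the corresponding output if that helps.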
>> 
>> With the different equation sets I’m working with, the condition numbers estimated with the PETSc FAQ method (see below) vary between 10^3 and 10^7.
>> On top of that, the factorization fill-in is enormous: I have to set -pc_factor_fill to 14, and sometimes up to 35 (!).
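>> 
>> (By the PETSc FAQ method I mean, roughly, running unpreconditioned GMRES with a large restart and monitoring the extreme singular value estimates, i.e. something along the lines of
>> 
>>   -ksp_type gmres -ksp_gmres_restart 1000 -pc_type none -ksp_monitor_singular_value
>> 
>> so these condition numbers are only rough estimates.)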
>> 
>> For our application we need a lot of discretization points in one spatial direction, and I have read that for FD discretizations the condition number grows roughly as the inverse square of the grid spacing (i.e. quadratically with the number of points in each direction). But is there a way to reduce it in my case?
>> I’m also aware that fill-in is inevitable when factoring a sparse banded matrix arising from an FDM. But I was wondering if there is anything more I can do on the numerical side to reduce fill-in and/or help the iterative solver converge faster?
>> 
>> I know my discretized PDEs + boundary conditions are scaled consistently with respect to the matrix entries.
>> I’m using natural ordering (if my unknowns are a_ij, b_ij, the unknown vector starts with a_00 b_00 a_10 b_10 a_20 b_20 and ends with a_nxny b_nxny…), but I would not expect this to have any impact. Does it?
>> 
>> Thanks for your support,
>> 
>> 
>> Thibaut

