[petsc-users] Many issues solving matrix derived from a CFD coupled Solver

Jed Brown jed at 59A2.org
Mon May 31 06:18:47 CDT 2010


On Mon, 31 May 2010 12:27:33 +0200, Luca Mangani <luca.mangani at hslu.ch> wrote:
> Hi Jed,
> 
> first of all thank you for your reply and your support.
> 
> In the past few days I have also tried different configurations, but without any success.
> 
> > Once you have decided on a discretization and algorithm from the
> > literature that you would like to implement (or your own variant), we
> > can give suggestions.
> My solver is based on a finite-volume discretization with a colocated 
> arrangement for the calculated variables. (I have a paper as a 
> reference, and I can send it to you if you want.)

I suggest finding a preconditioner in the literature that was
demonstrated to work for a similar discretization.

> The discretized equations are the incompressible NS equations in 
> pressure-based form. The matrix is built from the discretization of the 
> velocity and pressure equations solved in a coupled way. It is a 
> sparse matrix of dimension 4*Ncells, and the diagonal is always filled 
> (no saddle point).

Your matrix has (after a suitable permutation) the form

  J = [A B; C D]

with A nonsymmetric definite (the symmetric part is positive
semidefinite) and D negative semidefinite (and probably symmetric).
This is an indefinite system and is usually referred to more
specifically as a (generalized) saddle point problem; see Benzi, Golub &
Liesen 2005 if you don't believe me.
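
To make the correspondence concrete (this is my reading of a colocated
pressure-based discretization, not something you wrote):

  A : momentum block (convection + diffusion)
  B : discrete pressure gradient
  C : discrete divergence (continuity)
  D : pressure-pressure coupling from the colocated (Rhie-Chow-type)
      stabilization, which is why your diagonal is filled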

> The direct solver works well, but there isn't any chance of getting
> convergence with iterative solvers. Choosing bcgs + block Jacobi as the
> PC, I get a solution, but only for very small cases (Ncells = 5000).
> Judging from other works, AMG solvers should be the best choice, but
> they do not work at the moment.

AMG will definitely not work if the block structure is not respected.  I
recommend ordering the unknowns so that all 4 dofs at each cell come
together, [u0,v0,w0,p0,u1,v1,w1,p1,...], and calling MatSetBlockSize(A,4)
(you could also use BAIJ); a sketch of this setup is at the end of this
message.  Then try something like

  -pc_type ml -pc_ml_BlockScaling -pc_ml_EnergyMinimization 2

and something similar with BoomerAMG.  Also let us know if

  -pc_type asm -sub_pc_type lu

works in parallel.  If this doesn't work, you will have to follow up on
some of my suggestions.  Iterative solvers aren't magic, and black-box
solvers rarely solve interesting problems.
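
Here is a minimal sketch of what I mean by interlaced ordering plus
block size.  The function name, ncells_local, and the preallocation
estimates are placeholders, not your code:

  #include <petscmat.h>

  /* Create the coupled matrix with the four dofs of each cell interlaced,
     [u0,v0,w0,p0,u1,...], and declare the 4x4 block structure. */
  static PetscErrorCode CreateCoupledMatrix(MPI_Comm comm,PetscInt ncells_local,Mat *J)
  {
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = MatCreate(comm,J);CHKERRQ(ierr);
    ierr = MatSetSizes(*J,4*ncells_local,4*ncells_local,PETSC_DETERMINE,PETSC_DETERMINE);CHKERRQ(ierr);
    ierr = MatSetType(*J,MATBAIJ);CHKERRQ(ierr);  /* AIJ also works once the block size is set */
    ierr = MatSetBlockSize(*J,4);CHKERRQ(ierr);   /* u,v,w,p per cell */
    ierr = MatSetFromOptions(*J);CHKERRQ(ierr);
    /* 7 blocks per row is only a guess (a cell plus its face neighbors);
       use your mesh connectivity for real preallocation. */
    ierr = MatSeqBAIJSetPreallocation(*J,4,7,PETSC_NULL);CHKERRQ(ierr);
    ierr = MatMPIBAIJSetPreallocation(*J,4,7,PETSC_NULL,3,PETSC_NULL);CHKERRQ(ierr);
    /* During assembly, insert one 4x4 block per cell-cell coupling with
       MatSetValuesBlocked(*J,1,&rowcell,1,&colcell,block,ADD_VALUES), then
       call MatAssemblyBegin/MatAssemblyEnd with MAT_FINAL_ASSEMBLY. */
    PetscFunctionReturn(0);
  }

With this layout, the ML and BoomerAMG options above can actually
respect the block structure.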

Note: I would always use GMRES with a large subspace at this stage.
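For example (the restart length is only an illustration, adjust it to
what memory allows):

  -ksp_type gmres -ksp_gmres_restart 200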

Jed

