[petsc-users] Many issues solving matrix derived from a CFD coupled Solver
Jed Brown
jed at 59A2.org
Mon May 31 08:41:23 CDT 2010
On Mon, 31 May 2010 14:03:13 +0200, Luca Mangani <luca.mangani at hslu.ch> wrote:
> Dear Jed,
>
> > The discretized equations are based on the incompressible NS equations
> > in pressure-based form. The matrix is built as a discretization of the
> > velocity and pressure equations solved in a coupled way. It is a
> > sparse matrix of dimension 4*Ncells, and the diagonal is always filled
> > (no saddle point).
>
> For the pressure I solve a pressure equation, not the continuity
> equation, so I should have no saddle-point problem.
What is sitting in the pressure-pressure block (D as described in my
last message)? Usually this is a stabilization term and does not make
the system positive definite. It is possible to have a generalized
saddle point form without a Lagrangian.
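For concreteness (the labels A, B, and C are mine; D is the pressure-pressure
block from my last message), after grouping the velocity and pressure unknowns
the coupled system has the block form

  \begin{pmatrix} A & B \\ C & D \end{pmatrix}
  \begin{pmatrix} u \\ p \end{pmatrix} =
  \begin{pmatrix} f \\ g \end{pmatrix}

If D comes from stabilization or a pressure equation rather than a Lagrange
multiplier, you typically get C != B^T and a small but nonzero D; the solvers
still see a (generalized) saddle point.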
> > AMG will definitely not work if the block structure is not respected. I
> > recommend ordering unknowns so that all 4 dofs at each cell come
> > together [u0,v0,w0,p0,u1,v1,w1,p1,...]
> The pattern and the matrix are arranged exactly that way
> ([u0,v0,w0,p0,u1,v1,w1,p1,...]), and again I saw no benefit with the
> options you suggested (ML multigrid).
This is not surprising since this approach is quite finicky and very
sensitive to the discretization and stabilization. You can try adding
(with ML) options like
-pc_ml_maxNlevels 2 -mg_levels_pc_type ilu -mg_levels_ksp_type gmres -ksp_type fgmres
but I'm not especially optimistic about this.
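Also check that PETSc actually knows about the 4x4 point-block structure; ML
and the factorization-based smoothers can only exploit it if the block size is
declared on the matrix. A minimal sketch (untested, assuming a serial matrix
of size 4*Ncells):

  Mat            A;
  PetscErrorCode ierr;
  ierr = MatCreate(PETSC_COMM_SELF,&A);CHKERRQ(ierr);
  ierr = MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,4*Ncells,4*Ncells);CHKERRQ(ierr);
  ierr = MatSetType(A,MATSEQAIJ);CHKERRQ(ierr);
  ierr = MatSetBlockSize(A,4);CHKERRQ(ierr);  /* declare the 4x4 point blocks */
  /* ... preallocate and assemble with the interlaced [u,v,w,p] ordering ... */

Alternatively, assemble directly into BAIJ (MatCreateSeqBAIJ with block size 4)
so that ILU and the smoothers run point-block-wise.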
> For the moment the solver is serial, so I cannot try what you suggest
> (sorry for that).
You can run ASM in serial with -pc_type asm -pc_asm_blocks 8
-sub_pc_type lu. You could also save the system with -ksp_view_binary
and load it in parallel with src/ksp/ksp/examples/tutorials/ex10.
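If you prefer to write the system out from your code instead of using
-ksp_view_binary, a rough sketch (the file name is arbitrary, and A and b
stand for your assembled matrix and right-hand side) is:

  PetscViewer    viewer;
  PetscErrorCode ierr;
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"system.petsc",FILE_MODE_WRITE,&viewer);CHKERRQ(ierr);
  ierr = MatView(A,viewer);CHKERRQ(ierr);  /* matrix first */
  ierr = VecView(b,viewer);CHKERRQ(ierr);  /* then the right-hand side, the order ex10 expects */
  ierr = PetscViewerDestroy(viewer);CHKERRQ(ierr);

Then something like

  mpiexec -n 4 ./ex10 -f0 system.petsc -ksp_type fgmres -pc_type asm -sub_pc_type lu

lets you experiment with the parallel preconditioners without touching your code.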
> I've also noticed that, solving the same coupled matrix with the same
> solver type (same tolerance and PC, GMRES + ILU) in MATLAB and
> comparing, PETSc is slower
Matlab is probably pivoting, which helps robustness but isn't fast or
bandwidth-friendly. A different ILU that you could try is -pc_type hypre
-pc_hypre_type euclid. If ILU is going to be efficient, the pressure
dofs probably need to come last (on each subdomain).
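If you want to test that a posteriori, a rough sketch of permuting an already
assembled serial matrix so that all pressure rows/columns come last (untested,
assuming the interlaced [u,v,w,p] numbering):

  PetscInt       i,c,*perm;
  IS             isperm;
  Mat            Aperm;
  PetscErrorCode ierr;
  ierr = PetscMalloc(4*Ncells*sizeof(PetscInt),&perm);CHKERRQ(ierr);
  for (c=0; c<Ncells; c++) {
    for (i=0; i<3; i++) perm[3*c+i] = 4*c+i;  /* velocity dofs first */
    perm[3*Ncells+c] = 4*c+3;                 /* pressure dofs last  */
  }
  ierr = ISCreateGeneral(PETSC_COMM_SELF,4*Ncells,perm,&isperm);CHKERRQ(ierr);
  ierr = MatPermute(A,isperm,isperm,&Aperm);CHKERRQ(ierr);
  ierr = PetscFree(perm);CHKERRQ(ierr);

Remember to permute the right-hand side the same way and to undo the
permutation on the solution; renumbering the unknowns like this during
assembly is of course simpler.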
> It also does not converge with the optimized compiled version; maybe
> this is useful information.
Is the convergence behavior different for a PETSC_ARCH built with
--with-debugging=0 than for one built with --with-debugging=1 (the
default)? If so, please make sure the matrix is actually the same in
both cases and save the system with -ksp_view_binary (presumably this
occurs at a modest problem size). Then send configure.log for both
cases, the options you were using, the symptoms you observed, and a
link to your saved system to petsc-maint at mcs.anl.gov.
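A quick sanity check that the assembled matrix really is identical in the two
builds (only a sketch; comparing the saved binary files is more conclusive) is
to print a norm from each run:

  PetscReal      nrm;
  PetscErrorCode ierr;
  ierr = MatNorm(A,NORM_FROBENIUS,&nrm);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD,"||A||_F = %G\n",nrm);CHKERRQ(ierr);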
Jed