[petsc-users] Convergence of transposed linear system.

Gaetan Kenway gaetank at gmail.com
Sun Nov 23 19:54:20 CST 2014


Hi everyone

I have a question relating to preconditioner effectiveness on large
transposed systems. The linear system I'm trying to solve is the Jacobian
matrix of a 3D RANS CFD solver. The block matrix consists of about 3 million
block rows with a block size of 6: 5 for the inviscid equations and 1 for
the SA turbulence model.

The preconditioning matrix is different from the linear system matrix in
two ways: it uses a first-order discretization (instead of second order),
and the viscous fluxes are dropped.

The untransposed system converges about 6 orders of magnitude with
GMRES(100), ASM (overlap 1), and ILU(1) with RCM reordering. The test is
run on 128 processors. There are no convergence difficulties.
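
For concreteness, a minimal sketch of that baseline setup (placeholder
handles A, P, b, x; not the exact production code, and the subdomain
ILU(1)/RCM settings are shown as the equivalent runtime options):

  /* Equivalent runtime options:
       -ksp_type gmres -ksp_gmres_restart 100
       -pc_type asm -pc_asm_overlap 1
       -sub_pc_type ilu -sub_pc_factor_levels 1
       -sub_pc_factor_mat_ordering_type rcm             */
  KSP ksp;
  PC  pc;
  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, P);    /* A: 2nd-order Jacobian, P: 1st-order PC matrix */
  KSPSetType(ksp, KSPGMRES);
  KSPGMRESSetRestart(ksp, 100);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCASM);
  PCASMSetOverlap(pc, 1);
  KSPSetFromOptions(ksp);        /* picks up the -sub_* options for the subdomain solves */
  KSPSolve(ksp, b, x);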

However, when I try to solve the transpose of the same system, either by
calling KSPSolveTranspose() or by explicitly assembling the transpose of
the linear system and its preconditioner and calling KSPSolve(), GMRES
stagnates after a negligible drop in the residual and no further progress
is made.
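
The two ways of posing the transposed solve, sketched with the same
placeholder handles as above:

  /* (a) transpose solve on the original operators */
  KSPSetOperators(ksp, A, P);
  KSPSolveTranspose(ksp, b, x);

  /* (b) explicitly assembled transposes */
  Mat At, Pt;
  MatTranspose(A, MAT_INITIAL_MATRIX, &At);
  MatTranspose(P, MAT_INITIAL_MATRIX, &Pt);
  KSPSetOperators(ksp, At, Pt);
  KSPSolve(ksp, b, x);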

I have successfully solved this transposed system by using a different
preconditioning matrix that includes the complete linearization of the
viscous terms (~4 times as many non-zeros in the PC matrix) together with
a much stronger preconditioner (ASM(2) and ILU(2), with a GMRES restart
of 200).
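
For completeness, the settings that do converge on the transposed system
differ from the baseline only as sketched here (Pt_visc is a placeholder
for the transposed PC matrix with the full viscous linearization; the
subdomain fill level becomes -sub_pc_factor_levels 2 at runtime):

  KSPGMRESSetRestart(ksp, 200);
  PCASMSetOverlap(pc, 2);
  KSPSetOperators(ksp, At, Pt_visc);  /* PC matrix now includes the viscous terms */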

My question is: why does the solution of the transposed system with the
same method perform so poorly? Is it normal that a vastly stronger
preconditioner is required to solve transposed systems?

Any suggestions would be greatly appreciated

Gaetan