[petsc-users] How to improve this solver for Navier-Stokes equation (based on fieldsplit and lsc)

Matthew Knepley knepley at gmail.com
Thu Jun 14 08:06:48 CDT 2012


On Thu, Jun 14, 2012 at 9:01 PM, Thomas Witkowski
<thomas.witkowski at tu-dresden.de> wrote:

> I am trying to implement an efficient solver for my FEM-based unsteady
> Navier-Stokes code. The scenario I consider is really simple: 2D FEM with
> Taylor-Hood elements, an equispaced grid, simple channel flow with prescribed
> inflow boundary conditions for the velocity, and no boundary conditions for
> the pressure. Because I want to


With no pressure BC, you need to specify the null space. Look at SNES ex62.
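
For reference, a minimal sketch in C of attaching the constant null space
(hedged: isPressure is a placeholder for whatever IS you passed to
PCFieldSplitSetIS(); PCFIELDSPLIT queries the IS for an object composed
under the name "nullspace"):

    /* Sketch: declare that constant pressures lie in the null space.   */
    /* isPressure is an assumed name for the pressure IS you registered */
    /* with PCFieldSplitSetIS().                                        */
    MatNullSpace nsp;
    MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, PETSC_NULL, &nsp);
    PetscObjectCompose((PetscObject) isPressure, "nullspace",
                       (PetscObject) nsp);
    MatNullSpaceDestroy(&nsp);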


> solve unsteady flows, at each time step an Oseen problem is solved
> (implicit Euler time discretization, "trivial" linearization). Using
> PETSc, I created a fieldsplit preconditioner that is configured in the
> following way:
>
> mpirun -np 2 ./navier_stokes init/channel_parallel.dat.2d \
> -ksp_type fgmres \
> -pc_type fieldsplit \
> -pc_fieldsplit_type SCHUR \
> -pc_fieldsplit_schur_fact_type FULL \
> -ksp_converged_reason \
> -ksp_monitor_true_residual \
> -fieldsplit_velocity_ksp_type preonly \
> -fieldsplit_velocity_pc_type lu \
> -fieldsplit_velocity_pc_factor_mat_solver_package mumps \
> -fieldsplit_pressure_ksp_type gmres \
> -fieldsplit_pressure_ksp_max_it 10 \
> -fieldsplit_pressure_ksp_rtol 1.0e-2 \
> -fieldsplit_pressure_pc_type lsc \
>

This makes no sense unless you specify an auxiliary operator, so just leave
it at jacobi. When you use LU for the velocity block, the full Schur
factorization will converge in 1 iterate. Since it doesn't, you have a
problem with the null space.
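
That is, for now run with

    -fieldsplit_pressure_pc_type jacobi

in place of the lsc options. If you later want LSC with an auxiliary
operator, one mechanism (a sketch only; Sp and Lp are placeholders for your
Schur preconditioning matrix and a separately assembled pressure Laplacian)
is to compose the matrices PCLSC queries for:

    /* Sketch: PCLSC looks on the Schur preconditioning matrix for      */
    /* operators composed under the names "LSC_L" and "LSC_Lp". Sp and  */
    /* Lp are assumed names, not objects from your code.                */
    PetscObjectCompose((PetscObject) Sp, "LSC_L",  (PetscObject) Lp);
    PetscObjectCompose((PetscObject) Sp, "LSC_Lp", (PetscObject) Lp);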

   Matt


> -fieldsplit_pressure_lsc_ksp_type bcgs \
> -fieldsplit_pressure_lsc_ksp_max_it 10 \
> -fieldsplit_pressure_lsc_ksp_rtol 1.0e-2 \
> -fieldsplit_pressure_lsc_pc_type hypre
>
> The direct solver for the velocity part is just for debugging and should
> be replaced when everything else works fine. The point is that I found
> this configuration to be not very efficient. It takes around 20 to 30
> iterations, which takes around 30 seconds on a very small problem size
> (around 20000 global unknowns for each velocity component and 5000 global
> unknowns for the pressure) on a very fast desktop CPU (a new Intel Xeon
> with 4 cores). Any hints for improvements?
>
> Thomas
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

