[petsc-users] PCFieldSplit gives different results for direct and iterative solver

Matthew Knepley knepley at gmail.com
Tue Mar 19 07:02:46 CDT 2019


On Tue, Mar 19, 2019 at 7:34 AM Y. Shidi <ys453 at cam.ac.uk> wrote:

> > Perhaps you need a better preconditioner for it.
>
> Hello Matt,
> Thank you for your help.
> Yes, I think I need a better preconditioner;
> it requires about 600 iterations for the Schur complement.
> Are there any tutorials for this? Or shall I just
> try different combinations and find the most suitable
> one?
>

For incompressible Stokes with a constant viscosity, the Schur complement
is spectrally equivalent to the weighted mass matrix

  (1/nu) M

Using that as the preconditioning matrix is the first step. It is more
complicated for Navier-Stokes, but starting with a mass matrix can help,
especially for slow flows. For very new approaches to NS, see something
like

    https://arxiv.org/abs/1810.03315
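
As a rough sketch of that first step (untested; Mp and nu here stand in
for your assembled pressure mass matrix and constant viscosity, which
PETSc does not build for you), you can hand the scaled mass matrix to the
fieldsplit PC with PCFieldSplitSetSchurPre():

  #include <petscksp.h>

  /* Use (1/nu) * Mp as the preconditioning matrix for the Schur
     complement solve. Mp and nu are assumed inputs from your
     discretization; note Mp is scaled in place. */
  PetscErrorCode SetMassMatrixSchurPre(KSP ksp, Mat Mp, PetscReal nu)
  {
    PC             pc;
    PetscErrorCode ierr;

    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = MatScale(Mp, 1.0/nu);CHKERRQ(ierr);  /* form (1/nu) M */
    ierr = PCFieldSplitSetSchurPre(pc, PC_FIELDSPLIT_SCHUR_PRE_USER,
                                   Mp);CHKERRQ(ierr);
    return 0;
  }

With that in place, even -fieldsplit_pressure_pc_type jacobi applied to
the mass matrix should cut the Schur iteration count substantially. If
you cannot easily assemble a mass matrix, -pc_fieldsplit_schur_precondition
selfp (which approximates the Schur complement using the diagonal of A00)
is a built-in alternative worth trying, though it is usually weaker.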

 Thanks,

  Matt


> Kind Regards,
> Shidi
>
> On 2019-03-19 11:17, Matthew Knepley wrote:
> > On Tue, Mar 19, 2019 at 6:59 AM Y. Shidi via petsc-users
> > <petsc-users at mcs.anl.gov> wrote:
> >
> >> Hello Barry,
> >>
> >> Thank you for your reply.
> >>
> >> I reduced the tolerances and got the desired solution.
> >>
> >> I am solving a multiphase incompressible Navier-Stokes problem, and
> >> currently we are using an augmented Lagrangian technique with Uzawa
> >> iteration. Because the problems are getting larger, we are also
> >> looking for other methods for solving the linear system.
> >> I followed the PCFieldSplit tutorial from:
> >> https://www.mcs.anl.gov/petsc/documentation/tutorials/MSITutorial.pdf
> >>
> >> However, it takes about 10s to finish one iteration and about 150s
> >> to complete one time step with 100k unknowns, which is long
> >> compared to our current solver's 10s per time step.
> >
> > The first thing to do is look at how many Schur complement iterations
> > you are doing:
> >
> >   -fieldsplit_pressure_ksp_monitor_true_residual
> >
> > Perhaps you need a better preconditioner for it.
> >
> >   Thanks,
> >
> >     Matt
> >
> >> I tried the following options:
> >> 1).
> >> -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur
> >> -pc_fieldsplit_schur_factorization_type lower
> >> -fieldsplit_velocity_ksp_type preonly
> >> -fieldsplit_velocity_pc_type gamg
> >> -fieldsplit_pressure_ksp_type minres
> >> -fieldsplit_pressure_pc_type none
> >> 2).
> >> -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur
> >> -pc_fieldsplit_schur_factorization_type diag
> >> -fieldsplit_velocity_ksp_type preonly
> >> -fieldsplit_velocity_pc_type gamg
> >> -fieldsplit_pressure_ksp_type minres
> >> -fieldsplit_pressure_pc_type none
> >> 3).
> >> -ksp_type fgmres -pc_type fieldsplit -pc_fieldsplit_type schur
> >> -pc_fieldsplit_schur_factorization_type full
> >> -fieldsplit_velocity_ksp_type preonly
> >> -fieldsplit_velocity_pc_type lu
> >> -fieldsplit_pressure_ksp_rtol 1e-10 -fieldsplit_pressure_pc_type
> >> jacobi
> >>
> >> So I am wondering if there are any other options that can help
> >> improve the PCFieldSplit performance.
> >>
> >> Kind Regards,
> >> Shidi
> >>
> >> On 2019-03-17 00:05, Smith, Barry F. wrote:
> >>>> On Mar 16, 2019, at 6:50 PM, Y. Shidi via petsc-users
> >>>> <petsc-users at mcs.anl.gov> wrote:
> >>>>
> >>>> Hello,
> >>>>
> >>>> I am trying to solve the incompressible Navier-Stokes equations
> >>>> with PCFieldSplit.
> >>>>
> >>>> The large matrix and vectors are formed by MatCreateNest()
> >>>> and VecCreateNest().
> >>>> The system is solved directly with the following options:
> >>>> -ksp_type fgmres
> >>>> -pc_type fieldsplit
> >>>> -pc_fieldsplit_type schur
> >>>> -pc_fieldsplit_schur_fact_type full
> >>>> -ksp_converged_reason
> >>>> -ksp_monitor_true_residual
> >>>> -fieldsplit_0_ksp_type preonly
> >>>> -fieldsplit_0_pc_type cholesky
> >>>> -fieldsplit_0_pc_factor_mat_solver_package mumps
> >>>> -mat_mumps_icntl_28 2
> >>>> -mat_mumps_icntl_29 2
> >>>> -fieldsplit_1_ksp_type preonly
> >>>> -fieldsplit_1_pc_type jacobi
> >>>> Output:
> >>>> 0 KSP unpreconditioned resid norm 1.214252932161e+04 true resid norm 1.214252932161e+04 ||r(i)||/||b|| 1.000000000000e+00
> >>>> 1 KSP unpreconditioned resid norm 1.642782495109e-02 true resid norm 1.642782495109e-02 ||r(i)||/||b|| 1.352916226594e-06
> >>>> Linear solve converged due to CONVERGED_RTOL iterations 1
> >>>>
> >>>> The system is solved iteratively with the following options:
> >>>> -ksp_type fgmres
> >>>> -pc_type fieldsplit
> >>>> -pc_fieldsplit_type schur
> >>>> -pc_fieldsplit_schur_factorization_type diag
> >>>> -ksp_converged_reason
> >>>> -ksp_monitor_true_residual
> >>>> -fieldsplit_0_ksp_type preonly
> >>>> -fieldsplit_0_pc_type gamg
> >>>> -fieldsplit_1_ksp_type minres
> >>>> -fieldsplit_1_pc_type none
> >>>> Output:
> >>>> 0 KSP unpreconditioned resid norm 1.214252932161e+04 true resid norm 1.214252932161e+04 ||r(i)||/||b|| 1.000000000000e+00
> >>>> 1 KSP unpreconditioned resid norm 2.184037364915e+02 true resid norm 2.184037364915e+02 ||r(i)||/||b|| 1.798667565109e-02
> >>>> 2 KSP unpreconditioned resid norm 2.120097409539e+02 true resid norm 2.120097409635e+02 ||r(i)||/||b|| 1.746009709742e-02
> >>>> 3 KSP unpreconditioned resid norm 4.364091658268e+01 true resid norm 4.364091658575e+01 ||r(i)||/||b|| 3.594054865332e-03
> >>>> 4 KSP unpreconditioned resid norm 2.632671796885e+00 true resid norm 2.632671797020e+00 ||r(i)||/||b|| 2.168141189773e-04
> >>>> 5 KSP unpreconditioned resid norm 2.209213998004e+00 true resid norm 2.209213980361e+00 ||r(i)||/||b|| 1.819401808180e-04
> >>>> 6 KSP unpreconditioned resid norm 4.683775185840e-01 true resid norm 4.683775085753e-01 ||r(i)||/||b|| 3.857330677735e-05
> >>>> 7 KSP unpreconditioned resid norm 3.042503284736e-02 true resid norm 3.042503349258e-02 ||r(i)||/||b|| 2.505658638883e-06
> >>>>
> >>>>
> >>>> Both methods give answers, but they are different
> >>>
> >>> What do you mean the answers are different? Do you mean the
> >>> solution x from KSPSolve() is different? How are you calculating
> >>> their difference, and how different are they?
> >>>
> >>> Since the solutions are only approximate (the true residual norm
> >>> is around 1.642782495109e-02 and 3.042503349258e-02 for the two
> >>> different solvers), there will only be a certain number of
> >>> identical digits in the two solutions (which depends on the
> >>> condition number of the original matrix). You can run both
> >>> solvers with -ksp_rtol 1.e-12 and then (assuming everything is
> >>> working correctly) the two solutions will be much closer to each
> >>> other.
> >>>
> >>> Barry
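
(For concreteness, a minimal sketch of one way to measure the difference
Barry asks about, assuming x1 and x2 are the two solution Vecs with
identical parallel layouts; the helper name is made up for illustration:)

  #include <petscvec.h>

  /* Relative difference ||x1 - x2|| / ||x1|| between two solutions. */
  PetscErrorCode SolutionDifference(Vec x1, Vec x2, PetscReal *reldiff)
  {
    Vec            d;
    PetscReal      nd, n1;
    PetscErrorCode ierr;

    ierr = VecDuplicate(x1, &d);CHKERRQ(ierr);
    ierr = VecWAXPY(d, -1.0, x2, x1);CHKERRQ(ierr);  /* d = x1 - x2 */
    ierr = VecNorm(d, NORM_2, &nd);CHKERRQ(ierr);
    ierr = VecNorm(x1, NORM_2, &n1);CHKERRQ(ierr);
    *reldiff = nd/n1;
    ierr = VecDestroy(&d);CHKERRQ(ierr);
    return 0;
  }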
> >>>
> >>>> so I am wondering if it is possible that you can help me figure
> >>>> out which part I am doing wrong.
> >>>>
> >>>> Thank you for your time.
> >>>>
> >>>> Kind Regards,
> >>>> Shidi
> >
> > --
> >
> > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which
> > their experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
> >
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/