[petsc-users] GMRES stability
Orxan Shibliyev
orxan.shibli at gmail.com
Mon Mar 9 17:23:10 CDT 2015
I use a proactive *point* GS, which updates the RHS of the neighbours instead
of updating the variable itself in subsequent iterations. I use natural
ordering. I am sure that the matrices are the same for both solvers.
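A minimal sketch of how an equivalent single forward point GS sweep (natural
ordering, omega = 1) could be set up in PETSc on one process; the function and
variable names here are placeholders, and it assumes the matrix is stored as
AIJ, since with the bs=5 BAIJ matrix shown in the -ksp_view output quoted
below PETSc's SOR sweeps over 5x5 blocks rather than individual points:

#include <petscksp.h>

/* Configure Richardson + SOR so each preconditioner application is one
   forward point Gauss-Seidel sweep in natural ordering. */
PetscErrorCode SolveWithForwardGS(Mat A, Vec b, Vec x)
{
  PetscErrorCode ierr;
  KSP            ksp;
  PC             pc;

  PetscFunctionBeginUser;
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPRICHARDSON);CHKERRQ(ierr);
  ierr = KSPRichardsonSetScale(ksp, 1.0);CHKERRQ(ierr);          /* no damping */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCSOR);CHKERRQ(ierr);
  ierr = PCSORSetOmega(pc, 1.0);CHKERRQ(ierr);                   /* omega = 1 is plain GS */
  ierr = PCSORSetSymmetric(pc, SOR_FORWARD_SWEEP);CHKERRQ(ierr); /* forward sweep only */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* -pc_sor_* options can still override */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The same setup can also be selected from the command line with something like
-ksp_type richardson -ksp_richardson_scale 1.0 -pc_type sor -pc_sor_forward
-pc_sor_omega 1.0.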
On Sun, Mar 1, 2015 at 4:48 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> Ok, we need to understand what is algorithmically different between what
> your code is doing and PETSc is doing.
>
> From the output below, PETSc is running a symmetric point block SOR with a block
> size of 5. Are you using a point or point block GS? Are you using any
> particular ordering of the unknowns in your GS, or just natural ordering?
> Are you sure the matrices are the same between the two codes? Print them
> out for a tiny grid and compare.
>
> Barry
>
> > On Mar 1, 2015, at 4:33 PM, Orxan Shibliyev <orxan.shibli at gmail.com> wrote:
> >
> > I don't run in parallel. I tried -ksp_richardson -pc_type sor but still
> > cannot get over CFL=5 (with my own GS code CFL=40 was converging).
> > Also, lowering the damping factor does not help. The default number of
> > iterations (10000) does not help either. -ksp_view outputs:
> >
> > KSP Object: 1 MPI processes
> > type: richardson
> > Richardson: damping factor=0.5
> > maximum iterations=10, initial guess is zero
> > tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> > left preconditioning
> > using PRECONDITIONED norm type for convergence test
> > PC Object: 1 MPI processes
> > type: sor
> > SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
> > linear system matrix = precond matrix:
> > Mat Object: 1 MPI processes
> > type: seqbaij
> > rows=13250, cols=13250, bs=5
> > total: nonzeros=261150, allocated nonzeros=1.325e+06
> > total number of mallocs used during MatSetValues calls =0
> > block size is 5
> >
> > On Fri, Feb 27, 2015 at 4:30 AM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >>
> >> Ok, please provide the rest of the information I asked for.
> >>
> >> Barry
> >>
> >>> On Feb 27, 2015, at 2:33 AM, Orxan Shibliyev <orxan.shibli at gmail.com> wrote:
> >>>
> >>> No. It does not converge at all; in other words, it diverges.
> >>>
> >>> On Thu, Feb 26, 2015 at 9:36 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
> >>>>
> >>>> By stability I assume you mean that GMRES does not converge (or converges too slowly)?
> >>>>
> >>>> The way to improve GMRES convergence is with a preconditioner better
> >>>> suited to your problem. By default PETSc uses GMRES with a block Jacobi
> >>>> preconditioner with one block per process and ILU(0) on each block. For
> >>>> some problems this is fine, but for many problems it will give bad
> >>>> convergence.
> >>>>
> >>>> What do you get for -ksp_view (are you using the default?) Are you
> >>>> running yet in parallel?
> >>>>
> >>>> As a test on one process you can use GS in PETSc as the
> >>>> preconditioner and make sure you get similar convergence to your code. For
> >>>> example -ksp_richardson -pc_type sor on one processor will give you a GS
> >>>> solver.
> >>>>
> >>>> Once we know a bit more about your problem we can suggest better
> >>>> preconditioners.
> >>>>
> >>>> Barry
> >>>>
> >>>>
> >>>>> On Feb 26, 2015, at 10:25 PM, Orxan Shibliyev <orxan.shibli at gmail.com> wrote:
> >>>>>
> >>>>> Hi
> >>>>>
> >>>>> I tried to solve Ax=b with my own Gauss-Seidel code and PETSc's GMRES.
> >>>>> With my GS, for a steady-state problem I can set CFL=40, and for the
> >>>>> unsteady case I can set dt=0.1. However, for GMRES I can't set CFL higher
> >>>>> than 5, or dt higher than 0.00001 for the unsteady case. I need GMRES for
> >>>>> parallel computations, so I cannot use GS for this purpose. Is there a
> >>>>> way to improve the stability of GMRES?
> >>>>
> >>
>
>
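A minimal sketch of the matrix comparison Barry suggests above (dump the
assembled matrix for a tiny grid and diff it against the matrix built by the
other code); the function name and output file name are placeholders:

#include <petscmat.h>

PetscErrorCode DumpMatrixForComparison(Mat A)
{
  PetscErrorCode ierr;
  PetscViewer    viewer;

  PetscFunctionBeginUser;
  /* Human-readable dump to stdout. */
  ierr = MatView(A, PETSC_VIEWER_STDOUT_WORLD);CHKERRQ(ierr);
  /* Also write an ASCII file that can be diffed against the other code's matrix. */
  ierr = PetscViewerASCIIOpen(PETSC_COMM_WORLD, "A_petsc.txt", &viewer);CHKERRQ(ierr);
  ierr = MatView(A, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

The runtime option -mat_view should produce a similar dump without any code
changes.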