[petsc-users] ksppreonly question

Shao-Ching Huang huangsc at gmail.com
Sat Sep 22 00:33:13 CDT 2012


Jed,

Is this equivalent to setting up a SNES where the (constant) Jacobian
is my A1 matrix, similar to ex35.c in the snes directory (though with a
different PC type)?
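
For reference, a rough (untested) sketch of that kind of SNES setup, where
FormFunction and FormJacobian are placeholder callbacks and A1, x, r are
created elsewhere; FormJacobian would just hand back the same A1 every time:

    #include <petscsnes.h>

    SNES snes;
    Mat  A1;      /* constant matrix built from part (1) of the discretization */
    Vec  x, r;

    SNESCreate(PETSC_COMM_WORLD,&snes);
    SNESSetFunction(snes,r,FormFunction,NULL);      /* residual uses the full operator */
    SNESSetJacobian(snes,A1,A1,FormJacobian,NULL);  /* FormJacobian leaves A1 unchanged */
    SNESSetFromOptions(snes);                       /* e.g. -pc_type lu */
    SNESSolve(snes,NULL,x);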

Thanks.

Shao-Ching

On Fri, Sep 21, 2012 at 4:18 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> On Fri, Sep 21, 2012 at 6:15 PM, Shao-Ching Huang <huangsc at gmail.com> wrote:
>>
>> I am reading this page,
>>
>> http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPRICHARDSON.html
>> It says:
>>
>> "This method often (usually) will not converge unless scale is very
>> small.
>
>
> That statement applies if you use a general preconditioner that does not
> bound the preconditioned spectrum. With LU, you are fine with a scale of 1
> (or close).
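
For example, a rough (untested) sketch of that combination, assuming ksp has
already been created and its operators set; the scale can equivalently be set
with -ksp_richardson_scale:

    PC pc;

    KSPSetType(ksp,KSPRICHARDSON);
    KSPRichardsonSetScale(ksp,1.0);  /* fine here because LU bounds the preconditioned spectrum */
    KSPGetPC(ksp,&pc);
    PCSetType(pc,PCLU);
    KSPSetFromOptions(ksp);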
>
>>
>> It is described in "
>>
>> and a reference seems to be missing there.
>
>
> The reference is further down the page. Thanks for pointing out the anomaly.
>
>>
>>
>>
>> On Fri, Sep 21, 2012 at 4:07 PM, Shao-Ching Huang <huangsc at gmail.com>
>> wrote:
>> > I will try the Richardson procedure Jed suggested. Thank you!
>> >
>> >
>> > On Fri, Sep 21, 2012 at 3:50 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>> >>
>> >>   Ok, so it is using a FULL LU factorization of A1.  Hence with what I
>> >> outlined below you would use -ksp_type preonly -pc_type lu
>> >>
>> >>    If you reorganize the iteration, then in exact arithmetic it is what
>> >> we in PETSc call Richardson's method with the preconditioner defined from M
>> >> (the LU of M), so I was wrong; you can do as Jed suggested,
>> >> KSPSetOperators(ksp,A,A1,….), and run with -ksp_type richardson to
>> >> mimic the old algorithm. Simply switch to -ksp_type gmres and you have the
>> >> late-80s version of the algorithm.
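
For example, a rough (untested) sketch of that setup, assuming A and A1 are
already assembled and using an illustrative MatStructure flag:

    KSPSetOperators(ksp,A,A1,SAME_NONZERO_PATTERN);
    KSPSetFromOptions(ksp);
    KSPSolve(ksp,b,x);

    /* run-time choices:
       -ksp_type richardson -pc_type lu   mimics the old defect-correction iteration
       -ksp_type gmres      -pc_type lu   the "late-80s" variant mentioned above   */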
>> >>
>> >>
>> >>
>> >>    Barry
>> >>
>> >>
>> >> On Sep 21, 2012, at 5:30 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>> >>
>> >>>
>> >>> On Sep 21, 2012, at 5:26 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> >>>
>> >>>> On Fri, Sep 21, 2012 at 5:21 PM, Shao-Ching Huang <huangsc at gmail.com>
>> >>>> wrote:
>> >>>> In this particular finite volume discretization, the flux normal to a
>> >>>> face involves the cell-center values on each side of the face (1),
>> >>>> plus values from neighboring nodes (2) [due to non-orthogonal mesh
>> >>>> cell shape]. The A1 part includes coefficients from (1). A2 includes
>> >>>> those in (2).
>> >>>>
>> >>>> 1. Call KSPSetOperators(ksp,A,A1,flag)
>> >>>>
>> >>>> You can make A in the above a MATSHELL that applies A1 + A2
>> >>>> matrix-free (or just the A2 part).
>> >>>>
>> >>>> 2. Use any Krylov method. The specific method -ksp_type richardson
>> >>>> will do the defect-correction version of what you have written, but a real
>> >>>> Krylov method will almost certainly perform much better. Note that A1^{-1}
>> >>>> will be applied using whatever method you choose (via -pc_type). A V-cycle
>> >>>> of algebraic multigrid should work very well.
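
For example, a rough (untested) sketch of the MATSHELL variant, where AppCtx,
ApplyA, nlocal, and N are placeholders and A1, A2 are already assembled:

    #include <petscksp.h>

    typedef struct { Mat A1, A2; } AppCtx;

    /* y = (A1 + A2) x, applied without ever assembling A1 + A2 */
    PetscErrorCode ApplyA(Mat A,Vec x,Vec y)
    {
      AppCtx *ctx;
      MatShellGetContext(A,(void**)&ctx);
      MatMult(ctx->A1,x,y);
      MatMultAdd(ctx->A2,x,y,y);   /* y = y + A2 x */
      return 0;
    }

    /* inside main(), after A1 and A2 are assembled: */
    Mat    Ashell;
    AppCtx ctx;

    ctx.A1 = A1;  ctx.A2 = A2;
    MatCreateShell(PETSC_COMM_WORLD,nlocal,nlocal,N,N,&ctx,&Ashell);
    MatShellSetOperation(Ashell,MATOP_MULT,(void(*)(void))ApplyA);
    KSPSetOperators(ksp,Ashell,A1,SAME_NONZERO_PATTERN);
    KSPSetFromOptions(ksp);        /* e.g. -ksp_type gmres -pc_type gamg */
    KSPSolve(ksp,b,x);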
>> >>>
>> >>>   To mimic the exact old algorithm for comparison purposes,
>> >>>  I don't think you can get this directly with KSP; you'll need to
>> >>> manage the "outer" iteration yourself, something like
>> >>>
>> >>>     for (n=0; n<Nmax; n++) {
>> >>>          MatMultAdd(A2,x,b,c);   /* this A2 has the opposite sign of your A2 above */
>> >>>          KSPSolve(ksp,c,x);
>> >>>     }
>> >>> Your KSP solve could use any solver you like (what does the old code
>> >>> use? You should use the same thing for comparison purposes.)
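
For example, a rough (untested) sketch of that outer loop with the setup
filled in, assuming ksp has already been set up with A1 as its operator
(e.g. -ksp_type preonly -pc_type lu) and nmax is a placeholder iteration count:

    Vec      c;
    PetscInt n;

    VecDuplicate(b,&c);
    MatScale(A2,-1.0);                /* A2 now holds the negated coefficients */
    for (n=0; n<nmax; n++) {
      MatMultAdd(A2,x,b,c);           /* c = b - (original A2) x */
      KSPSolve(ksp,c,x);              /* x = A1^{-1} c */
      /* optionally compute || b - (A1 + original A2) x || here and stop when it is small */
    }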
>> >>>
>> >>> Of course, this is only for comparison purposes; no one in 2012, except
>> >>> in a legacy code, would use such a primitive nested solver.
>> >>>
>> >>>  Barry
>> >>>
>> >>
>
>

