[petsc-users] ksppreonly question
Shao-Ching Huang
huangsc at gmail.com
Fri Sep 21 18:15:37 CDT 2012
I am reading this page,
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/KSP/KSPRICHARDSON.html
It says:
"This method often (usually) will not converge unless scale is very
small. It is described in "
and a reference seems to be missing there.
On Fri, Sep 21, 2012 at 4:07 PM, Shao-Ching Huang <huangsc at gmail.com> wrote:
> I will try the Richardson procedure Jed suggested. Thank you!
>
>
> On Fri, Sep 21, 2012 at 3:50 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>> Ok, so it is using a FULL LU factorization of A1. Hence with what I outlined below you would use -ksp_type preonly -pc_type lu
>>
>> If you reorganize the iteration, then in exact arithmetic it is what we in PETSc call Richardson's method with the preconditioner defined from M (the LU of M), so I was wrong; you can do as Jed suggested,
>> KSPSetOperators(ksp,A,A1,….) and run with -ksp_type richardson to mimic the old algorithm. Simply switch to -ksp_type gmres and you have the late-'80s version of the algorithm.
>>
>>
>>
>> Barry
>>
>>
>> On Sep 21, 2012, at 5:30 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>
>>>
>>> On Sep 21, 2012, at 5:26 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>>>
>>>> On Fri, Sep 21, 2012 at 5:21 PM, Shao-Ching Huang <huangsc at gmail.com> wrote:
>>>> In this particular finite volume discretization, the flux normal to a
>>>> face involves the cell-center values on each side of the face (1),
>>>> plus values from neighboring nodes (2) [due to non-orthogonal mesh
>>>> cell shape]. The A1 part includes coefficients from (1). A2 includes
>>>> those in (2).
>>>>
>>>> 1. Call KSPSetOperators(ksp,A,A1,flag)
>>>>
>>>> You can make A in the above a MATSHELL that applies A1 + A2 matrix-free (or just the A2 part).
>>>>
>>>> 2. Use any Krylov method. The specific method -ksp_type richardson will do the defect-correction version of what you have written, but a real Krylov method will almost certainly perform much better. Note that A1^{-1} will be applied using whatever method you choose (via -pc_type). A V-cycle of algebraic multigrid should work very well.
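For reference, here is a minimal C sketch of the setup described above, assuming the 2012-era four-argument KSPSetOperators(): a MATSHELL applies A = A1 + A2 matrix-free while A1 alone is used to build the preconditioner. The names SplitCtx, MatMult_Split and SolveSplit are made up for illustration; A1 and A2 are assumed already assembled, and error checking is omitted.

  #include <petscksp.h>

  /* Illustrative sketch only: solve (A1 + A2) x = b with the full operator
     applied matrix-free and the preconditioner built from A1.             */
  typedef struct { Mat A1, A2; } SplitCtx;

  static PetscErrorCode MatMult_Split(Mat Ashell, Vec x, Vec y)
  {
    SplitCtx *ctx;
    MatShellGetContext(Ashell, (void **)&ctx);
    MatMult(ctx->A1, x, y);        /* y  = A1 x */
    MatMultAdd(ctx->A2, x, y, y);  /* y += A2 x */
    return 0;
  }

  PetscErrorCode SolveSplit(Mat A1, Mat A2, Vec b, Vec x)
  {
    SplitCtx ctx = {A1, A2};
    Mat      A;
    KSP      ksp;
    PetscInt m, n;

    MatGetLocalSize(A1, &m, &n);
    MatCreateShell(PETSC_COMM_WORLD, m, n, PETSC_DETERMINE, PETSC_DETERMINE, &ctx, &A);
    MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MatMult_Split);

    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A1, SAME_NONZERO_PATTERN);  /* A1 defines the preconditioner */
    KSPSetFromOptions(ksp);  /* e.g. -ksp_type richardson or gmres, -pc_type lu */
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp);
    MatDestroy(&A);
    return 0;
  }

Run with -ksp_type richardson -pc_type lu to get the defect-correction form, or with -ksp_type gmres for a proper Krylov method, as suggested above.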
>>>
>>> To mimic the exact old algorithm for comparison purposes,
>>> I don't think you can get this directly with KSP; you'll need to manage the "outer" iteration yourself, something like
>>>
>>> for (n=0; n<Nmax ….) {
>>>   MatMultAdd(A2,x,b,c);   /* this A2 is the opposite sign of your A2 above */
>>>   KSPSolve(ksp,c,x);
>>> }
>>> Your KSP solve could use any solver you like (what does the old code use? You should use the same thing for comparison purposes.)
>>>
>>> Of course, this is only for comparison purposes; no one in 2012, except in a legacy code, would use such a primitive nested solver.
>>>
>>> Barry
>>>
>>
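For completeness, a sketch of the "outer" defect-correction loop Barry outlines above. It assumes (these are not from the thread) that A2neg already holds -A2, that ksp was set up with KSPSetOperators on A1 (e.g. -ksp_type preonly -pc_type lu), and that Nmax is user-chosen; the name DefectCorrectionLoop is made up, and error checking and a convergence test are omitted.

  #include <petscksp.h>

  /* Sketch of the comparison-only outer iteration; A2neg holds -A2 and
     ksp solves with A1 (e.g. -ksp_type preonly -pc_type lu).            */
  PetscErrorCode DefectCorrectionLoop(KSP ksp, Mat A2neg, Vec b, Vec x, PetscInt Nmax)
  {
    Vec      c;
    PetscInt n;

    VecDuplicate(b, &c);
    for (n = 0; n < Nmax; n++) {
      MatMultAdd(A2neg, x, b, c);  /* c = b - A2 x                        */
      KSPSolve(ksp, c, x);         /* x <- A1^{-1} c, i.e. solve A1 x = c */
      /* a real code would also check a residual or update norm here      */
    }
    VecDestroy(&c);
    return 0;
  }

As Barry says, this is only useful for comparing against the legacy code; a single KSP on the full operator (e.g. -ksp_type gmres) should perform much better.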