<div class="gmail_quote">On Thu, Mar 8, 2012 at 12:57, Abdul Hanan Sheikh <span dir="ltr"><<a href="mailto:hanangul12@yahoo.co.uk">hanangul12@yahoo.co.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div><span>I don't want to try SNES, since my problem is linear,</span></div></blockquote><div><br></div><div>Note that you can still use SNES for linear problems. (I prefer the interface; you can choose -snes_type ksponly so there is no extra overhead, but use whatever you like.)</div>
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><span> but I successfully applied KSP</span></div><div><span>preconditioned with PCMG (not kaskade, but multiplicative, for certain reasons).</span></div>
<div><span>I make the pre-smoother a dummy by setting the KSP_PRE context to PREONLY along with PCNONE. <br></span></div></blockquote><div><br></div><div>You can just set the number of pre-smoothing steps to 0.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div><span></span></div><div><span>The CGC and post-smoother serve my purpose of applying the preconditioner</span></div><div><span>Prec = </span><span>I - A*P*(A_H)^-1 * R, with KSP_POST as RICHARDSON along with PCNONE. <br></span></div>
</blockquote><div><br></div><div>Read my last message again. You either want the additive</div><div><br></div><div>Prec = I + C</div><div><br></div><div>(where C = P*A_H^{-1}*R is the coarse solve) or the multiplicative</div>
<div><br></div><div>Prec = C + S (I - A C)</div><div><br></div><div>perhaps with S = I (or some scaling factor). I don't think your definition of Prec makes sense or is the operation that you intend to be applying.</div>
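The two forms above can be sketched numerically. This is a minimal illustration, not PETSc code: the 1D Poisson matrix A, the prolongation P, the restriction R = P^T, and the choice S = I are all assumptions made for demonstration; C = P A_H^{-1} R uses the Galerkin coarse operator A_H = R A P.

```python
# Sketch (not PETSc code): additive I + C vs multiplicative C + S(I - A C)
# two-grid preconditioners on a tiny 1D Poisson problem. All matrices here
# are illustrative assumptions.

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

A = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]   # fine-grid operator (1D Poisson, 3 points)
P = [0.5, 1.0, 0.5]      # prolongation from one coarse point
R = P                    # restriction R = P^T (one row)

# Galerkin coarse operator A_H = R A P (a 1x1 matrix, i.e. a scalar here)
A_H = sum(r * ax for r, ax in zip(R, matvec(A, P)))

def coarse_correct(b):
    """C b = P A_H^{-1} R b, the coarse-grid correction."""
    y = sum(r * bi for r, bi in zip(R, b)) / A_H
    return [p * y for p in P]

def additive(b):
    """Prec = I + C."""
    return [bi + ci for bi, ci in zip(b, coarse_correct(b))]

def multiplicative(b):
    """Prec = C + S (I - A C), with the smoother S = I."""
    Cb = coarse_correct(b)
    resid = [bi - ai for bi, ai in zip(b, matvec(A, Cb))]
    return [c + r for c, r in zip(Cb, resid)]   # S = I applied to resid

b = [1.0, 2.0, 3.0]
post = [bi - ai for bi, ai in zip(b, matvec(A, coarse_correct(b)))]
# Galerkin property: the residual after the coarse correction has no
# coarse-grid component, i.e. R (b - A C b) = 0.
print(sum(r * pi for r, pi in zip(R, post)))
```

The point of the check at the end: with the Galerkin coarse operator, the coarse correction exactly removes the coarse-grid part of the residual; what remains is precisely what the smoother S must handle.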
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div><span></span></div><div><span>I hope PCNONE makes the smoother S = I in the framework you suggested earlier, i.e.<br>
</span></div><div>C b + S (b - A C b), where C is read as CGC.<br></div><div>It seems to work well with 2 levels, but
 only when I force RICHARDSON max_it to 0. To my understanding, it should work with RICHARDSON max_it = 1.</div></blockquote><div><br></div><div>Read through my last message again. I explained that this does not work: it is caused by the destabilizing effect of the extra multiply without a smoother. You can fix it for easy problems with -mg_levels_ksp_richardson_self_scale.</div>
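The destabilizing effect is easy to reproduce. Below is an illustrative sketch (again not PETSc code, using the same assumed tiny 1D Poisson setup): Richardson iteration preconditioned by the unsmoothed multiplicative cycle M = C + (I - A C) diverges, because the error-propagation operator I - M A picks up an eigenvalue of magnitude greater than one.

```python
# Sketch (illustrative, not PETSc): Richardson with the multiplicative
# cycle M = C + (I - A C), i.e. smoother S = I, can diverge. The matrices
# are assumptions for demonstration.

def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

A = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]]
P = [0.5, 1.0, 0.5]
R = P
A_H = sum(r * ax for r, ax in zip(R, matvec(A, P)))  # Galerkin coarse op

def prec(b):
    """M b = C b + (b - A C b): multiplicative cycle with S = I."""
    y = sum(r * bi for r, bi in zip(R, b)) / A_H
    Cb = [p * y for p in P]
    return [c + bi - ai for c, bi, ai in zip(Cb, b, matvec(A, Cb))]

# Richardson on A x = 0: the error obeys e <- (I - M A) e.
e = [1.0, 0.0, 0.0]
norms = []
for _ in range(8):
    e = [ei - mi for ei, mi in zip(e, prec(matvec(A, e)))]
    norms.append(max(abs(v) for v in e))
print(norms)  # the error norm doubles every iteration: divergence
```

With a real smoother (e.g. damped Jacobi instead of S = I), or with the Richardson self-scaling mentioned above, the same cycle contracts instead of amplifying the error.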
<div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>What if I want to approximate all my coarse matrices with some Krylov iteration?</div></blockquote><div>
<br></div><div>The methods on each level are independent; you can set them with -mg_coarse_ksp_type gmres -mg_levels_1_ksp_type cg -mg_levels_1_ksp_max_it 100 -mg_levels_2_ksp_type minres ... </div></div><br>