<div dir="ltr">DO you mean the option pc_fieldsplit_block_size? In this thread:<div><br></div><div><a href="http://petsc-users.mcs.anl.narkive.com/qSHIOFhh/fieldsplit-error">http://petsc-users.mcs.anl.narkive.com/qSHIOFhh/fieldsplit-error</a><br></div><div><br></div><div>It assumes you have a constant number of fields at each grid point, am I right? However, my field split is not constant, like</div><div>[u1_x u1_y u1_z p_1 u2_x u2_y u2_z u3_x u3_y u3_z p_3 u4_x u4_y u4_z]</div><div><br></div><div>Subsequently the fieldsplit is</div><div>[u1_x u1_y u1_z u2_x u2_y u2_z u3_x u3_y u3_z u4_x u4_y u4_z]<br></div><div>[p_1 p_3]</div><div><br></div><div>Then what is the option to set block size 3 for split 0?</div><div><br></div><div><div><div>Sorry, I search several forum threads but cannot figure out the options as you said.</div><div><br></div></div></div><div><br class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class=""><div><br></div></span><div>You can still do that. It can be done with options once the decomposition is working. Its true that these solvers</div><div>work better with the block size set. However, if its the P2 Laplacian it does not really matter since its uncoupled.</div><span class=""><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"></div></blockquote></span></div></div></div></blockquote></div><div>Yes, I agree it's uncoupled with the other field, but the crucial factor defining the quality of the block preconditioner is the approximate inversion of individual block. I would merely try block Jacobi first, because it's quite simple. Nevertheless, fieldsplit implements other nice things, like Schur complement, etc.</div><div><br></div><div><br></div><div class="gmail_extra"><div><div class="gmail_signature"><div dir="ltr">Giang</div><div dir="ltr"><br></div><div dir="ltr"><br></div></div></div>
<br><div class="gmail_quote">On Fri, Jan 22, 2016 at 11:15 AM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="">On Fri, Jan 22, 2016 at 3:40 AM, Hoang Giang Bui <span dir="ltr"><<a href="mailto:hgbk2008@gmail.com" target="_blank">hgbk2008@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr">Hi Matt<div>I would rather like to set the block size for block P2 too. Why?</div><div><br></div><div>Because in one of my test (for problem involves only [u_x u_y u_z]), the gmres + Hypre AMG converges in 50 steps with block size 3, whereby it increases to 140 if block size is 1 (see attached files).</div></div></blockquote><div><br></div></span><div>You can still do that. It can be done with options once the decomposition is working. Its true that these solvers</div><div>work better with the block size set. However, if its the P2 Laplacian it does not really matter since its uncoupled.</div><span class=""><div><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div>This gives me the impression that AMG will give better inversion for "P2" block if I can set its block size to 3. Of course it's still an hypothesis but worth to try.</div><div><br></div><div>Another question: In one of the Petsc presentation, you said the Hypre AMG does not scale well, because set up cost amortize the iterations. How is it quantified? and what is the memory overhead?</div></div></blockquote><div><br></div></span><div>I said the Hypre setup cost is not scalable, but it can be amortized over the iterations. You can quantify this</div><div>just by looking at the PCSetUp time as your increase the number of processes. I don't think they have a good</div><div>model for the memory usage, and if they do, I do not know what it is. However, generally Hypre takes more</div><div>memory than the agglomeration MG like ML or GAMG.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><span class=""><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><br clear="all"><div><div><div dir="ltr">Giang</div></div></div>
<br><div class="gmail_quote">On Mon, Jan 18, 2016 at 5:25 PM, Jed Brown <span dir="ltr"><<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span>Hoang Giang Bui <<a href="mailto:hgbk2008@gmail.com" target="_blank">hgbk2008@gmail.com</a>> writes:<br>
<br>
</span><span>> Why P2/P2 is not for co-located discretization?<br>
<br>
</span>Matt typed "P2/P2" when me meant "P2/P1".<br>
</blockquote></div><br></div></div>
>
> --
> What most experimenters take for granted before they begin their experiments
> is infinitely more interesting than any results to which their experiments
> lead.
> -- Norbert Wiener
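P.P.S. This is the timing sketch mentioned above, regarding Matt's point about quantifying the Hypre setup cost via the PCSetUp time as the process count grows. It is only a rough sketch: I am assuming that timing PCSetUp explicitly reports the same quantity as the PCSetUp row of -log_summary / -log_view, and the helper name is just illustrative.

#include <petscksp.h>

/* Rough sketch: measure the BoomerAMG setup cost in isolation. The same
 * number also appears in the PCSetUp row of -log_summary / -log_view,
 * so no code change is strictly required. */
static PetscErrorCode TimeHypreSetup(KSP ksp)
{
  PC             pc;
  PetscLogDouble t0, t1;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCHYPRE);CHKERRQ(ierr);
  ierr = PCHYPRESetType(pc, "boomeramg");CHKERRQ(ierr);

  ierr = PetscTime(&t0);CHKERRQ(ierr);
  ierr = PCSetUp(pc);CHKERRQ(ierr);   /* coarsening and interpolation are built here */
  ierr = PetscTime(&t1);CHKERRQ(ierr);

  ierr = PetscPrintf(PetscObjectComm((PetscObject)ksp),
                     "Hypre PCSetUp time: %g s\n", (double)(t1 - t0));CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

Repeating this at, say, 2, 8, and 32 processes with the same problem size per process should show how the setup time grows, which I understand is what you mean by quantifying it.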