On Thu, Sep 23, 2010 at 10:41 AM, Leo van Kampenhout <lvankampenhout@gmail.com> wrote:
> Thank you both. The solvers are the same; I double-checked that. The type of partitioning could play a role here, since I am indeed using a DA. But why, for example, does a run on 2 processors need more iterations than one on 8? Both use the DA partitioning in that case. To specify subdomains manually, where do I start?
It is an open secret that Krylov methods are incredibly sensitive to orderings, especially when combined with incomplete factorization preconditioners. Since the ordering depends on the division among processes (see the tutorials for a picture of the "petsc" ordering, which is just contiguous per process), you can get non-intuitive effects.

   Matt
> Leo
>
> 2010/9/23 Jed Brown <jed@59a2.org>:
>
>> The latter is using the partition provided by the DA (or the user), which looks to be better than the one computed in the serial run. If you have ParMETIS, it will be used by PCBJACOBI; otherwise the partition is naive. You can specify subdomains manually if you want.
>>
>> Jed
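
As a starting point for specifying the subdomains manually, a minimal sketch using PCBJacobiSetTotalBlocks(); the block count and sizes are invented for illustration (they must sum to the global number of rows), and the call has to come before KSPSetUp()/KSPSolve():

#include <petscksp.h>

/* Define the block Jacobi subdomains by hand instead of taking the default
   one-block-per-process split.  The sizes in lens[] are placeholders. */
PetscErrorCode SetManualBlocks(KSP ksp)
{
  PetscErrorCode ierr;
  PC             pc;
  PetscInt       lens[8] = {1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000};

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCBJACOBI);CHKERRQ(ierr);
  /* 8 blocks over the whole matrix; passing NULL instead of lens lets PETSc
     split the rows evenly.  PCBJacobiSetLocalBlocks() is the per-process
     variant. */
  ierr = PCBJacobiSetTotalBlocks(pc, 8, lens);CHKERRQ(ierr);
  return 0;
}

The -pc_bjacobi_blocks option used in the original question below is the option-database counterpart of this call.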
<p></p><blockquote type="cite">On Sep 23, 2010 1:51 PM, "Leo van Kampenhout" <<a href="mailto:lvankampenhout@gmail.com" target="_blank">lvankampenhout@gmail.com</a>> wrote:<br><br>Hi all,<br><br>With p number of processors in the communicator, the block preconditioner PC_BJACOBI will by default use p blocks. So far, so good. However, in order to compare this algorithmic efficiency decrease (since the bigger p, the less efficient the preconditioner), i ran the commands<br>
<br><span style="font-family:courier new,monospace">mpirun -n 1 ./program -pc_bjacobi_blocks 8 </span><br><span style="font-family:courier new,monospace">mpirun -n 8 ./program -pc_bjacobi_blocks 8 </span><br><br>I expected the preconditioning to be equally efficient in this case. However, GMRES makes more iterations in the first case (30 against 28) which I cannot explain. Are there more subtle differences about the preconditioner or the KSP that i'm overlooking here?<br>
>>>
>>> regards,
>>>
>>> Leo
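
For reference, a sketch of how the iteration counts in the comparison above can be reported from inside the program; it assumes the KSP already has its operator attached, and the function and vector names are illustrative only:

#include <petscksp.h>

/* Solve once and print the Krylov iteration count, so the -n 1 and -n 8 runs
   can be compared directly.  KSPSetFromOptions() picks up options such as
   -pc_bjacobi_blocks 8 from the command line. */
PetscErrorCode SolveAndReport(KSP ksp, Vec b, Vec x)
{
  PetscErrorCode ierr;
  PetscInt       its;

  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPGetIterationNumber(ksp, &its);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "Krylov iterations: %d\n", (int)its);CHKERRQ(ierr);
  return 0;
}

Running both commands with -ksp_view is also a quick way to check that the sub-solvers (ILU levels, orderings, tolerances) really are configured identically.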

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener