Thank you both. The solvers are the same; I double-checked that. It could be that the type of partitioning plays a role here, since I am indeed using a DA. However, why does a run on 2 processors take more iterations than a run on 8, for example? Both use the DA partitioning in that case. To specify subdomains manually, where do I start?

Leo

2010/9/23 Jed Brown <jed@59a2.org>:

The latter is using the partition provided by the DA (or the user), which looks to be better than the one computed in the serial run. If you have ParMETIS, then it will be used by PCBJACOBI; otherwise the partition is naive. You can specify subdomains manually if you want.

Jed
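
As a rough illustration (not from the original exchange), manual block specification could look like the sketch below; the matrix A, vectors b and x, and the block sizes are placeholders:

  /* Define 4 block-Jacobi subdomains by hand for a 100-row system;
     pass NULL instead of lens to get equally sized blocks. */
  KSP      ksp;
  PC       pc;
  PetscInt lens[4] = {25, 25, 25, 25};      /* illustrative block sizes */

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A, SAME_NONZERO_PATTERN);  /* PETSc 3.1-era call; newer
                                                         releases drop the last flag */
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCBJACOBI);
  PCBJacobiSetTotalBlocks(pc, 4, lens);     /* or -pc_bjacobi_blocks 4 at run time */
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);

The per-block solvers can then be adjusted with the usual -sub_ksp_type and -sub_pc_type options.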

On Sep 23, 2010 1:51 PM, "Leo van Kampenhout" <lvankampenhout@gmail.com> wrote:

Hi all,

With p processors in the communicator, the block preconditioner PCBJACOBI will by default use p blocks. So far, so good. However, in order to measure this decrease in algorithmic efficiency (the bigger p, the less effective the preconditioner), I ran the commands

  mpirun -n 1 ./program -pc_bjacobi_blocks 8
  mpirun -n 8 ./program -pc_bjacobi_blocks 8

I expected the preconditioning to be equally effective in both cases. However, GMRES takes more iterations in the first case (30 versus 28), which I cannot explain. Are there more subtle differences in the preconditioner or the KSP that I'm overlooking here?

regards,

Leo
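
One way to look for such subtle differences is to run both cases with -ksp_view, which prints the block layout and the sub-solver types. In code, the sub-solvers can also be pulled out and pinned explicitly; the sketch below assumes ksp and pc are the already configured outer solver and its block-Jacobi preconditioner:

  /* Inspect or override the per-block solvers created by PCBJACOBI. */
  KSP      *subksp;
  PC        subpc;
  PetscInt  nlocal, first, i;

  KSPSetUp(ksp);                             /* sub-KSPs exist only after setup */
  PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp);
  for (i = 0; i < nlocal; i++) {
    KSPGetPC(subksp[i], &subpc);
    KSPSetType(subksp[i], KSPPREONLY);       /* same local solver on every block */
    PCSetType(subpc, PCILU);
  }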