<p>The latter uses the partition provided by the DA (or by the user), which appears to be better than the one computed in the serial run. If you have Parmetis, it will be used by PCBJACOBI; otherwise the partition is naive. You can also specify the subdomains manually if you want.</p>
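<p>For reference, a minimal sketch of setting the blocks on the PC manually via PCBJacobiSetTotalBlocks(); it assumes a KSP named ksp has already been created and its operators set, and the block count below is only illustrative (passing PETSC_NULL for the block sizes lets PETSc split the rows evenly):</p>
<pre>
/* Sketch: request 8 block Jacobi blocks in total across the communicator,
   regardless of the number of processes. Assumes ksp already exists and
   KSPSetOperators() has been called. */
PetscErrorCode ierr;
PC             pc;

ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
ierr = PCSetType(pc,PCBJACOBI);CHKERRQ(ierr);
/* 8 blocks in total; PETSC_NULL for the lengths lets PETSc size them evenly.
   Pass an explicit PetscInt array instead to control each block's size. */
ierr = PCBJacobiSetTotalBlocks(pc,8,PETSC_NULL);CHKERRQ(ierr);
/* Command-line options such as -pc_bjacobi_blocks still apply if given. */
ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
</pre>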
<p>Jed</p>
<p><blockquote type="cite">On Sep 23, 2010 1:51 PM, "Leo van Kampenhout" <<a href="mailto:lvankampenhout@gmail.com">lvankampenhout@gmail.com</a>> wrote:<br><br>Hi all,<br><br>With p processors in the communicator, the block preconditioner PC_BJACOBI will by default use p blocks. So far, so good. However, to measure this decrease in algorithmic efficiency (the larger p, the less effective the preconditioner), I ran the commands<br>
<br><span style="font-family:courier new,monospace">mpirun -n 1 ./program -pc_bjacobi_blocks 8</span><br><span style="font-family:courier new,monospace">mpirun -n 8 ./program -pc_bjacobi_blocks 8</span><br><br>I expected the preconditioning to be equally efficient in both cases. However, GMRES takes more iterations in the first case (30 versus 28), which I cannot explain. Are there more subtle differences in the preconditioner or the KSP that I'm overlooking here?<br>
<br>regards,<br><br>Leo<br>
</blockquote></p>