[petsc-users] question about PC_BJACOBI

Leo van Kampenhout lvankampenhout at gmail.com
Thu Sep 23 09:41:04 CDT 2010


Thank you both. The solvers are the same, I double-checked that. It could be
that the type of partitioning plays a role here, since I'm indeed using a
DA. However, why does a run on 2 processors then take more iterations than
one on 8? Both use the DA partitioning in that case. And to specify
subdomains manually, where do I start?

Leo
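
As a starting point for specifying the blocks manually: PCBJacobiSetTotalBlocks()
lets you give an explicit block count and per-block sizes. Below is a minimal
sketch using the current PETSc API (PetscCall-style error handling; the 2010-era
signatures differ slightly). The function name solve_with_manual_blocks and the
block count/sizes are illustrative assumptions, not from the thread.

#include <petscksp.h>

/* Sketch: four Block Jacobi blocks with explicit sizes, instead of the
   default of one block per process. Assumes A, b, x are already assembled
   and that the block sizes sum to the global row count (and, in parallel,
   that no block straddles a process boundary). */
PetscErrorCode solve_with_manual_blocks(Mat A, Vec b, Vec x)
{
  KSP      ksp;
  PC       pc;
  PetscInt lens[4] = {250, 250, 250, 250}; /* rows per block; illustrative */

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCBJACOBI));
  PetscCall(PCBJacobiSetTotalBlocks(pc, 4, lens)); /* pass NULL for equal-sized blocks */
  PetscCall(KSPSetFromOptions(ksp)); /* command-line options such as -pc_bjacobi_blocks still apply */
  PetscCall(KSPSolve(ksp, b, x));
  PetscCall(KSPDestroy(&ksp));
  return PETSC_SUCCESS;
}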


2010/9/23 Jed Brown <jed at 59a2.org>

> The latter is using the partition provided by the DA (or user), which looks
> to be better than the one computed in the serial run. If you have ParMETIS,
> it will be used by PCBJACOBI; otherwise the partition is naive. You can
> specify subdomains manually if you want.
>
> Jed
>
> On Sep 23, 2010 1:51 PM, "Leo van Kampenhout" <lvankampenhout at gmail.com>
> wrote:
>
> Hi all,
>
> With p processors in the communicator, the block preconditioner PCBJACOBI
> will by default use p blocks. So far, so good. However, in order to measure
> the loss of algorithmic efficiency (the larger p, the less effective the
> preconditioner), I ran the commands
>
> mpirun -n 1 ./program -pc_bjacobi_blocks 8
> mpirun -n 8 ./program -pc_bjacobi_blocks 8
>
> I expected the preconditioning to be equally effective in both cases.
> However, GMRES takes more iterations in the first case (30 versus 28), which
> I cannot explain. Are there more subtle differences in the preconditioner or
> the KSP that I'm overlooking here?
>
> regards,
>
> Leo
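
One way to see why the runs differ is to inspect the blocks each run actually
gets: -ksp_view prints the decomposition, and PCBJacobiGetSubKSP() exposes the
local sub-solvers programmatically. Below is a minimal sketch of the latter,
using the current PETSc API (the 2010-era signatures differ slightly); the
helper name report_bjacobi_blocks is an illustrative assumption.

#include <petscksp.h>

/* Sketch: print the size of each local Block Jacobi block, to check whether
   two runs really use the same decomposition. Assumes the KSP's operators
   have already been set. */
PetscErrorCode report_bjacobi_blocks(KSP ksp)
{
  PC          pc;
  KSP        *subksp;
  PetscInt    nlocal, first, i;
  PetscMPIInt rank;

  PetscCall(KSPSetUp(ksp)); /* the sub-KSPs exist only after setup */
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  for (i = 0; i < nlocal; i++) {
    Mat      sub;
    PetscInt m, n;
    PetscCall(KSPGetOperators(subksp[i], &sub, NULL));
    PetscCall(MatGetSize(sub, &m, &n));
    PetscCall(PetscPrintf(PETSC_COMM_SELF, "[rank %d] block %d: %d x %d\n",
                          (int)rank, (int)(first + i), (int)m, (int)n));
  }
  return PETSC_SUCCESS;
}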
