[petsc-users] question about PC_BJACOBI

Matthew Knepley knepley at gmail.com
Thu Sep 23 06:55:59 CDT 2010


Exact iteration counts are sensitive to the details of the computation, so you
need to check

  a) that the solvers are exactly the same using -ksp_view

  b) the convergence history, which can differ in parallel because floating-point
arithmetic is not associative (parallel reductions sum contributions in a
different order); the runs below show how to print both
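
For example, something like the following prints the full solver configuration
and the residual history for each run (the executable and -pc_bjacobi_blocks are
taken from your commands; -ksp_view and -ksp_monitor_true_residual are standard
KSP options):

  mpirun -n 1 ./program -pc_bjacobi_blocks 8 -ksp_view -ksp_monitor_true_residual
  mpirun -n 8 ./program -pc_bjacobi_blocks 8 -ksp_view -ksp_monitor_true_residual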

   Matt

On Thu, Sep 23, 2010 at 7:51 AM, Leo van Kampenhout <
lvankampenhout at gmail.com> wrote:

> Hi all,
>
> With p processes in the communicator, the block preconditioner PC_BJACOBI
> will by default use p blocks. So far, so good. However, in order to measure
> the resulting loss of algorithmic efficiency (the larger p, the less effective
> the preconditioner), I ran the commands
>
> mpirun -n 1 ./program -pc_bjacobi_blocks 8
> mpirun -n 8 ./program -pc_bjacobi_blocks 8
>
> I expected the preconditioner to be equally effective in both cases. However,
> GMRES takes more iterations in the first case (30 versus 28), which I cannot
> explain. Are there more subtle differences in the preconditioner or the KSP
> that I'm overlooking here?
>
> regards,
>
> Leo
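
For reference, the same total block count can also be requested from the source
code rather than on the command line. A minimal sketch against a recent PETSc
(error checking omitted; the matrix, right-hand side, and solution vector are
assumed to be created and assembled elsewhere, so those calls appear only as
comments):

  #include <petscksp.h>

  int main(int argc, char **argv)
  {
    KSP ksp;
    PC  pc;

    PetscInitialize(&argc, &argv, NULL, NULL);

    KSPCreate(PETSC_COMM_WORLD, &ksp);
    /* KSPSetOperators(ksp, A, A);   operator assembled elsewhere */
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCBJACOBI);
    /* ask for 8 blocks in total, independent of the number of processes;
       the runtime equivalent is -pc_bjacobi_blocks 8 */
    PCBJacobiSetTotalBlocks(pc, 8, NULL);
    KSPSetFromOptions(ksp);        /* still honors -ksp_view, -ksp_monitor, ... */
    /* KSPSolve(ksp, b, x); */

    KSPDestroy(&ksp);
    PetscFinalize();
    return 0;
  }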


-- 
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener