[petsc-users] Performance of Fieldsplit PC

Patrick Sanan patrick.sanan at gmail.com
Tue Nov 7 07:54:26 CST 2017


From what you're describing, it sounds like the solver you're using is
GMRES (if you are using the default), preconditioned with fieldsplit with
nested CG/Jacobi solves. That is, your preconditioner involves an inner CG
solve on each field, so it is much "heavier". This seems consistent with
your observation of fewer (outer) Krylov iterations but much more work
being done per iteration.
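
If it helps to compare, a configuration like that would typically be
selected with runtime options roughly like the following (I'm assuming
here that your two splits end up named 0 and 1; the actual option
prefixes depend on how you defined the splits):

  -ksp_type gmres
  -pc_type fieldsplit
  -pc_fieldsplit_type additive        (or multiplicative)
  -fieldsplit_0_ksp_type cg -fieldsplit_0_pc_type jacobi
  -fieldsplit_1_ksp_type cg -fieldsplit_1_pc_type jacobi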

This should all be visible with -ksp_view.

Do you see what you expect if, instead of CG/Jacobi on each block, you use
Preonly/Jacobi on each block?
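
That is, something like (again assuming splits named 0 and 1):

  -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type jacobi
  -fieldsplit_1_ksp_type preonly -fieldsplit_1_pc_type jacobi

With additive fieldsplit, preonly/Jacobi on each block just applies the
inverse of the diagonal of that block, so the preconditioner should act
essentially like plain Jacobi on the full system (only the outer Krylov
method differs), and the cost per iteration should be much closer to
your scalar case.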

On Tue, Nov 7, 2017 at 2:43 PM, Bernardo Rocha <
bernardomartinsrocha at gmail.com> wrote:

> Hello everyone,
>
> I have a general question about the performance of PCFieldSplit
> that I'm not sure I understand properly.
>
> Consider a simple Poisson problem discretized by FEM into a system Ax=b
> which is then solved by CG and Jacobi.
>
> Then I create a "vectorial Poisson" problem by simply adding a second copy
> of this problem as another block, creating a block version of it.
> Something like
> [ [A, 0]
>   [0, A]]
> then I create a PCFieldSplit with CG and Jacobi for each block.
>
> With either additive or multiplicative fieldsplit, the PC is much better
> and solves the system in fewer iterations than the scalar case. However,
> the execution time of the PCFieldSplit run is much larger than that of
> simple Jacobi in the scalar case.
>
> (From -log_view I see that all of the time difference is in PCApply.)
>
> Why is this happening?
>
> Best regards,
> Bernardo
>
>
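
For concreteness, a setup like the one described above might be built
roughly as in the sketch below. This is only a minimal, serial
illustration with assumed names (A, b, x, and the per-field size n are
taken to exist already, with the first field in rows 0..n-1 and the
second in rows n..2n-1); it is not the original poster's code, and the
splits could equally be defined via a DM or a block size.

  /* Minimal sketch (serial, illustrative names): fieldsplit over a
     block system [[A0, 0], [0, A0]] assembled into a single Mat A.  */
  KSP ksp;
  PC  pc;
  IS  is0, is1;

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, A, A);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCFIELDSPLIT);

  /* Define the two splits by contiguous index sets (stride 1). */
  ISCreateStride(PETSC_COMM_WORLD, n, 0, 1, &is0);
  ISCreateStride(PETSC_COMM_WORLD, n, n, 1, &is1);
  PCFieldSplitSetIS(pc, "0", is0);
  PCFieldSplitSetIS(pc, "1", is1);

  /* Inner solvers are then chosen at runtime, e.g.
     -pc_fieldsplit_type additive
     -fieldsplit_0_ksp_type cg -fieldsplit_0_pc_type jacobi
     -fieldsplit_1_ksp_type cg -fieldsplit_1_pc_type jacobi          */
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);

  ISDestroy(&is0);
  ISDestroy(&is1);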