[petsc-users] Iterative solver behavior with increasing number of MPI processes
Balay, Satish
balay at mcs.anl.gov
Wed Apr 17 10:39:05 CDT 2019
Yes - in parallel the default preconditioner is block Jacobi (bjacobi), with
one block on each processor.
So a run on 1 proc and a run on, say, 8 procs use different preconditioners
(1 block for bjacobi vs 8 blocks for bjacobi) - hence the difference in
convergence.
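Here is a minimal sketch of where that default shows up (assuming a parallel
AIJ matrix A and vectors b, x are already assembled; the function name
solve_with_default_pc is just for illustration, and block Jacobi is set
explicitly only to make the default visible):

  #include <petscksp.h>

  /* Solve A x = b.  In parallel PETSc defaults to PCBJACOBI with one
     block per MPI rank, each block handled by ILU(0) for AIJ matrices,
     so the preconditioner - and hence the iteration count - changes
     with the number of ranks. */
  PetscErrorCode solve_with_default_pc(Mat A, Vec b, Vec x)
  {
    KSP            ksp;
    PC             pc;
    PetscInt       its;
    PetscErrorCode ierr;

    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    ierr = KSPSetOperators(ksp,A,A);CHKERRQ(ierr);

    /* Make the parallel default explicit; -pc_type on the command line
       still overrides this because KSPSetFromOptions() is called later. */
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    ierr = PCSetType(pc,PCBJACOBI);CHKERRQ(ierr);

    /* Pick up run-time options such as -ksp_type, -pc_type,
       -sub_pc_type, -ksp_monitor, -ksp_view. */
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);

    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
    ierr = KSPGetIterationNumber(ksp,&its);CHKERRQ(ierr);
    ierr = PetscPrintf(PETSC_COMM_WORLD,"KSP iterations: %D\n",its);CHKERRQ(ierr);
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
    return 0;
  }

Running with, e.g.,

  mpiexec -n 4 ./app -ksp_view -ksp_monitor -ksp_converged_reason

(./app standing in for your executable) shows exactly which preconditioner is
used at each process count. If you need iteration counts that are less
sensitive to the number of ranks, it may be worth trying a preconditioner
such as -pc_type gamg, which typically depends much less on how the matrix is
partitioned.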
Satish
On Wed, 17 Apr 2019, Marian Greg via petsc-users wrote:
> Hi All,
>
> I am seeing strange behavior of the KSP solvers with an increasing number of
> MPI processes: the solver takes more and more iterations as the number of
> processes grows. Is that normal? I was expecting to get the same number of
> iterations regardless of how many MPI processes I use.
>
> E.g.
> My matrix has about 2 million dofs
> Solving with np 1 takes about 3500 iterations, while solving with np 4 takes
> 6500 iterations for the same convergence criteria.
>
> Thanks
> Mari
>