[petsc-users] Iterative solver behavior with increasing number of mpi

Marian Greg marian.greg.007 at gmail.com
Wed Apr 17 10:57:10 CDT 2019


Thanks Satish for the reply. However, I also observed the same behavior
with the gamg and sor preconditioners, and with ksp_type bcgs as well as
gmres. Could you tell me which solver and preconditioner combinations
would behave the same regardless of the number of MPI processes I use?

Thanks, Mari
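
(For reference, one choice whose action does not depend on the parallel
partitioning is point Jacobi, since it uses only the matrix diagonal:
-ksp_type gmres -pc_type jacobi on the command line, or set in code as in
the minimal sketch below. The sketch assumes an already-assembled Mat A and
Vecs b, x; the function name SolveRankIndependent is just illustrative.)

/* Minimal sketch, assuming an assembled Mat A and Vecs b, x exist.
 * Point Jacobi uses only the matrix diagonal, so the preconditioner
 * (and hence the iteration count) does not change with the number of
 * MPI ranks, unlike the default block-Jacobi/ILU. */
#include <petscksp.h>

PetscErrorCode SolveRankIndependent(Mat A, Vec b, Vec x)
{
  KSP            ksp;
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCJACOBI);CHKERRQ(ierr);   /* same on any number of ranks */
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);    /* still overridable at run time */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  return 0;
}

(Note that point Jacobi is typically a much weaker preconditioner than
block Jacobi/ILU, so iteration counts will usually be higher - just
essentially identical across process counts.)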


On Wednesday, April 17, 2019, Balay, Satish <balay at mcs.anl.gov> wrote:

> Yes - the default preconditioner is block Jacobi - with one block on
> each processor.
>
> So when run on 1 proc vs. 8 procs the preconditioner is different
> (1 block for bjacobi vs. 8 blocks for bjacobi) - hence the difference
> in convergence.
>
> Satish
>
> On Wed, 17 Apr 2019, Marian Greg via petsc-users wrote:
>
> > Hi All,
> >
> > I am facing strange behavior of the KSP solvers with an increasing
> > number of MPI processes. The solver takes more and more iterations as
> > the number of processes grows. Is that normal? I was expecting to get
> > the same number of iterations with whatever number of processes I use.
> >
> > E.g., my matrix has about 2 million dofs. Solving with np 1 takes
> > about 3500 iterations, while solving with np 4 takes 6500 iterations
> > for the same convergence criteria.
> >
> > Thanks
> > Mari
> >
>
>
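
(To see Satish's point directly, running with -ksp_view reports the
block-Jacobi setup, including the per-rank sub-solvers. Below is a sketch
of querying it programmatically; it assumes a KSP that already has its
operators set and uses the default PCBJACOBI preconditioner, and the
function name ReportBJacobiBlocks is just illustrative.)

/* Sketch only: prints how many local block-Jacobi blocks each rank owns.
 * By default there is exactly one block per rank, which is why the
 * preconditioner (and the iteration count) changes with the number of
 * MPI processes. */
#include <petscksp.h>

PetscErrorCode ReportBJacobiBlocks(KSP ksp)
{
  PC             pc;
  KSP           *subksp;
  PetscInt       nlocal, first;
  PetscMPIInt    rank;
  PetscErrorCode ierr;

  ierr = KSPSetUp(ksp);CHKERRQ(ierr);   /* sub-KSPs exist only after setup */
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCBJacobiGetSubKSP(pc, &nlocal, &first, &subksp);CHKERRQ(ierr);
  ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
  ierr = PetscSynchronizedPrintf(PETSC_COMM_WORLD,
           "[rank %d] owns %d local block(s); first global block index %d\n",
           rank, (int)nlocal, (int)first);CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT);CHKERRQ(ierr);
  return 0;
}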

