<div dir="ltr"><div dir="ltr">On Wed, Apr 17, 2019 at 11:59 AM Marian Greg via petsc-users <<a href="mailto:petsc-users@mcs.anl.gov">petsc-users@mcs.anl.gov</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Thanks Satish for the reply. However, I also observed the same behavior with the gamg and sor preconditioners, and with ksp_type bcgs as well as gmres. Could you tell me which solvers and preconditioners would behave the same on whatever number of MPI processes I use?</blockquote><div><br></div><div>1) SOR in parallel will also be Block Jacobi-SOR</div><div><br></div><div>2) Jacobi will be invariant</div><div><br></div><div>3) Chebyshev will be invariant</div><div><br></div><div>4) GAMG will be invariant if you have an elliptic equation. So for instance you can use GAMG on SNES ex5 or ex12 and the iteration count will not increase</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>Thanks, Mari<br><br><br>On Wednesday, April 17, 2019, Balay, Satish <<a href="mailto:balay@mcs.anl.gov" target="_blank">balay@mcs.anl.gov</a>> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Yes - the default preconditioner is block-jacobi - with one block on<br>
each processor.<br>
<br>
So when run on 1 proc vs 8 procs, the preconditioner is different<br>
(1 block for bjacobi vs 8 blocks for bjacobi), hence the difference<br>
in convergence.<br>
<br>
Satish<br>
<br>
On Wed, 17 Apr 2019, Marian Greg via petsc-users wrote:<br>
<br>
> Hi All,<br>
> <br>
> I am facing strange behavior of the KSP solvers with an increasing number<br>
> of MPI processes. The solver takes more and more iterations as the number<br>
> of processes increases. Is that normal? I was expecting the same number<br>
> of iterations no matter how many MPI processes I use.<br>
> <br>
> E.g.<br>
> My matrix has about 2 million dofs.<br>
> Solving with np 1 takes about 3500 iterations, while solving with np 4<br>
> takes 6500 iterations for the same convergence criterion.<br>
> <br>
> Thanks<br>
> Mari<br>
> <br>
<br>
</blockquote></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
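<div><br></div><div>[Editorial note] The effect Satish describes — block Jacobi with one block per rank giving worse convergence as the block count grows — can be reproduced in serial with a small sketch. This is not PETSc code; it is a hypothetical scipy illustration of the same preconditioner structure, using a 1-D Laplacian as a stand-in for the elliptic systems mentioned above.</div>

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 1-D Laplacian: a small SPD stand-in for the 2-million-dof system in the thread.
n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

def cg_iters_block_jacobi(nblocks):
    """CG iteration count with a block-Jacobi preconditioner of `nblocks`
    diagonal blocks, mimicking PETSc's default PC with one block per MPI rank."""
    size = n // nblocks
    # Invert each diagonal block independently (what each "rank" would own).
    inv_blocks = [
        np.linalg.inv(A[i * size:(i + 1) * size, i * size:(i + 1) * size].toarray())
        for i in range(nblocks)
    ]

    def apply_pc(x):
        # Apply the block-diagonal inverse: no coupling across block boundaries.
        y = np.empty_like(x)
        for i, Binv in enumerate(inv_blocks):
            y[i * size:(i + 1) * size] = Binv @ x[i * size:(i + 1) * size]
        return y

    M = spla.LinearOperator((n, n), matvec=apply_pc)
    iters = 0

    def count(xk):
        nonlocal iters
        iters += 1

    _, info = spla.cg(A, b, M=M, callback=count)
    assert info == 0  # converged
    return iters

# More blocks (ranks) -> weaker preconditioner -> more iterations,
# which is exactly the np 1 vs np 4 behavior reported in the question.
for nb in (1, 4, 8):
    print(nb, "block(s):", cg_iters_block_jacobi(nb), "iterations")
```

<div>With one block, the preconditioner is the exact inverse and CG converges immediately; as the number of blocks grows, the coupling dropped at block boundaries costs iterations. Jacobi, by contrast, is diagonal regardless of the partition, which is why it is invariant in Matt's list.</div>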