Billy,

   By default GMRES and most of the other KSP solvers stop after a reduction in the 2-norm of the PRECONDITIONED residual by a factor of 10^-5. See the manual page for KSPDefaultConverged(): http://www-unix.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/docs/manualpages/KSP/KSPDefaultConverged.html

   There are a couple of things to consider:

1) Even with the exact same preconditioner (for example Jacobi) the convergence history will be slightly different, since the computations are done in a different order and so the floating-point results will be slightly different. The converged SOLUTIONS for different numbers of processes are ALL correct, even though they have different values, since the calculations are done in floating point. As you decrease the tolerances you will see the SOLUTIONS for different numbers of processes all converge to the same answer (i.e., the solutions will share more and more significant digits).

2) Most parallel preconditioners (even in exact arithmetic) are different for different numbers of processes, for example block Jacobi and the additive Schwarz method. So you get all the issues of 1) plus the fact that the convergence histories with different numbers of processes will be different. Again, IF the solver is converging, then the answers from any number of processes are equally correct, and as you decrease the convergence tolerances you will see more and more common significant digits in the different solutions. Sometimes with a larger number of processes the preconditioner may stop working, you do not get convergence of GMRES, and then, of course, the "answer" is garbage. You should always call KSPGetConvergedReason() to make sure the solver has converged (a short sketch of doing this is appended after your quoted message below).

   Barry


On Dec 29, 2007, at 5:56 PM, Billy Araújo wrote:

> Hi,
>
> I need to know more about the PETSc parallel GMRES solver. Does the solver maintain the same accuracy independently of the number of processors? For example, if I subdivide a mesh with 1000 unknowns across 10, 100, or 1000 processors, should I always expect to get the same result? If not, why not? Are there any studies on this?
>
> Thank you,
>
> Billy.
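
P.S. Here is a minimal sketch of the two suggestions above (tightening the relative tolerance and checking KSPGetConvergedReason() after the solve). It assumes a KSP object "ksp" and vectors "b" and "x" that have already been created and set up elsewhere; the 1.e-10 tolerance is only an example value, not a recommendation.

    PetscErrorCode     ierr;
    KSPConvergedReason reason;
    PetscInt           its;

    /* Tighten the relative decrease in the preconditioned residual norm
       from the default 1.e-5 to 1.e-10; leave the other tolerances at their defaults. */
    ierr = KSPSetTolerances(ksp,1.e-10,PETSC_DEFAULT,PETSC_DEFAULT,PETSC_DEFAULT);CHKERRQ(ierr);

    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);

    /* Always check whether the solver actually converged; a negative
       reason means it diverged or stagnated and x should not be trusted. */
    ierr = KSPGetConvergedReason(ksp,&reason);CHKERRQ(ierr);
    ierr = KSPGetIterationNumber(ksp,&its);CHKERRQ(ierr);
    if (reason < 0) {
      ierr = PetscPrintf(PETSC_COMM_WORLD,"KSP diverged, reason %d\n",(int)reason);CHKERRQ(ierr);
    } else {
      ierr = PetscPrintf(PETSC_COMM_WORLD,"KSP converged in %d iterations\n",(int)its);CHKERRQ(ierr);
    }

The same tolerance can also be set at run time with the command-line option -ksp_rtol <tol>, and -ksp_monitor prints the residual history so you can compare runs with different numbers of processes.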