On Thu, Oct 8, 2009 at 10:14 AM, Matt Funk <mafunk@nmsu.edu> wrote:
> Hi Matt,
>
> I don't understand how it can be the condition of the system.
> After all, the matrix as well as the RHS vector is EXACTLY the same
> between the 2 runs. This is what puzzles me so much.
The order of operations is different in the serial and parallel cases. With a
very ill-conditioned matrix, which it sounds like you have, the reordering
produces noticeably different residuals, even though they both satisfy the
error bound.

  Matt
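To see how much reordering alone can move things, here is a minimal sketch.
It is plain C, not PETSc or your code, and the values are made up just to
expose the rounding; the point is that the 1-process and 2-process summation
orders of the same four numbers already disagree in the last place:

#include <stdio.h>

/* Summing identical data in a serial order and in a two-rank
   reduction order gives different floating-point results, because
   floating-point addition does not associate. */
int main(void)
{
  double v[4] = {1.0, 1.0e-16, 1.0e-16, 1.0e-16};

  /* 1 process: accumulate left to right. */
  double serial = ((v[0] + v[1]) + v[2]) + v[3];

  /* 2 processes: each rank sums its half, then the partial sums
     are combined, as an MPI reduction would do. */
  double parallel = (v[0] + v[1]) + (v[2] + v[3]);

  printf("serial   = %.17g\n", serial);   /* 1 */
  printf("parallel = %.17g\n", parallel); /* 1.0000000000000002 */
  return 0;
}

Those last-place differences get amplified by an ill-conditioned solve.
Back-of-the-envelope, and assuming the 1e-12 tolerance and the 1e-05
difference you quote below are both relative quantities, the bound

  ||dx|| / ||x||  <~  cond(A) * ||r|| / ||b||

would put cond(A) somewhere near 1e+07, which is ill-conditioned enough
to show exactly this behavior.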
> The only difference is whether I solve it using 1 core vs 4 cores.
>
> Of course I could be missing something.
>
> matt
>
> On Wednesday 07 October 2009, Matthew Knepley wrote:
> > This sounds like it has to do with the condition of your system, not any
> > parallel problem. Errors in the solution can only be reduced to about
> > (condition number) * (residual norm).
> >
> > Matt
> >
> > On Wed, Oct 7, 2009 at 5:26 PM, Matt Funk <mafunk@nmsu.edu> wrote:
> > > Hi,
> > >
> > > I have a problem for which I am not exactly sure what to do.
> > > I set up a simple 2D rectangular domain and decompose it into four
> > > equal boxes. I then build the PETSc matrix based on this layout, as
> > > well as the corresponding RHS vector.
> > >
> > > I print out the matrix and RHS vector right before my KSPSolve call,
> > > and right after that call I print out the solution vector 'x'.
> > >
> > > I do this for 2 runs:
> > > 1) 1 processor
> > > 2) 4 processors
> > >
> > > For both runs I do a difference (i.e. running diff on the output
> > > files) on all 3 quantities (the matrix, the RHS vector and the
> > > solution vector).
> > >
> > > The 'diff' command reports no difference between the files for the
> > > matrix and RHS vector.
> > >
> > > However, the solution vector is different between the 2 runs. How
> > > different depends a little on which preconditioner/solver combination
> > > I use, and on the tolerances.
> > >
> > > For example, for BJacobi/GMRES with reltol=abstol=1e-12, the vector
> > > element with the maximum difference is on the order of 1e-05, and
> > > this is only after the first timestep. My problem has some
> > > nonlinearity to it, so this will become a problem later on.
> > >
> > > The worst difference I have seen is with hypre's Euclid: it was on
> > > the order of 1e-02.
> > >
> > > So my question is whether someone has an idea why this is happening
> > > (I suppose it is related to the parallel communication) and whether
> > > there is a way to fix it.
> > >
> > > thanks
> > > matt
<p style="margin: 0px; text-indent: 0px;"><br></p><p style="margin: 0px; text-indent: 0px;"><br></p></div></div></div></blockquote></div><br><br clear="all"><br>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener<br>