Please find the attached plots, where I have compared the numerical results from a 2-processor PETSc simulation (2.pdf) with the analytical result; they agree fine. For the 6-processor simulation (6.pdf), however, the numerical result is complete garbage.

My main confusion is this: why does the default GMRES *work* for 2 processors and not for 6? The GMRES algorithm is definitely not the problem here, because otherwise it would not give me the correct result when I run my simulation on 2 processors. Please note that I do *not* change anything in my code other than the number of processors, set at run time with this command:

mpiexec -n 2 test3

Also, I make sure that I write the output to a separate file.
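One thing I can add: since KSPSolve does not abort by default when the iteration fails to converge, I could be getting garbage on 6 processors without any error message. Here is a minimal sketch of a check I could add after the solve (ksp, x, and b are placeholders for the names in test3):

  /* Sketch: verify the solve actually converged. If reason is negative,
     GMRES diverged or stalled and x is meaningless, but PETSc prints no
     error by default. */
  PetscErrorCode     ierr;
  KSPConvergedReason reason;
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
  ierr = KSPGetConvergedReason(ksp, &reason);CHKERRQ(ierr);
  if (reason < 0) {
    ierr = PetscPrintf(PETSC_COMM_WORLD,
                       "Solve diverged, reason = %d\n", (int)reason);CHKERRQ(ierr);
  }

The same information is also available at run time with the -ksp_converged_reason option.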
It is very hard for me to believe when you say:

"(a) the problem you are building is not actually the same or the results are being misinterpreted or"

It is the same. As I said above, I am only changing the number of processors at run time.
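If it helps to demonstrate that, I could dump the assembled matrix from each run to a binary file and compare the two directly (a sketch, assuming the matrix is named C as in Cx=b; the filename is arbitrary):

  /* Sketch: write C to a binary file. Running once with -n 2 and once with
     -n 6 (using different filenames) lets me compare the two matrices
     outside PETSc. */
  PetscViewer viewer;
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD, "Cmat.bin",
                               FILE_MODE_WRITE, &viewer);CHKERRQ(ierr);
  ierr = MatView(C, viewer);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&viewer);CHKERRQ(ierr);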
"(b) the default methods are not working well and need some algorithmic adjustments for your problem."

If this were the issue, then why does it work for two processors?

The problem I am simulating is the vector wave equation in two dimensions (in other words, Maxwell's equations). The coefficient matrix is assembled using the finite-difference frequency-domain method on a Yee grid.
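If it is useful, I can also report the true residual, computed explicitly rather than taken from GMRES (a sketch, with C, x, and b as in my system; r is a work vector created here):

  /* Sketch: compute ||b - C x|| / ||b|| directly. If this is small on 6
     processors, the solve is fine and the problem is elsewhere; if it is
     large, the iteration did not converge. */
  Vec       r;
  PetscReal rnorm, bnorm;
  ierr = VecDuplicate(b, &r);CHKERRQ(ierr);
  ierr = MatMult(C, x, r);CHKERRQ(ierr);     /* r = C x     */
  ierr = VecAYPX(r, -1.0, b);CHKERRQ(ierr);  /* r = b - C x */
  ierr = VecNorm(r, NORM_2, &rnorm);CHKERRQ(ierr);
  ierr = VecNorm(b, NORM_2, &bnorm);CHKERRQ(ierr);
  ierr = PetscPrintf(PETSC_COMM_WORLD, "true relative residual = %g\n",
                     (double)(rnorm/bnorm));CHKERRQ(ierr);
  ierr = VecDestroy(&r);CHKERRQ(ierr);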
<div dir="ltr">After running my simulation multiple times on a multiprocessor computer I've just verified that using iterative solver (default gmres) in PETSc to solve a linear system of equations ( Cx=b) with more than 2 processors setting ALWAYS lead to erroneous result. Running identical code with identical setting except for the number of processors ( set this to 2) ALWAYS gives me correct result .<br>
You have to explain what "erroneous result" means here.
<div dir="ltr"><br>I am really not sure what is the point behind including iterative solvers if they result into erroneous result on a multiprocessor computer. The result I get from multiprocessor computer is a complete garbage, so I am really not talking about small percentage of error here. Also, if somebody could enlighten why the iterative solvers are error prone on multiprocessors that will be highly appreciated. </div>
Well, let's not jump to conclusions. Iterative solvers can fail, as can direct solvers, but it is more common that (a) the problem you are building is not actually the same or the results are being misinterpreted, or (b) the default methods are not working well and need some algorithmic adjustments for your problem.
Please explain what kind of problem you are solving, how you are going about it, and what symptoms you have observed.