<html>
<head>
<style><!--
.hmmessage P
{
margin:0px;
padding:0px
}
body.hmmessage
{
font-size: 10pt;
font-family:Tahoma
}
--></style>
</head>
<body class='hmmessage'><div dir='ltr'>
It works for 2 and 3 processors, and I have just found out that it does not work for 1 processor either :(. It also does not work for any number of processors greater than 6. Please see my previous email for further details; I have attached my source code and everything there.<br><br><div>
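One check worth making before blaming the solver itself: GMRES can stop without converging, and the solver will still hand back the (meaningless) last iterate unless the convergence status is queried explicitly. Below is a minimal sketch of that check, using SciPy's GMRES as a stand-in for PETSc's KSP purely so the example is self-contained; the matrix and tolerances are made up for illustration (in PETSc the analogous calls are KSPGetConvergedReason and KSPGetResidualNorm):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Small, well-conditioned stand-in for the C x = b system.
n = 100
C = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = gmres(C, b, atol=1e-12)

# info == 0 means GMRES converged; any nonzero value means the
# returned x is NOT a solution and should be discarded.
assert info == 0, "GMRES did not converge"

# Verify with the true relative residual, independent of the solver's
# own bookkeeping -- this is the check that catches "garbage" results.
rel_residual = np.linalg.norm(C @ x - b) / np.linalg.norm(b)
assert rel_residual < 1e-4
```

On the PETSc side the same information is available at runtime via the command-line options -ksp_converged_reason and -ksp_monitor_true_residual, with no code changes needed.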
Thanks for your help.<br>-Amrit<br><br></div><br><br><div>> Date: Thu, 8 Sep 2011 15:06:24 -0500<br>> From: mccomic@mcs.anl.gov<br>> To: petsc-users@mcs.anl.gov<br>> Subject: Re: [petsc-users] iterative solver in PETSc on a multiprocessor computer<br>> <br>> When you say that the runs are "identical settings except for the number of processors", does that include running with 3 processors? If the code works on 1 and 3 but not 2, that would be very weird.<br>> <br>> -Mike<br>> <br>> ----- Original Message -----<br>> From: "amrit poudel" <amrit_pou@hotmail.com><br>> To: petsc-users@mcs.anl.gov<br>> Sent: Thursday, September 8, 2011 12:59:29 PM<br>> Subject: [petsc-users] iterative solver in PETSc on a multiprocessor computer<br>> <br>> After running my simulation multiple times on a multiprocessor computer, I have verified that using an iterative solver (default GMRES) in PETSc to solve a linear system of equations (Cx=b) with more than 2 processors ALWAYS leads to erroneous results. Running identical code with identical settings, except for the number of processors (set to 2), ALWAYS gives me the correct result.<br>> <br>> I am really not sure what the point is of including iterative solvers if they produce erroneous results on a multiprocessor computer. The result I get from the multiprocessor run is complete garbage, so I am really not talking about a small percentage of error here. Also, if somebody could explain why the iterative solvers are error prone on multiple processors, that would be highly appreciated.<br>> <br>> I am very hopeful that there is a way around this problem, because PETSc is such a powerful and useful library that I really do not want to give up on it and start something else from scratch.<br>> <br>> Would you think that a DIRECT SOLVER would circumvent this problem? My problem is that I have a very large system of equations, and the size of the sparse coefficient matrix is huge (> 1e+8).
I assemble this matrix in MATLAB, write it to a binary file, and read it in PETSc. So I really need to be able to solve this system of equations on a cluster of computers (which inherently has multiple processors and a distributed-memory setting). Does this mean I am completely out of luck with PETSc's iterative solver package, and that my only hope is a direct solver? I do have MUMPS downloaded and compiled with PETSc, so I will give that a try and see what results I obtain, but I am really surprised that iterative solvers are no good in large multiprocessor settings.<br>> <br>> Any insights or suggestions/advice would be highly appreciated.<br>> <br>> Thanks.<br>> <br>> PS: I can attach my entire code, along with plots that compare the results obtained by solving Cx=b on 2 processors vs. 6 or 12 processors, if anybody wants to take a look. I get garbage if I run the iterative solver on 12 processors.<br></div></div></body>
</html>