Thanks for the hint. However, my system is pretty large (but sparse), and using the -pc_type lu preconditioner throws a memory-related error, and according to it the memory requirement is astronomical :(

[1]PETSC ERROR: Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory 2022169721 megabytes

That memory requirement just does not seem right, and I think it is lu that is creating the problem here, though I could be wrong.

If I use -pc_type none and then -pc_factor_mat_solver_package mumps, will it use the direct solver (mumps) or some default iterative solver? Sorry for my elementary question.

I was under the impression that by setting -pc_type to some preconditioner I was forcing it to use an iterative solver instead of mumps. Now that you have mentioned that I can set -pc_type to any of the available preconditioners and still use the specified direct solver, that has cleared up my confusion.

Thanks for your help. (For reference, a minimal options-driven solve is sketched after the quoted thread below.)

> Date: Fri, 9 Sep 2011 22:13:46 -0500
> From: hzhang@mcs.anl.gov
> To: petsc-users@mcs.anl.gov
> Subject: Re: [petsc-users] Fwd: nonzero prescribed boundary condition
>
> amrit:
> >
> > If I run a program with the following runtime option (notice that I have not
> > included -pc_type lu), will the program use mumps to solve the linear
> > system of equations directly, or does it run with the default gmres iterative
> > solver? Why do we have to specify a preconditioner for a direct solver?
> >
> > -pc_factor_mat_solver_package mumps
>
> You must include '-pc_type lu' to use mumps.
> There are many preconditioners for solving linear systems; lu is one of them,
> and mumps is one of the packages that implement lu.
>
> Hong
>
> >
> >
> >> From: bsmith@mcs.anl.gov
> >> Date: Thu, 8 Sep 2011 14:02:09 -0500
> >> To: petsc-users@mcs.anl.gov
> >> Subject: Re: [petsc-users] Fwd: nonzero prescribed boundary condition
> >>
> >>
> >> Are you running with -ksp_converged_reason and -ksp_monitor_true_residual
> >> to see if the iterative method is actually converging, and how rapidly? Also,
> >> if you impose a tight tolerance on the iterative solver, say with -ksp_rtol
> >> 1.e-12, how much do the "solutions" differ? The difference should get smaller
> >> the smaller you make this tolerance (in double precision you cannot expect to
> >> use a tolerance smaller than about 1.e-13 or 1.e-14).
> >>
> >> If your "big" matrix is very complicated and does not come from a well-understood
> >> simulation, it may be that it is very ill-conditioned. In that case you
> >> either need to understand the underlying equations really well to develop an
> >> appropriate preconditioner, or just use parallel direct solvers (which get
> >> slow for very large problems but can handle more ill-conditioning). To use
> >> the MUMPS parallel direct solver you can configure PETSc with
> >> --download-mumps --download-scalapack --download-blacs and run the program
> >> with -pc_type lu -pc_factor_mat_solver_package mumps
> >>
> >> Barry
> >>
> >>
> >>
> >> On Sep 8, 2011, at 12:57 PM, amrit poudel wrote:
> >>
> >> > After running my simulation multiple times on a multiprocessor computer,
> >> > I have just verified that using an iterative solver (the default gmres) in
> >> > PETSc to solve a linear system of equations (Cx=b) with more than 2 processors
> >> > ALWAYS leads to erroneous results. Running identical code with
> >> > identical settings, except for the number of processors (set to 2),
> >> > ALWAYS gives me the correct result.
> >> >
> >> > I am really not sure what the point is of including iterative
> >> > solvers if they produce erroneous results on a multiprocessor computer.
> >> > The result I get from the multiprocessor run is complete garbage, so I am
> >> > really not talking about a small percentage of error here. Also, if somebody
> >> > could explain why the iterative solvers are error prone on multiprocessors,
> >> > that would be highly appreciated.
> >> >
> >> > I am very hopeful that there is a way around this problem, because
> >> > PETSc is such a powerful and useful library that I really do not want to
> >> > give up on it and start something else from scratch.
> >> >
> >> > Would you think that a DIRECT SOLVER would circumvent this problem? My
> >> > problem is that I have a very large system of equations and the size of the
> >> > sparse coefficient matrix is huge (> 1e+8). I assemble this matrix in
> >> > MATLAB, write it to a binary file, and read it in PETSc. So I really need to
> >> > be able to solve this system of equations on a cluster of computers (which
> >> > inherently has a multiprocessor, distributed-memory setting). Does this
> >> > mean I am completely out of luck with PETSc's iterative solver package, and
> >> > the only hope for me is the direct solver? I do have MUMPS downloaded and
> >> > compiled with PETSc, so I will give that a try and see what results I
> >> > obtain, but I am really surprised that iterative solvers are no good in a
> >> > large multiprocessor setting.
> >> >
> >> > Any insights, suggestions, or advice will be highly appreciated.
> >> >
> >> > Thanks.
> >> >
> >> > PS: I can attach my entire code and plots that compare the results obtained
> >> > by solving Cx=b on 2 processors vs. 12 or 6 processors if anybody wants to
> >> > take a look. I get garbage if I run the iterative solver on 12 processors.
> >>
> >
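
---

Appended sketch (for reference): a minimal, self-contained program of the kind discussed above, which loads C and b from a PETSc binary file and leaves the choice of solver entirely to the command line, so the same executable can run the default iterative solver or the MUMPS direct solver. It is written against the PETSc API of roughly that era (circa petsc-3.2); the file name "system.dat", the executable name, and the error-checking style are assumptions for illustration, not anything prescribed in the thread.

static char help[] = "Loads C and b from a PETSc binary file and solves C x = b.\n";

#include <petscksp.h>

int main(int argc,char **argv)
{
  Mat            C;        /* sparse coefficient matrix          */
  Vec            b,x;      /* right-hand side and solution       */
  KSP            ksp;      /* linear solver context              */
  PetscViewer    fd;       /* binary viewer for the input file   */
  PetscErrorCode ierr;

  PetscInitialize(&argc,&argv,(char*)0,help);

  /* "system.dat" is an assumed file name: a PETSc binary file containing the
     matrix followed by the right-hand side (for example written from MATLAB
     with the PetscBinaryWrite.m utility shipped with PETSc). */
  ierr = PetscViewerBinaryOpen(PETSC_COMM_WORLD,"system.dat",FILE_MODE_READ,&fd);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD,&C);CHKERRQ(ierr);  /* type left unset; MatLoad() uses the default (AIJ) */
  ierr = MatLoad(C,fd);CHKERRQ(ierr);
  ierr = VecCreate(PETSC_COMM_WORLD,&b);CHKERRQ(ierr);  /* sizes and type are taken from the file */
  ierr = VecLoad(b,fd);CHKERRQ(ierr);
  ierr = PetscViewerDestroy(&fd);CHKERRQ(ierr);
  ierr = VecDuplicate(b,&x);CHKERRQ(ierr);

  /* No solver is hard-wired here; KSPSetFromOptions() lets -ksp_type, -pc_type,
     -pc_factor_mat_solver_package, -ksp_rtol, -ksp_monitor_true_residual, etc.
     select and monitor the solver at run time. */
  ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp,C,C,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);
  ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&C);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}

With something along these lines, both approaches discussed above use the same executable (./solve is just a placeholder name), e.g.

  mpiexec -n 12 ./solve -ksp_converged_reason -ksp_monitor_true_residual -ksp_rtol 1.e-12
  mpiexec -n 12 ./solve -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps

The first run uses the default iterative solver (GMRES with block Jacobi/ILU in parallel); the second requests the MUMPS factorization, with -ksp_type preonly being the usual companion of -pc_type lu for a pure direct solve. (In newer PETSc releases the last option is spelled -pc_factor_mat_solver_type.)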