[petsc-users] PETSc direct solver

amrit poudel amrit_pou at hotmail.com
Fri Sep 9 22:32:14 CDT 2011


Thanks for the hint. However, my system is pretty large (but sparse), and using the -pc_type lu preconditioner throws a memory-related error; according to it, the memory requirement is astronomical :(

[1]PETSC ERROR: Error reported by MUMPS in numerical factorization phase: Cannot allocate required memory 2022169721 megabytes

That memory requirement just does not seem right, and I think it is lu that is creating the problem here, though I could be wrong.

If I use -pc_type none and then -pc_factor_mat_solver_package mumps, will it use the direct solver (mumps) or some default iterative solver? Sorry for my elementary question.

I was under the impression that by setting -pc_type to some preconditioner I was forcing it to use an iterative solver instead of mumps. Now that you have mentioned that I need to set -pc_type lu and that it will still use the specified direct solver package, that clears up my confusion.
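For my own notes, here is a minimal sketch of how these runtime options get picked up (this is not my actual code: the file name "system.bin" and the omitted error checking are just for illustration, and exact call signatures vary a bit between PETSc versions). The point is that KSPSetFromOptions() reads -ksp_type, -pc_type and -pc_factor_mat_solver_package, so the same program can run GMRES or MUMPS depending only on the command line, e.g. -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps.

  #include <petscksp.h>

  /* Minimal sketch (not the thread's actual code): load A and b from a PETSc
   * binary file and solve A x = b with whatever KSP/PC the options select. */
  int main(int argc, char **argv)
  {
    Mat         A;
    Vec         b, x;
    KSP         ksp;
    PetscViewer viewer;

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* "system.bin" is a made-up name for a matrix/vector written in PETSc
     * binary format (e.g. exported from MATLAB). */
    PetscViewerBinaryOpen(PETSC_COMM_WORLD, "system.bin", FILE_MODE_READ, &viewer);
    MatCreate(PETSC_COMM_WORLD, &A);
    MatLoad(A, viewer);
    VecCreate(PETSC_COMM_WORLD, &b);
    VecLoad(b, viewer);
    PetscViewerDestroy(&viewer);

    VecDuplicate(b, &x);

    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);   /* recent PETSc; older versions take an extra MatStructure argument */
    KSPSetFromOptions(ksp);       /* picks up -ksp_type, -pc_type, -pc_factor_mat_solver_package, ... */
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp);
    VecDestroy(&x);
    VecDestroy(&b);
    MatDestroy(&A);
    PetscFinalize();
    return 0;
  }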


Thanks for your help.


> Date: Fri, 9 Sep 2011 22:13:46 -0500
> From: hzhang at mcs.anl.gov
> To: petsc-users at mcs.anl.gov
> Subject: Re: [petsc-users] Fwd: nonzero prescribed boundary condition
> 
> amrit :
> >
> > If I run a program with the following runtime option (notice that I have not
> > included -pc_type lu ), will the program use mumps to solve the linear
> > system of equations directly, or does it  run with default gmres iterative
> > solver? Why do we have to specify a preconditioner for a direct solver ?
> >
> >  -pc_factor_mat_solver_package mumps
> 
> You must include '-pc_type lu' to use mumps.
> There are many preconditioners for solving a linear system; lu is one of the
> pcs, and mumps is one of the packages that implement lu.
> 
> Hong
> 
> >
> >
> >> From: bsmith at mcs.anl.gov
> >> Date: Thu, 8 Sep 2011 14:02:09 -0500
> >> To: petsc-users at mcs.anl.gov
> >> Subject: Re: [petsc-users] Fwd: nonzero prescribed boundary condition
> >>
> >>
> >> Are you running with -ksp_converged_reason and -ksp_monitor_true_residual
> >> to see if the iterative method is actually converging, and how rapidly? Also,
> >> if you impose a tight tolerance on the iterative solver, say with -ksp_rtol
> >> 1.e-12, how much do the "solutions" differ? The difference should get smaller
> >> the smaller you make this tolerance (in double precision you cannot expect to
> >> use a tolerance less than about 1.e-13 or 1.e-14).
> >>
> >> If your "big" matrix is very complicated and comes from no well understood
> >> simulation it may be that it is very ill-conditioned, in that case you
> >> either need to understand the underlying equations real well to develop an
> >> appropriate preconditioner or just use parallel direct solvers (which get
> >> slow for very large problems but can handle more ill-conditioning.n To use
> >> the MUMPS parallel direct solver you can configure PETSc with
> >> --download-mumps --download-scalapack --download-blacs and run the program
> >> with -pc_type lu -pc_factor_mat_solver_package mumps
> >>
> >> Barry
> >>
> >>
> >>
> >> On Sep 8, 2011, at 12:57 PM, amrit poudel wrote:
> >>
> >> > After running my simulation multiple times on a multiprocessor computer,
> >> > I've just verified that using an iterative solver (default gmres) in PETSc
> >> > to solve a linear system of equations (Cx = b) with more than 2 processors
> >> > ALWAYS leads to erroneous results. Running identical code with identical
> >> > settings except for the number of processors (set to 2) ALWAYS gives me
> >> > correct results.
> >> >
> >> > I am really not sure what the point is of including iterative solvers if
> >> > they produce erroneous results on a multiprocessor computer. The result I
> >> > get from the multiprocessor computer is complete garbage, so I am really
> >> > not talking about a small percentage of error here. Also, if somebody could
> >> > explain why the iterative solvers are error-prone on multiprocessors, that
> >> > would be highly appreciated.
> >> >
> >> > I am very hopeful that there is a way around this problem, because PETSc
> >> > is such a powerful and useful library that I really do not want to give up
> >> > on it and start something else from scratch.
> >> >
> >> >
> >> > Do you think that a DIRECT SOLVER would circumvent this problem? My
> >> > problem is that I have a very large system of equations and the sparse
> >> > coefficient matrix is huge (size > 1e+8). I assemble this matrix in
> >> > MATLAB, write it to a binary file, and read it into PETSc. So I really need
> >> > to be able to solve this system of equations on a cluster of computers
> >> > (which inherently has multiple processors and a distributed-memory setting).
> >> > Does this mean I am completely out of luck with PETSc's iterative solver
> >> > package and the only hope for me is the direct solver? I do have MUMPS
> >> > downloaded and compiled with PETSc, so I will give that a try and see what
> >> > results I obtain, but I am really surprised that iterative solvers are no
> >> > good in large multiprocessor settings.
> >> >
> >> > Any insights, suggestions/advice will be highly appreciated.
> >> >
> >> > Thanks.
> >> >
> >> > PS: I can attach my entire code and plots that compare the results obtained
> >> > by solving Cx = b on 2 processors vs. 6 or 12 processors if anybody wants
> >> > to take a look. I get garbage if I run the iterative solver on 12 processors.
> >>
> >
 		 	   		  