[petsc-users] Use block Jacobi preconditioner with SNES

Matthew Knepley knepley at gmail.com
Mon Aug 27 07:50:27 CDT 2018


On Mon, Aug 27, 2018 at 4:12 AM Ali Reza Khaz'ali <arkhazali at cc.iut.ac.ir>
wrote:

>  > Okay, interesting.  I take it you either are not running in parallel
> or need to have several subdomains (of varying size) per process.
>  > One approach would be to use PCASM (with zero overlap, it is
> equivalent to Block Jacobi) and then use -mat_partitioning_type to
> select a partitioner (could be a general graph partitioner or could be a
> custom implementation that you provide).  I don't know if this would
> feel overly clumsy or ultimately be a cleaner and more generic approach.
>
> Thanks for the answer. I'm still running a serial code. I plan to
> parallelize it after finding a suitable solver. Unfortunately, I do not
> know how to use PCASM, and therefore, I'm going to learn it. In
> addition, I found another possible solution with MATNEST. However, I do
> not know if MATNEST is suitable for my application or if it can be used
> with SNES. I'd be grateful if you could kindly guide me about it.
>

MATNEST is only a storage optimization, to be applied after everything
works correctly. It has nothing to do with solving.
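
For illustration, here is a minimal sketch of how variable-sized blocks
could be registered as zero-overlap ASM subdomains under SNES. The helper
name SetupVariableBlockASM and the inputs nblocks and blockSizes[] are
hypothetical placeholders for your block structure:

  #include <petscsnes.h>

  /* Sketch: make PCASM with zero overlap act as variable-sized block
     Jacobi.  nblocks and blockSizes[] describe your blocks. */
  PetscErrorCode SetupVariableBlockASM(SNES snes, PetscInt nblocks,
                                       const PetscInt blockSizes[])
  {
    KSP            ksp;
    PC             pc;
    IS            *is;
    PetscInt       i, start = 0;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = SNESGetKSP(snes, &ksp);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCASM);CHKERRQ(ierr);
    ierr = PCASMSetOverlap(pc, 0);CHKERRQ(ierr); /* zero overlap == block Jacobi */
    ierr = PetscMalloc1(nblocks, &is);CHKERRQ(ierr);
    for (i = 0; i < nblocks; i++) {              /* one contiguous IS per block */
      ierr = ISCreateStride(PETSC_COMM_SELF, blockSizes[i], start, 1, &is[i]);CHKERRQ(ierr);
      start += blockSizes[i];
    }
    ierr = PCASMSetLocalSubdomains(pc, nblocks, is, NULL);CHKERRQ(ierr);
    for (i = 0; i < nblocks; i++) {              /* PCASM keeps its own references */
      ierr = ISDestroy(&is[i]);CHKERRQ(ierr);
    }
    ierr = PetscFree(is);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

Running with -sub_ksp_type preonly -sub_pc_type lu would then apply a
direct factorization on each block.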


>  > Let's discuss this point a bit further. I assume your system is
> sparse. Sparse direct solvers can solve systems fairly efficiently for
> hundreds of thousands of unknowns. How big do you want? Also, do you
> plan on having more than 500K unknowns per process? If not, why not just
> use sparse direct solvers on each process?
>
> Thanks for the answer. My system is sparse, with a variable-sized block
> structure. For example, for a small simulation, I have about 7K
> unknowns. For ONE TIME STEP, Intel MKL PARDISO took about 30 minutes to
> solve my system, while occupying about 2.5GB out of my 4GB RAM. Since I
> have to simulate at least 10000 time steps, the runtime (and the
> required RAM) would be unacceptable.
>

Something is very wrong there. I advise you to also try SuperLU and MUMPS.
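
For reference, assuming PETSc was configured with SuperLU and MUMPS
support (e.g., --download-superlu --download-mumps), both can be tried at
run time without code changes:

  -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type superlu
  -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps

(In PETSc versions before 3.9 the option is spelled
-pc_factor_mat_solver_package.)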


>  > If none of the suggestions provided is to your taste, why not just
> build the preconditioner matrix yourself? Seems you have precise
> requirements and the relevant info of the individual blocks, so you
> should be able to construct the preconditioner, either using A (original
> operator) or directly from the discrete problem.
>
> Thanks for your answer. As I stated, I have built a preconditioner for
> it. My preconditioner does not require much memory; however, it performs
> poorly (even on a GPU). Therefore, I'm trying to use PETSc
> functions and modules to solve the system more efficiently. I do not
> think there is any other library more suited than PETSc for the job.
>

Iterative methods depend sensitively on the equation, whereas direct
solvers almost do not care. The first step in designing an iterative
solver that PETSc can implement is to find one in the literature that has
worked for your problem.
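
Once a candidate is chosen, PETSc's runtime options make it cheap to
experiment and to inspect what solver was actually used, e.g. (where
./yoursim is a placeholder for your application):

  ./yoursim -snes_monitor -ksp_monitor_true_residual -ksp_converged_reason -snes_view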

   Matt


> Best Regards,
> Ali
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/