[petsc-users] Use block Jacobi preconditioner with SNES

Dave May dave.mayhem23 at gmail.com
Mon Aug 27 03:37:13 CDT 2018


On Mon, 27 Aug 2018 at 10:12, Ali Reza Khaz'ali <arkhazali at cc.iut.ac.ir>
wrote:

>  > Okay, interesting.  I take it you either are not running in parallel
> or need to have several subdomains (of varying size) per process.
>  > One approach would be to use PCASM (with zero overlap, it is
> equivalent to Block Jacobi) and then use -mat_partitioning_type to
> select a partitioner (could be a general graph partitioner or could be a
> custom implementation that you provide).  I don't know if this would
> feel overly clumsy or ultimately be a cleaner and more generic approach.
>
> Thanks for the answer. I'm still running a serial code. I plan to
> parallelize it after finding a suitable solver. Unfortunately, I do not
> know how to use PCASM, and therefore, I'm going to learn it. In
> addition, I found another possible solution with MATNEST. However, I do
> not know if MATNEST is suitable for my application or if it can be used
> with SNES. I'd be grateful if you could kindly guide me about it.
>
>
>
>  > Lets discuss this point a bit further. I assume your system is
> sparse. Sparse direct solvers can solve systems fairly efficiently for
> hundreds of thousands of unknowns. How big do you want? Also, do you
> plan on having more than 500K unknowns per process? If not, why not just
> use sparse direct solvers on each process?
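As a hedged illustration of this suggestion (`./myapp` is again a placeholder, and MUMPS is just one of several sparse direct packages PETSc can be configured with), a per-process sparse direct solve can be requested entirely from the command line:

```shell
# ./myapp is a placeholder application binary.
# One block per process; each block is solved with a sparse
# direct LU factorization from an external package.
./myapp -pc_type bjacobi -sub_ksp_type preonly -sub_pc_type lu \
        -sub_pc_factor_mat_solver_type mumps
```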
>
> Thanks for the answer. My system is sparse, and it is also a
> variable-sized block matrix. For example, for a small simulation, I have about 7K
> unknowns. For ONE TIME STEP, Intel MKL PARDISO took about 30 minutes to
> solve my system, while occupying about 2.5GB out of my 4GB RAM. Since I
> have to simulate at least 10000 time steps, the runtime (and the
> required RAM) would be unacceptable.
>
>
>
>  > If none of the suggestions provided is to your taste, why not just
> build the preconditioner matrix yourself? Seems you have precise
> requirements and the relevant info of the individual blocks, so you
> should be able to construct the preconditioner, either using A (original
> operator) or directly from the discrete problem.
>
> Thanks for your answer. As I stated, I have built a preconditioner for
> it.


I mean directly build the preconditioner with the required block-diagonal
structure. Then you don't need to use something like PCBJACOBI or PCASM to
extract the block-diagonal operator from your original operator.
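For variable-sized diagonal blocks specifically, a minimal sketch might look like the following (the function name and the `nblocks`/`blklens` arguments are assumptions about how the application stores its block structure; the PETSc calls themselves are standard):

```c
#include <petscsnes.h>

/* Hedged sketch: configure a variable-sized block Jacobi preconditioner
   on the KSP inside a SNES. nblocks and blklens[] (the size of each
   diagonal block) are assumed to come from the application's own
   knowledge of its block structure. */
PetscErrorCode SetupVariableBlockJacobi(SNES snes, PetscInt nblocks,
                                        const PetscInt blklens[])
{
  PetscErrorCode ierr;
  KSP            ksp;
  PC             pc;

  ierr = SNESGetKSP(snes, &ksp);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCBJACOBI);CHKERRQ(ierr);
  /* Declare the (possibly unequal) sizes of the diagonal blocks */
  ierr = PCBJacobiSetLocalBlocks(pc, nblocks, blklens);CHKERRQ(ierr);
  /* Each block can then be solved directly at run time with
     -sub_ksp_type preonly -sub_pc_type lu */
  return 0;
}
```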

> My preconditioner does not require much memory; however, it has
> low performance (even on a GPU). Therefore, I'm trying to use PETSc
> functions and modules to solve the system more efficiently. I do not
> think there is any other library more suited than PETSc for the job.
>
>
>
> Best Regards,
> Ali
>