[petsc-users] multiply a mpibaij matrix by its block diagonal inverse

Xiangdong epscodes at gmail.com
Wed Feb 14 09:10:43 CST 2018


The idea goes back to the alternate-block-factorization (ABF) method

https://link.springer.com/article/10.1007/BF01932753

and is widely used in reservoir simulation, where the unknowns are
pressure and saturation. Although the coupled equations are parabolic, the
pressure equations/variables are more elliptic and the saturation equations
are more hyperbolic. One typically decouples the linear system in this way
to obtain a better (more elliptic) pressure matrix and then applies an AMG
preconditioner to the decoupled matrix.

https://link.springer.com/article/10.1007/s00791-016-0273-3
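
For concreteness, one way to realize "apply AMG on the decoupled pressure
block" in PETSc is PCFIELDSPLIT on the transformed matrix. A command-line
sketch (assuming the fields are known to the solver, e.g. through the DMDA,
that field 0 is pressure, and that the split prefixes are numbered rather
than named):

  -pc_type fieldsplit -pc_fieldsplit_type multiplicative \
  -fieldsplit_0_pc_type gamg \
  -fieldsplit_1_pc_type bjacobi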

Thanks.

Xiangdong

On Wed, Feb 14, 2018 at 9:49 AM, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:

>
>   Hmm, I have never had this idea presented to me, so I have no way to know
> if it is particularly good or bad. So essentially you transform the matrix,
> "decoupling the physics along the diagonal", and then do PCFIELDSPLIT
> instead of using PCFIELDSPLIT directly on the original equations.
>
>   Maybe in the long run this should be an option to PCFIELDSPLIT. In
> general we like the solvers to manage any transformations, not require
> transformations before calling the solvers. I have to think about this.
>
>    Barry
>
>
> > On Feb 14, 2018, at 8:29 AM, Xiangdong <epscodes at gmail.com> wrote:
> >
> > The reason for the operation invdiag(A)*A is to have a decoupled
> > matrix/physics for preconditioning. For example, after the transformation,
> > each diagonal block is the identity matrix (e.g. [1,0,0;0,1,0;0,0,1] for
> > bs=3). One can then extract a submatrix (e.g. corresponding to only the
> > first unknown) and apply a special preconditioner to the extracted/decoupled
> > matrix. The motivation is that the transformation yields a better decoupled
> > matrix that preserves the properties of the unknowns.
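> >
> > In block notation (assuming each bs x bs diagonal block A_ii is
> > invertible): with D = blkdiag(A_11, ..., A_NN), the transformed matrix is
> > B = D^{-1} A, so the diagonal blocks become B_ii = A_ii^{-1} A_ii = I,
> > while the off-diagonal blocks become A_ii^{-1} A_ij.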
> >
> > Thanks.
> >
> > Xiangdong
> >
> > On Tue, Feb 13, 2018 at 6:27 PM, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
> >
> >  In general you probably don't want to do this. Most good
> > preconditioners (like AMG) rely on the matrix having the "natural" scaling
> > that arises from the discretization, and doing a scaling like you describe
> > destroys that natural scaling. You can use PCPBJACOBI to apply a point-block
> > Jacobi preconditioner to the matrix without needing to do the scaling up
> > front. The ILU preconditioners for BAIJ matrices work directly with the
> > block structure, so again pre-scaling the matrix buys you nothing. PETSc
> > doesn't have any particularly efficient routines for computing what you
> > desire; the only way to get something truly efficient is to write the code
> > directly using the BAIJ data structure, which is doable but probably not
> > worth it.
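> >
> >  A minimal sketch of selecting point-block Jacobi instead of pre-scaling
> > (assuming ksp is a KSP already associated with the BAIJ matrix A;
> > equivalently, just run with -pc_type pbjacobi):
> >
> >    PC pc;
> >    KSPGetPC(ksp, &pc);          /* get the preconditioner context */
> >    PCSetType(pc, PCPBJACOBI);   /* point-block Jacobi on the bs x bs diagonal blocks */
> >    KSPSetFromOptions(ksp);      /* still allow command-line overrides */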
> >
> >   Barry
> >
> >
> > > On Feb 13, 2018, at 5:21 PM, Xiangdong <epscodes at gmail.com> wrote:
> > >
> > > Hello everyone,
> > >
> > > I have a block sparse matrix A created from a 3D DMDA. Before passing
> > > the matrix to the KSP solver, I want to apply a transformation to it,
> > > namely A := invdiag(A)*A, where invdiag(A) is the inverse of the block
> > > diagonal of A. What is the best way to get the transformed matrix?
> > >
> > > At the moment, I create a new Mat IDA = inv(diag(A)) by looping through
> > > each row, call MatMatMult to get B = invdiag(A)*A, and then destroy the
> > > temporary matrix B. However, I would prefer an in-place transformation if
> > > possible, i.e., without the additional matrix B, to save memory.
> > >
> > > Do you have any suggestions on computing invdiag(A)*A for an MPIBAIJ
> > > matrix?
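> > >
> > > A minimal sketch of this kind of transformation (untested, error checking
> > > omitted; it uses MatInvertBlockDiagonal() to get the local block inverses
> > > rather than extracting and inverting each bs x bs block by hand, and the
> > > per-block storage order should be checked against the documentation):
> > >
> > >   PetscInt           bs, rstart, rend, ib, ib0, ibend;
> > >   const PetscScalar *vals;
> > >   Mat                IDA, B;
> > >
> > >   MatGetBlockSize(A, &bs);
> > >   MatGetOwnershipRange(A, &rstart, &rend);
> > >   MatInvertBlockDiagonal(A, &vals);               /* inverses of the local bs x bs diagonal blocks */
> > >
> > >   MatDuplicate(A, MAT_DO_NOT_COPY_VALUES, &IDA);  /* same nonzero pattern, zero entries */
> > >   MatSetOption(IDA, MAT_ROW_ORIENTED, PETSC_FALSE); /* blocks assumed stored column-major per block */
> > >   ib0 = rstart/bs; ibend = rend/bs;
> > >   for (ib = ib0; ib < ibend; ib++) {
> > >     MatSetValuesBlocked(IDA, 1, &ib, 1, &ib, vals + (ib - ib0)*bs*bs, INSERT_VALUES);
> > >   }
> > >   MatAssemblyBegin(IDA, MAT_FINAL_ASSEMBLY);
> > >   MatAssemblyEnd(IDA, MAT_FINAL_ASSEMBLY);
> > >
> > >   /* B = invdiag(A)*A; if MatMatMult does not support the BAIJ format,
> > >      convert to AIJ first with MatConvert() */
> > >   MatMatMult(IDA, A, MAT_INITIAL_MATRIX, PETSC_DEFAULT, &B);
> > >   MatDestroy(&IDA);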
> > >
> > > Thanks for your help.
> > >
> > > Best,
> > > Xiangdong
> > >
> > >
> > >
> > >
> >
> >
>
>