<div dir="ltr">Thanks a lot, Barry! I see that you had implemented the bs=3 special case. I will play with these codes and add at least bs=2 case and try to get it working for parallel baij. I will let you know the update.<div><br></div><div>Thank you.</div><div><br></div><div>Xiangdong </div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 14, 2018 at 2:57 PM, Smith, Barry F. <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
In the PETSc git branch barry/feature-baij-blockdiagonal-scale I have done the "heavy lifting" for what you need. See https://bitbucket.org/petsc/petsc/branch/barry/feature-baij-blockdiagonal-scale

It scales the SeqBAIJ matrix by its block diagonal. You will need to write a routine that also scales the right-hand-side vector by the block diagonal; then you can try the preconditioner for sequential code. Write something like VecBlockDiagonalScale(Vec, const PetscScalar *). You can get the block size from the vector.
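Something along these lines might work as a starting point (untested sketch; it assumes the inverted diagonal blocks are passed in contiguously, one bs*bs block per block row, stored column-major; adjust to whatever layout you actually produce):

  #include <petscvec.h>

  /* Sketch: scale each length-bs block of x by the corresponding bs x bs block
     in d.  d is assumed to hold one block per block row, contiguously, in
     column-major order. */
  PetscErrorCode VecBlockDiagonalScale(Vec x, const PetscScalar *d)
  {
    PetscErrorCode ierr;
    PetscInt       bs, n, i, j, k;
    PetscScalar    *xa, *tmp;

    PetscFunctionBegin;
    ierr = VecGetBlockSize(x, &bs);CHKERRQ(ierr);
    ierr = VecGetLocalSize(x, &n);CHKERRQ(ierr);
    ierr = VecGetArray(x, &xa);CHKERRQ(ierr);
    ierr = PetscMalloc1(bs, &tmp);CHKERRQ(ierr);
    for (i = 0; i < n; i += bs) {
      const PetscScalar *D = d + (i/bs)*bs*bs;      /* block for this block row */
      for (j = 0; j < bs; j++) {
        tmp[j] = 0.0;
        for (k = 0; k < bs; k++) tmp[j] += D[j + k*bs]*xa[i + k];
      }
      for (j = 0; j < bs; j++) xa[i + j] = tmp[j];
    }
    ierr = PetscFree(tmp);CHKERRQ(ierr);
    ierr = VecRestoreArray(x, &xa);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }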
Later you or I can add the parallel version (not much more difficult). I don't have time to work on it now.

Let us know if you have any difficulties.

Barry
> On Feb 14, 2018, at 9:10 AM, Xiangdong <epscodes@gmail.com> wrote:
>
> The idea goes back to the alternate-block-factorization (ABF) method
>
> https://link.springer.com/article/10.1007/BF01932753
>
> and is widely used in reservoir simulation, where the unknowns are pressure and saturation. Although the coupled equations are parabolic, the pressure equations/variables are more elliptic and the saturation equations are more hyperbolic. One typically decouples the transformed linear system to obtain a better (more elliptic) pressure matrix and then applies an AMG preconditioner to the decoupled matrix.
>
> https://link.springer.com/article/10.1007/s00791-016-0273-3
>
> Thanks.
>
> Xiangdong
>
> On Wed, Feb 14, 2018 at 9:49 AM, Smith, Barry F. <bsmith@mcs.anl.gov> wrote:
>
> Hmm, I have never had this idea presented to me, so I have no way to know whether it is particularly good or bad. So essentially you transform the matrix, "decoupling the physics along the diagonal," and then do PCFIELDSPLIT instead of using PCFIELDSPLIT directly on the original equations.
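>
> For concreteness, the second step might look like this on the command line, applied to the scaled matrix (assuming an interleaved layout with block size 3; the per-field solver choices are only placeholders):
>
>     -pc_type fieldsplit -pc_fieldsplit_block_size 3 \
>     -fieldsplit_0_pc_type gamg \
>     -fieldsplit_1_pc_type sor -fieldsplit_2_pc_type sor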
>
> Maybe in the long run this should be an option to PCFIELDSPLIT. In general we like the solvers to manage any transformations rather than requiring transformations before calling the solvers. I have to think about this.
>
> Barry
>
> > On Feb 14, 2018, at 8:29 AM, Xiangdong <epscodes@gmail.com> wrote:
> >
> > The reason for the operation invdiag(A)*A is to obtain a decoupled matrix/physics for preconditioning. For example, after the transformation each diagonal block is the identity matrix (e.g. [1,0,0;0,1,0;0,0,1] for bs=3). One can then extract a submatrix (e.g. corresponding to only the first unknown) and apply specialized preconditioners to the extracted/decoupled matrix. The motivation is that, after the transformation, one obtains a better-decoupled matrix that preserves the properties of the unknowns.
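> >
> > For illustration, a rough sketch of pulling out the first-unknown (e.g. pressure) submatrix after the scaling; it assumes A is the scaled matrix, the unknowns are interleaved with block size bs, and MatCreateSubMatrix() supports the matrix type (for BAIJ one may need to convert to AIJ first):
> >
> >     IS       is;
> >     Mat      Ap;                 /* decoupled matrix for the first unknown */
> >     PetscInt bs, rstart, rend;
> >
> >     ierr = MatGetBlockSize(A, &bs);CHKERRQ(ierr);
> >     ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
> >     /* pick every bs-th row/column, starting at the first local row */
> >     ierr = ISCreateStride(PetscObjectComm((PetscObject)A), (rend - rstart)/bs, rstart, bs, &is);CHKERRQ(ierr);
> >     ierr = MatCreateSubMatrix(A, is, is, MAT_INITIAL_MATRIX, &Ap);CHKERRQ(ierr);
> >     /* ... hand Ap to an AMG preconditioner (e.g. PCGAMG) ... */
> >     ierr = ISDestroy(&is);CHKERRQ(ierr);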
> >
> > Thanks.
> >
> > Xiangdong
> >
> > On Tue, Feb 13, 2018 at 6:27 PM, Smith, Barry F. <bsmith@mcs.anl.gov> wrote:
> >
> > In general you probably don't want to do this. Most good preconditioners (like AMG) rely on the matrix having the "natural" scaling that arises from the discretization, and a scaling like the one you describe destroys that natural scaling. You can use PCPBJACOBI to apply a point-block Jacobi preconditioner to the matrix without needing to do the scaling up front. The ILU preconditioners for BAIJ matrices work directly with the block structure, so again pre-scaling the matrix buys you nothing. PETSc doesn't have any particularly efficient routines for computing what you want; the only way to get something truly efficient is to write code directly against the BAIJ data structure, which is doable but probably not worth it.
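> >
> > For example (assuming ksp is the solver being configured), point-block Jacobi can be selected at runtime with -pc_type pbjacobi, or explicitly in code:
> >
> >     PC pc;
> >     KSPGetPC(ksp, &pc);
> >     PCSetType(pc, PCPBJACOBI);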
> >
> > Barry
> >
> >
> > > On Feb 13, 2018, at 5:21 PM, Xiangdong <epscodes@gmail.com> wrote:
> > >
> > > Hello everyone,
> > >
> > > I have a block sparse matrix A created from a 3d DMDA. Before passing the matrix to the KSP solver, I want to apply a transformation to it, namely A := invdiag(A)*A, where invdiag(A) is the inverse of the block diagonal of A. What is the best way to get the transformed matrix?
> > >
> > > At the moment, I create a new matrix IDA = invdiag(A) by looping over each block row, call MatMatMult to get B = invdiag(A)*A, and then destroy the temporary matrix B. However, I would prefer an in-place transformation if possible, i.e., without the additional matrix B, to save memory.
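> > >
> > > For reference, a sketch of one way to form IDA and the product (the helper name FormScaledOperator is made up; it assumes MatInvertBlockDiagonal() is available for the matrix type, and MatMatMult() may require converting to AIJ first):
> > >
> > >     PetscErrorCode FormScaledOperator(Mat A, Mat *B)
> > >     {
> > >       PetscErrorCode     ierr;
> > >       Mat                IDA;
> > >       const PetscScalar *idiag;
> > >       PetscInt           bs, rstart, rend, i;
> > >
> > >       PetscFunctionBegin;
> > >       ierr = MatGetBlockSize(A, &bs);CHKERRQ(ierr);
> > >       ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
> > >       ierr = MatInvertBlockDiagonal(A, &idiag);CHKERRQ(ierr);  /* inverted bs x bs diagonal blocks */
> > >       ierr = MatCreateBAIJ(PetscObjectComm((PetscObject)A), bs, rend - rstart, rend - rstart,
> > >                           PETSC_DETERMINE, PETSC_DETERMINE, 1, NULL, 0, NULL, &IDA);CHKERRQ(ierr);
> > >       for (i = rstart/bs; i < rend/bs; i++) {
> > >         /* NOTE: check whether the blocks returned by MatInvertBlockDiagonal() are
> > >            row- or column-major; row-major insertion is assumed here */
> > >         ierr = MatSetValuesBlocked(IDA, 1, &i, 1, &i, idiag + (i - rstart/bs)*bs*bs, INSERT_VALUES);CHKERRQ(ierr);
> > >       }
> > >       ierr = MatAssemblyBegin(IDA, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
> > >       ierr = MatAssemblyEnd(IDA, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
> > >       ierr = MatMatMult(IDA, A, MAT_INITIAL_MATRIX, PETSC_DEFAULT, B);CHKERRQ(ierr);
> > >       ierr = MatDestroy(&IDA);CHKERRQ(ierr);
> > >       PetscFunctionReturn(0);
> > >     }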
> > >
> > > Do you have any suggestions on computing invdiag(A)*A for an MPIBAIJ matrix?
> > >
> > > Thanks for your help.
> > >
> > > Best,
> > > Xiangdong