[petsc-users] multiply an MPIBAIJ matrix by its block diagonal inverse

Xiangdong epscodes at gmail.com
Wed Feb 14 08:45:52 CST 2018


Yes, the preconditioner is a multi-stage PCCOMPOSITE-style preconditioner. In
one stage, I precondition only the submatrix corresponding to certain physical
unknowns. To obtain a better-decoupled submatrix, I need to apply that
invdiag(A)*A operation first.
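For concreteness, here is a minimal sketch of such a multi-stage PCCOMPOSITE
setup. The stage choices mentioned in the comments, the option prefix, and the
function name SetupCompositePC are illustrative assumptions, not taken from
this thread:

  #include <petscksp.h>

  /* Sketch only: configure a multiplicative composite preconditioner whose
     stages are chosen at run time.  Error handling kept minimal. */
  PetscErrorCode SetupCompositePC(KSP ksp)
  {
    PC             pc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCCOMPOSITE);CHKERRQ(ierr);
    ierr = PCCompositeSetType(pc, PC_COMPOSITE_MULTIPLICATIVE);CHKERRQ(ierr);
    /* Pick the stages at run time, e.g.
         -pc_composite_pcs pbjacobi,ilu
       and configure each stage through its own options prefix. */
    ierr = PCSetFromOptions(pc);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }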

Thanks.

Xiangdong



On Wed, Feb 14, 2018 at 9:39 AM, Matthew Knepley <knepley at gmail.com> wrote:

> On Wed, Feb 14, 2018 at 9:29 AM, Xiangdong <epscodes at gmail.com> wrote:
>
>> The reason for the operation invdiag(A)*A is to obtain a decoupled
>> matrix/physics for preconditioning. For example, after the transformation
>> each diagonal block is the identity matrix (e.g. [1,0,0;0,1,0;0,0,1] for
>> bs=3). One can then extract a submatrix (e.g. corresponding to only the
>> first unknown) and apply a special preconditioner to the extracted/decoupled
>> matrix. The motivation is that, after the transformation, one obtains a
>> better-decoupled matrix that preserves the properties of the unknowns.
>>
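A minimal sketch of the extraction step described above, assuming a scaled
matrix B = invdiag(A)*A is already available and its block size bs is known;
the strided index set selects the row/column of the first unknown in every
bs-sized block (the function and variable names are illustrative):

  #include <petscmat.h>

  /* Sketch only: pull out the decoupled sub-problem for the first unknown. */
  PetscErrorCode ExtractFirstUnknown(Mat B, PetscInt bs, Mat *Bsub)
  {
    IS             is;
    PetscInt       rstart, rend, nblocks;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = MatGetOwnershipRange(B, &rstart, &rend);CHKERRQ(ierr);
    nblocks = (rend - rstart)/bs;   /* one entry per block: its first unknown */
    ierr = ISCreateStride(PetscObjectComm((PetscObject)B), nblocks, rstart, bs, &is);CHKERRQ(ierr);
    /* Using the same index set for rows and columns gives the (1,1) sub-block. */
    ierr = MatCreateSubMatrix(B, is, is, MAT_INITIAL_MATRIX, Bsub);CHKERRQ(ierr);
    ierr = ISDestroy(&is);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }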
>
> Barry's point is that this operation is usually rolled into the
> preconditioner itself, as in his example of PBJACOBI. Are you building this
> preconditioner yourself?
>
>    Matt
>
>
>> Thanks.
>>
>> Xiangdong
>>
>> On Tue, Feb 13, 2018 at 6:27 PM, Smith, Barry F. <bsmith at mcs.anl.gov>
>> wrote:
>>
>>>
>>>  In general you probably don't want to do this. Most good
>>> preconditioners (like AMG) rely on the matrix having the "natural" scaling
>>> that arises from the discretization, and a scaling like the one you
>>> describe destroys that natural scaling. You can use PCPBJACOBI to apply a
>>> point-block Jacobi preconditioner to the matrix without needing to do the
>>> scaling up front. The ILU preconditioners for BAIJ matrices work directly
>>> with the block structure, so again pre-scaling the matrix buys you nothing.
>>> PETSc doesn't have any particularly efficient routines for computing what
>>> you want; the only way to get something truly efficient is to write the
>>> code directly against the BAIJ data structure, which is doable but probably
>>> not worth it.
>>>
>>>   Barry
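For reference, the PCPBJACOBI route described above needs no explicit scaling
of the matrix at all; a minimal sketch (the function name is illustrative, and
KSP creation and operator setup are assumed to happen elsewhere):

  #include <petscksp.h>

  /* Sketch only: point-block Jacobi inverts the bs x bs diagonal blocks
     inside the preconditioner, so the matrix itself is left untouched. */
  PetscErrorCode UsePointBlockJacobi(KSP ksp)
  {
    PC             pc;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
    ierr = PCSetType(pc, PCPBJACOBI);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* equivalently: -pc_type pbjacobi */
    PetscFunctionReturn(0);
  }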
>>>
>>>
>>> > On Feb 13, 2018, at 5:21 PM, Xiangdong <epscodes at gmail.com> wrote:
>>> >
>>> > Hello everyone,
>>> >
>>> > I have a block sparse matrix A created from a 3d DMDA. Before passing
>>> > the matrix to the KSP solver, I want to apply a transformation to it:
>>> > namely A := invdiag(A)*A, where invdiag(A) is the inverse of the block
>>> > diagonal of A. What is the best way to get the transformed matrix?
>>> >
>>> > At the moment, I create a new Mat IDA = inv(diag(A)) by looping through
>>> > each row, call MatMatMult to get B = invdiag(A)*A, and then destroy the
>>> > temporary matrix. However, I would prefer an in-place transformation if
>>> > possible, i.e., without the additional matrix B, to save memory.
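For reference, a minimal sketch of one way to build B = invdiag(A)*A
explicitly, using MatInvertBlockDiagonal() instead of looping over rows by
hand. It is untested: in particular, the column-major handling of the inverted
blocks and whether MatMatMult has a kernel for the BAIJ formats in the PETSc
version in use (a MatConvert to AIJ may be needed) should be checked.

  #include <petscmat.h>

  /* Sketch only: form B = invdiag(A)*A for an MPIBAIJ matrix A. */
  PetscErrorCode BlockDiagonalScale(Mat A, Mat *B)
  {
    Mat                IDA;
    const PetscScalar *idiag;
    PetscInt           bs, m, n, M, N, rstart, rend, i, nblocks;
    PetscErrorCode     ierr;

    PetscFunctionBeginUser;
    ierr = MatGetBlockSize(A, &bs);CHKERRQ(ierr);
    ierr = MatGetLocalSize(A, &m, &n);CHKERRQ(ierr);
    ierr = MatGetSize(A, &M, &N);CHKERRQ(ierr);
    ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
    nblocks = (rend - rstart)/bs;

    /* Inverted diagonal blocks, returned in column-major order per block. */
    ierr = MatInvertBlockDiagonal(A, &idiag);CHKERRQ(ierr);

    /* Assemble IDA = invdiag(A): one bs x bs block per block row. */
    ierr = MatCreateBAIJ(PetscObjectComm((PetscObject)A), bs, m, n, M, N, 1, NULL, 0, NULL, &IDA);CHKERRQ(ierr);
    ierr = MatSetOption(IDA, MAT_ROW_ORIENTED, PETSC_FALSE);CHKERRQ(ierr); /* values are column-major */
    for (i = 0; i < nblocks; i++) {
      PetscInt brow = rstart/bs + i;
      ierr = MatSetValuesBlocked(IDA, 1, &brow, 1, &brow, idiag + i*bs*bs, INSERT_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(IDA, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(IDA, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

    /* B = invdiag(A)*A, then drop the temporary block-diagonal matrix. */
    ierr = MatMatMult(IDA, A, MAT_INITIAL_MATRIX, PETSC_DEFAULT, B);CHKERRQ(ierr);
    ierr = MatDestroy(&IDA);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }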
>>> >
>>> > Do you have any suggestions on computing invdiag(A)*A for an MPIBAIJ matrix?
>>> >
>>> > Thanks for your help.
>>> >
>>> > Best,
>>> > Xiangdong
>>> >
>>> >
>>> >
>>> >
>>>
>>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>