[petsc-users] Problem when solving matrices with identity matrices as diagonal block domains

Adrián Amor aamor at pa.uc3m.es
Fri Feb 2 02:27:55 CST 2018


Thanks for the clarification, Barry! And Stefano, thanks for your suggestion!

2018-02-01 20:08 GMT+01:00 Stefano Zampini <stefano.zampini at gmail.com>:

> Note that you don’t need to assemble the 2x2 block matrix, as the solution
> can be computed via a Schur complement argument.
>
> Given the matrix [I  B; C I] and rhs [f1, f2], you can solve S x_2 = f2 -
> C f1, with S = I - CB, and then obtain x_1 = f1 - B x_2.
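>
> Spelled out: the block system is x_1 + B x_2 = f1 and C x_1 + x_2 = f2.
> The first equation gives x_1 = f1 - B x_2, and substituting this into the
> second gives (I - C B) x_2 = f2 - C f1, i.e. the Schur complement system
> above.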
>
> On Feb 1, 2018, at 8:34 PM, Adrián Amor <aamor at pa.uc3m.es> wrote:
>
> Thanks, it's true that with MAT_IGNORE_ZERO_ENTRIES I get the same
> performance. I assumed that by explicitly calling KSPSetType(petsc_ksp,
> KSPBCGS, petsc_ierr) it wouldn't use the direct solver from PETSc. Thank
> you for the detailed response, it was really helpful!
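>
> In case it is useful to someone else, this is roughly how the solver is set
> up on my side, and running with -ksp_view then prints the KSP and PC that
> are actually used (a sketch; petsc_mat, rhs and sol are just my own names):
>
>    call KSPCreate(PETSC_COMM_SELF, petsc_ksp, petsc_ierr)
>    call KSPSetOperators(petsc_ksp, petsc_mat, petsc_mat, petsc_ierr)
>    call KSPSetType(petsc_ksp, KSPBCGS, petsc_ierr)
>    ! picks up command-line options such as -ksp_view or -pc_type
>    call KSPSetFromOptions(petsc_ksp, petsc_ierr)
>    call KSPSolve(petsc_ksp, rhs, sol, petsc_ierr)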
>
> 2018-02-01 16:20 GMT+01:00 Smith, Barry F. <bsmith at mcs.anl.gov>:
>
>>
>> 1)   By default, if you call MatSetValues() with a zero element, the sparse
>> Mat will store the 0 in the matrix. If you do not call it with a zero
>> element for a location, then no entry is created for that location.
>>
>> 2)   Many of the preconditioners in PETSc are based on the "nonzero entries"
>> of sparse matrices (here a nonzero entry simply means any location in a
>> matrix where a value is stored -- even if the value is zero). In particular,
>> ILU(0) does an LU factorization on the "nonzero" structure of the matrix.
>>
>> Hence in your case it is doing ILU(0) on a dense nonzero structure, since
>> you set all the entries of the matrix, and thus it produces a direct solver.
>>
>> The lesson is that you should only set true nonzero values into the
>> matrix, not zero entries. There is a MatOption, MAT_IGNORE_ZERO_ENTRIES,
>> which, if you set it, prevents the matrix from creating a location for
>> zero values. If you set this on the matrix first, then your two approaches
>> will result in the same preconditioner and the same iterative convergence.
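>>
>> For example, something along these lines (a sketch; n, nz, irow, icol, and
>> vals stand in for whatever you use in your own code):
>>
>>    call MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, nz, PETSC_NULL_INTEGER, A, ierr)
>>    ! must be set before the MatSetValues() calls so explicit zeros are dropped
>>    call MatSetOption(A, MAT_IGNORE_ZERO_ENTRIES, PETSC_TRUE, ierr)
>>    ! ... loop over your entries; zero values passed in are now ignored ...
>>    call MatSetValues(A, 1, irow, 1, icol, vals, INSERT_VALUES, ierr)
>>    call MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY, ierr)
>>    call MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY, ierr)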
>>
>>   Barry
>>
>> > On Feb 1, 2018, at 2:45 AM, Adrián Amor <aamor at pa.uc3m.es> wrote:
>> >
>> > Hi,
>> >
>> > First, I am a novice in the use of PETSc, so apologies if this is a
>> > newbie mistake, but maybe you can help me! I am solving a matrix of the
>> > form
>> >    [ I    B ]
>> >    [ C    I ]
>> > where I is the identity and B and C are blocks that are about 50% dense.
>> >
>> > I have found a problem in the performance of the solver when I treat
>> > the diagonal blocks as sparse matrices in Fortran. In other words, I use
>> > the routine MatCreateSeqAIJ to preallocate the matrix, and then I have
>> > tried two approaches to fill it (sketched below):
>> > 1. Calling MatSetValues for every entry of the identity blocks. I mean,
>> > if the identity block has dimension 22x22, I call MatSetValues 22*22
>> > times, including the zero entries.
>> > 2. Calling MatSetValues only once per row. If the identity block has
>> > dimension 22x22, I call MatSetValues only 22 times.
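>> >
>> > Just to be concrete, this is roughly what the two cases look like for a
>> > standalone 22x22 identity block (a sketch with made-up names; i and j
>> > are PetscInt, and irow, icol, v are length-1 arrays of PetscInt and
>> > PetscScalar):
>> >
>> >    ! Case 1: one call per entry, zeros included
>> >    do i = 0, 21
>> >       do j = 0, 21
>> >          irow(1) = i
>> >          icol(1) = j
>> >          v(1) = 0.0
>> >          if (i == j) v(1) = 1.0
>> >          call MatSetValues(A, 1, irow, 1, icol, v, INSERT_VALUES, ierr)
>> >       end do
>> >    end do
>> >
>> >    ! Case 2: one call per row, passing only the true nonzero (the diagonal)
>> >    do i = 0, 21
>> >       irow(1) = i
>> >       v(1) = 1.0
>> >       call MatSetValues(A, 1, irow, 1, irow, v, INSERT_VALUES, ierr)
>> >    end do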
>> >
>> > With case 1, the iterative solver (I have tried with the default one and
>> > KSPBCGS) takes only one iteration to converge, and it converges with a
>> > residual of 1E-14. However, with case 2, the iterative solver takes,
>> > say, 9 iterations and converges with a residual of 1E-04. The matrices
>> > that are loaded into PETSc are exactly the same (I have written them to
>> > a file from the matrix that is solved, obtaining it with MatGetValues).
>> >
>> > What can be happening? I know that the fact that it takes only one
>> > iteration is because the iterative solver is "lucky" and its first guess
>> > is the right one, but I don't understand the difference in performance,
>> > since the matrix is the same. I would like to use case 2, since my
>> > matrices are quite large and it is much more efficient.
>> >
>> > Please help me! Thanks!
>> >
>> > Adrian.
>>
>>
>
>