parallel solvers and matrix structure

Matthew Knepley knepley at gmail.com
Sun Jan 25 20:42:31 CST 2009


On Sun, Jan 25, 2009 at 8:19 PM, Khan, Irfan <irfan.khan at gatech.edu> wrote:
> Dear Petsc team
>
> Firstly, thanks for developing PETSc. I have been using it to parallelize a linear finite element code to couple with a parallel lattice Boltzmann code, and it has helped me a lot until now.
>
> I have some questions about the way the parallel solvers handle matrices with varying numbers of zeros in the matrix.
>
> I have found that the more MatSetValue() calls I make, the slower the code gets. Therefore I initialize the parallel stiffness matrix to 0.0 and then fill in values using an "if" condition that skips zero entries. This greatly reduces the number of MatSetValue() calls (by a factor of 5-15). I also estimate the number of nonzero entries in the parallel matrix, which I create with MatCreateMPIAIJ. I have attached the "-info" output from two runs, one with the "if" condition and one without (at the end of the email).

1) The real problem here, I think, is not the number of times you call
MatSetValues(), but the fact that your matrix preallocation is
incorrect when you do not exclude some of the zero values. If you fix
the preallocation, the speed should be about the same in both cases.

> I have also compared the matrices generated with and without the "if" condition. They are the same except for the extra zero entries in the latter case. But what I realize is that during solving, the matrix generated with the "if" condition does not converge, while the matrix generated without the "if" condition converges in 17 iterations. I use KSPCG.

2) I am guessing that you are using ILU. This depends on the nonzero
pattern of the matrix, and thus will change between these two cases.
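One way to test this guess (assuming the default PC is indeed ILU and the executable name below is hypothetical) is to switch to a preconditioner such as Jacobi, which depends only on the diagonal and not on the nonzero pattern, so both matrices should then converge identically:

```
# Jacobi instead of the default ILU; -ksp_monitor shows the
# residual history so the two runs can be compared directly
mpiexec -n 2 ./myapp -ksp_type cg -pc_type jacobi -ksp_monitor
```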

  Matt

> It would help me a lot to get suggestions on how to use MatSetValue() optimally, on why KSPCG fails to converge and whether something can be done about it, and on whether I should avoid using an "if" condition to eliminate zero entries when filling the stiffness matrix.
>
> I will be glad to send any other information if needed.
>
> Thanks in advance
> Best Regards
> Irfan
>
>
>
>
> WITHOUT "IF" CONDITION
> < [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 3464
> < [0] MatAssemblyBegin_MPIAIJ(): Stash has 432 entries, uses 0 mallocs.
> < [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 66 X 66; storage space: 306 unneeded,4356 used
> < [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 216
> < [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 66
> < [0] Mat_CheckInode(): Found 14 nodes of 66. Limit used: 5. Using Inode routines
> < [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 66 X 66; storage space: 465 unneeded,3576 used
> < [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 183
> < [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 66
> < [1] Mat_CheckInode(): Found 23 nodes of 66. Limit used: 5. Using Inode routines
>
> < [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 66 X 66; storage space: 243 unneeded,972 used
> < [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 72
> < [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 66
> < [1] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 12)/(num_localrows 66) < 0.6. Do not use CompressedRow routines.
>
> < [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 66 X 66; storage space: 504 unneeded,1116 used
> < [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 99
> < [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 66
> < [0] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 0)/(num_localrows 66) < 0.6. Do not use CompressedRow routines.
>
>
> WITH "IF" CONDITION
>> [0] MatStashScatterBegin_Private(): Mesg_to: 1: size: 632
>> [0] MatAssemblyBegin_MPIAIJ(): Stash has 78 entries, uses 0 mallocs.
>> [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 66 X 66; storage space: 118 unneeded,1304 used
>> [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
>> [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 26
>> [0] Mat_CheckInode(): Found 66 nodes out of 66 rows. Not using Inode routines
>> [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 66 X 66; storage space: 353 unneeded,1033 used
>> [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 6
>> [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 26
>> [1] Mat_CheckInode(): Found 64 nodes out of 66 rows. Not using Inode routines
>
>> [1] MatAssemblyEnd_SeqAIJ(): Matrix size: 66 X 24; storage space: 14 unneeded,121 used
>> [1] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
>> [1] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 12
>> [1] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 48)/(num_localrows 66) > 0.6. Use CompressedRow routines.
>
>> [0] MatAssemblyEnd_SeqAIJ(): Matrix size: 66 X 18; storage space: 14 unneeded,121 used
>> [0] MatAssemblyEnd_SeqAIJ(): Number of mallocs during MatSetValues() is 0
>> [0] MatAssemblyEnd_SeqAIJ(): Maximum nonzeros in any row is 11
>> [0] Mat_CheckCompressedRow(): Found the ratio (num_zerorows 42)/(num_localrows 66) > 0.6. Use CompressedRow routines.
>
>
>
>
>
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener
