[petsc-users] Block size

Eric Chamberland Eric.Chamberland at giref.ulaval.ca
Fri May 22 09:05:00 CDT 2015


Hi,

I have another question about this.

I am actually working on a multi-variable problem (chemical species and 
voltage in the brain), but not all variables are coupled to each other. 
Let's say I have scalar variables A, B, C and D coupled like this:

  A B C D
A x     x
B   x   x
C     x x
D x x x x

I first number my variables contiguously on each vertex, so that I have 
a "block size" of 4 on each vertex.
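
Just to be concrete, here is roughly the index mapping I mean 
(GlobalDof is made up for this e-mail, it is not a PETSc routine):

   #include <petscsys.h>

   /* With A,B,C,D numbered contiguously on vertex v, the global dof of
      component c (0=A, 1=B, 2=C, 3=D) is simply 4*v + c. */
   static inline PetscInt GlobalDof(PetscInt v, PetscInt c)
   {
     return 4 * v + c;
   }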

With an {SEQ,MPI}AIJ matrix, I can preallocate the matrix by "scalar 
rows", thus minimizing the memory allocated for the structure.

My question is: are all preconditioners able to handle "sparse blocks" 
when block_size > 1 in an {SEQ,MPI}AIJ matrix?  Do they still get some 
gain from "sparse blocks"?

Right now, I am over-allocating the {SEQ,MPI}AIJ matrix since I use 
MatXAIJSetPreallocation with bs=4...  But I am thinking of writing a 
different function so as not to over-allocate memory when a block_size>1 
{SEQ,MPI}AIJ matrix with "sparse blocks" is used.  For example, for a 2d 
problem, if I use bs=4 I have:

   Mat Object:  (Solveur_Direct)   1 MPI processes
     type: seqaij
     rows=8588, cols=8588, bs=4
     total: nonzeros=232816, allocated nonzeros=234832
     total number of mallocs used during MatSetValues calls =0
       using I-node routines: found 2147 nodes, limit used is 5

but if I use bs=1 I have:

   Mat Object:  (Solveur_Direct)   1 MPI processes
     type: seqaij
     rows=8588, cols=8588
     total: nonzeros=100516, allocated nonzeros=100516
     total number of mallocs used during MatSetValues calls =0
       not using I-node routines

which is about half the memory for this (toy) problem.  I am partly 
responsible for this, since I count non-zeros per block because I am 
forced to do so if I use MatXAIJSetPreallocation.  Since we plan to go 
very large, say 100M unknowns, I want to avoid such over-allocation. 
But before modifying this, I am querying your knowledge...
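
To make the idea concrete, here is a rough sketch of the function I have 
in mind (the name PreallocateSparseBlocks and the nnz[] array are just 
for this e-mail; nnz[] would hold the exact number of non-zeros of each 
scalar row, computed from the A/B/C/D coupling pattern):

   #include <petscmat.h>

   /* Sketch: preallocate a SEQAIJ matrix with exact per-scalar-row
      counts, but still tell PETSc about the 4-dof vertices so that
      solvers which exploit the block size (e.g. GAMG) can use it.
      In parallel one would do the same with MatMPIAIJSetPreallocation
      and d_nnz[]/o_nnz[]. */
   PetscErrorCode PreallocateSparseBlocks(MPI_Comm comm, PetscInt nrows,
                                          const PetscInt nnz[], Mat *A)
   {
     PetscErrorCode ierr;

     PetscFunctionBeginUser;
     ierr = MatCreate(comm, A); CHKERRQ(ierr);
     ierr = MatSetSizes(*A, nrows, nrows, PETSC_DETERMINE, PETSC_DETERMINE); CHKERRQ(ierr);
     ierr = MatSetType(*A, MATSEQAIJ); CHKERRQ(ierr);
     ierr = MatSetBlockSize(*A, 4); CHKERRQ(ierr);                 /* keep the bs=4 information  */
     ierr = MatSeqAIJSetPreallocation(*A, 0, nnz); CHKERRQ(ierr);  /* exact per-row preallocation */
     PetscFunctionReturn(0);
   }

The idea is to keep the bs=4 information on the matrix while allocating 
only what the bs=1 numbers above show is really needed.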

thanks!

Eric


On 10/09/2014 04:19 AM, Barry Smith wrote:
>
> On Oct 9, 2014, at 3:12 AM, Natacha BEREUX <natacha.bereux at gmail.com> wrote:
>
>> Dear PETSc users,
>>
>> I am a bit confused  about blocksizes in MPI matrices.
>>
>> I define an MPI matrix A with a certain blocksize, let's say bs = 3.
>> As far as I understand, the sparse pattern of the matrix A is made of square blocks of size 3x3, regardless of the actual values of the terms of  A.
>> Am I right ?
>
>     Only if you use the BAIJ or SBAIJ matrix format. If you use AIJ then it still stores the values without regard to the block size but it carries knowledge of the block size around and that gets used in some places such as the GAMG solver.
>>
>> The matrix A is distributed among several processors: does the local number of rows of A on each processor have to be a multiple of bs   ?
>
>    Correct
>>
>> I want to use a multigrid preconditioner to solve the linear system of matrix A : A x = b
>> Is it mandatory to define the right hand side b as a vector with the same blocksize bs=3 ?
>
>    It probably doesn’t matter, but if you know something has an associated block size it is generally best to capture it as soon as you can in the problem.
>
>     Barry
>
>>
>> Thank you very much of your help,
>> Best regards,
>> Natacha


