[petsc-users] Filling a matrix by chunks of nonzero entries

marco restelli mrestelli at gmail.com
Mon Jun 20 12:20:48 CDT 2016


2016-06-20 19:01 GMT+0200, Barry Smith <bsmith at mcs.anl.gov>:
>
>> On Jun 20, 2016, at 10:13 AM, marco restelli <mrestelli at gmail.com> wrote:
>>
>> Dear all,
>>   while assembling a matrix in PETSc we have the following pattern:
>>
>>
>> do i=1,number_of_chunks
>>
>> ! generate a chunk of matrix entries
>> call compute_ith_chunk( i_idx , j_idx , values )
>
>      This just generates a "random" bunch of nonzero entries with no
> structure? What discretization method are you using?

The matrix comes from the discretization of an integral equation, and
the structure depends on the convolution kernel. It is certainly not
random, but it has a nontrivial pattern which I would prefer to handle
in another part of the code. So, for the matrix assembly, my present
ansatz is to treat the entries as a collection of known but arbitrary
values - I understand that I am probably giving up some optimization,
but this might be worthwhile if it helps modularize the logic of the
code.

>> ! insert those entries
>> do j=1,size_of_the_ith_chunk
>>   call MatSetValue( mat , i_idx(j) , j_idx(j) , values(j) , ADD_VALUES , ierr )
>> enddo
>>
>> enddo
>>
>>
>> The problem is that inserting the elements with MatSetValue seems to
>> have a significant overhead due to memory allocations and
>> deallocations.
>>
>> Is there a way to speed-up this process, preallocating memory?
>>
>> Notice that we know the number of elements that we have to insert for
>> each chunk, but we don't know to which extent they overlap, i.e. we do
>> not know how many nonzero entries will result in the final matrix.
>
>   Take a look at MatPreallocateInitialize() and its companion
> routines; they are designed to make it easy to get a good
> preallocation.

Ah, this looks promising! I will experiment with this.
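For reference, the idea behind such a preallocation pass can be sketched
without PETSc at all: before inserting anything, loop over the chunks once
and count the distinct column indices per row, which handles the overlap
between chunks. The chunk data below is made up purely for illustration;
the resulting counts would feed something like MatSeqAIJSetPreallocation.

```python
# Sketch (assumption, not from the thread): a counting pass over the
# (i_idx, j_idx) chunks to determine per-row nonzero counts, deduplicating
# entries that appear in more than one chunk (they are merged by ADD_VALUES
# and therefore occupy a single storage slot).

def count_row_nnz(nrows, chunks):
    """Return per-row counts of distinct column indices over all chunks."""
    cols_per_row = [set() for _ in range(nrows)]
    for i_idx, j_idx in chunks:
        for i, j in zip(i_idx, j_idx):
            cols_per_row[i].add(j)  # duplicates across chunks counted once
    return [len(s) for s in cols_per_row]

# Two overlapping chunks for a 3x3 matrix; entry (0, 1) appears in both.
chunks = [
    ([0, 0, 1], [0, 1, 1]),
    ([0, 2, 2], [1, 0, 2]),
]
nnz = count_row_nnz(3, chunks)
print(nnz)  # row 0 holds columns {0, 1}, row 1 holds {1}, row 2 holds {0, 2}
```

The price is traversing (or storing) the chunks twice, but that is usually
far cheaper than the reallocations incurred by inserting into an
unpreallocated matrix.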

Thank you,
   Marco

