[petsc-users] Allocating the diagonal for MatMPIAIJSetPreallocation

Matthew Knepley knepley at gmail.com
Fri Apr 1 11:50:41 CDT 2022


On Fri, Apr 1, 2022 at 12:45 PM Samuel Estes <samuelestes91 at gmail.com>
wrote:

> Thanks! This seems like it might be what I need. I'm still a little
> unclear on how it works, though. My problem is basically that for any given
> row, I know the total number of non-zeros but not how many occur in the
> diagonal vs off-diagonal block. Without knowledge of the underlying grid,
> I'm not sure how a black-box utility could figure this out. Am I
> misunderstanding how this is used?
>

So each process owns a contiguous range of rows, [rStart, rEnd). A nonzero
(r, c) is in the diagonal block if rStart <= c < rEnd. So if you know (r, c)
for each nonzero, you know whether it is in the diagonal block.
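
For example, here is a minimal sketch of counting the two kinds of nonzeros
for MatMPIAIJSetPreallocation. It assumes the known nonzeros are available as
COO-style arrays rows[], cols[] of length nz (placeholders for whatever data
structure you actually have), and that A already has its sizes and type set:

    PetscInt rStart, rEnd, nLocal, i, *d_nnz, *o_nnz;

    MatGetOwnershipRange(A, &rStart, &rEnd);   /* this rank owns [rStart, rEnd) */
    nLocal = rEnd - rStart;
    PetscCalloc2(nLocal, &d_nnz, nLocal, &o_nnz);
    for (i = 0; i < nz; i++) {
      PetscInt r = rows[i], c = cols[i];       /* assumes row r is locally owned */
      if (c >= rStart && c < rEnd) d_nnz[r - rStart]++;  /* diagonal block     */
      else                         o_nnz[r - rStart]++;  /* off-diagonal block */
    }
    MatMPIAIJSetPreallocation(A, 0, d_nnz, 0, o_nnz);
    PetscFree2(d_nnz, o_nnz);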

  Thanks,

     Matt


> On Fri, Apr 1, 2022 at 11:34 AM Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Fri, Apr 1, 2022 at 12:27 PM Samuel Estes <samuelestes91 at gmail.com>
>> wrote:
>>
>>> Hi,
>>>
>>> I have a problem in which I know (roughly) the number of non-zero
>>> entries for each row of a matrix but I don't have a convenient way of
>>> determining whether they belong to the diagonal or off-diagonal part of the
>>> parallel matrix. Is there some way I can just allocate the total number of
>>> non-zeros in a row regardless of which part they belong to? I'm assuming
>>> that this is not possible, but I just wanted to check. It seems like it
>>> should be possible in principle, since the matrix is split only by rows,
>>> so all the entries of a given row live on the same process (at least as I
>>> understand it). Thanks!
>>>
>>
>> In serial, the matrix is stored by rows. In parallel, it is split into a
>> diagonal and off-diagonal block, so that we can overlap communication and
>> computation in the matvec.
>>
>> However, we have a convenience structure for figuring this out, called
>> MatPreallocator:
>> https://petsc.org/main/docs/manualpages/Mat/MATPREALLOCATOR.html
>> In my code, I wrapped the code that fills the matrix in a loop that
>> executes twice. On the first pass, I fed in the MatPreallocator matrix.
>> When that finished, I called
>> https://petsc.org/main/docs/manualpages/Mat/MatPreallocatorPreallocate.html#MatPreallocatorPreallocate
>> on the system matrix, and on the second pass I fed in the system matrix
>> itself. This was only a few extra lines of code for me. If you want to
>> optimize further, you can add a flag so that the values are only computed
>> on the second pass.
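>>
>> For instance, a minimal sketch of that two-pass pattern, where FillMatrix
>> is a hypothetical stand-in for your own MatSetValues() loop and m, n are
>> your local sizes:
>>
>>     Mat A, pre;
>>     PetscInt pass;
>>
>>     MatCreate(PETSC_COMM_WORLD, &A);
>>     MatSetSizes(A, m, n, PETSC_DETERMINE, PETSC_DETERMINE);
>>     MatSetType(A, MATMPIAIJ);
>>
>>     MatCreate(PETSC_COMM_WORLD, &pre);
>>     MatSetSizes(pre, m, n, PETSC_DETERMINE, PETSC_DETERMINE);
>>     MatSetType(pre, MATPREALLOCATOR);
>>     MatSetUp(pre);
>>
>>     for (pass = 0; pass < 2; pass++) {
>>       Mat M = pass ? A : pre;              /* preallocator first, then A */
>>       FillMatrix(M);                       /* your MatSetValues() loop   */
>>       MatAssemblyBegin(M, MAT_FINAL_ASSEMBLY);
>>       MatAssemblyEnd(M, MAT_FINAL_ASSEMBLY);
>>       /* after the first pass, transfer the pattern to the real matrix */
>>       if (!pass) MatPreallocatorPreallocate(pre, PETSC_TRUE, A);
>>     }
>>     MatDestroy(&pre);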
>>
>>   Thanks,
>>
>>       Matt
>>
>>> Sam
>>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/