[petsc-users] Mat preallocation for adaptive grid

Samuel Estes samuelestes91 at gmail.com
Sat Jun 11 19:43:06 CDT 2022


I'm sorry, would you mind clarifying? I think my email was so long and
rambling that it's tough for me to understand which part was being
answered.

On Sat, Jun 11, 2022 at 7:38 PM Matthew Knepley <knepley at gmail.com> wrote:

> On Sat, Jun 11, 2022 at 8:32 PM Samuel Estes <samuelestes91 at gmail.com>
> wrote:
>
>> Hello,
>>
>> My question concerns preallocation for Mats in adaptive FEM problems.
>> When the grid refines, I destroy the old matrix and create a new one of the
>> appropriate (larger) size. When the grid "un-refines", I just use the same
>> (extra-large) matrix and pad the extra, unused diagonal entries with 1's.
>> The problem comes in with the preallocation. I use the MatPreallocator /
>> MatPreallocatorPreallocate() paradigm, which requires a specific sparsity
>> pattern. When the grid un-refines, although the total number of nonzeros
>> allocated is (most likely) more than sufficient, the particular sparsity
>> pattern changes, which leads to mallocs in the MatSetValues() routines;
>> obviously I would like to avoid this.
>>
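>> For reference, a minimal sketch of that paradigm (the PETSc calls are the
>> real API; NDOF, nelem, the local sizes m and n, and the GetElementDofs()
>> helper are hypothetical stand-ins for my mesh code):
>>
>>   Mat         preall, A;
>>   PetscInt    e, nd, rows[NDOF], cols[NDOF];
>>   PetscScalar zeros[NDOF * NDOF] = {0};      /* values are ignored here  */
>>
>>   MatCreate(PETSC_COMM_WORLD, &preall);
>>   MatSetType(preall, MATPREALLOCATOR);
>>   MatSetSizes(preall, m, n, PETSC_DETERMINE, PETSC_DETERMINE);
>>   MatSetUp(preall);
>>   for (e = 0; e < nelem; ++e) {              /* record the sparsity only */
>>     GetElementDofs(e, &nd, rows, cols);      /* hypothetical mesh query  */
>>     MatSetValues(preall, nd, rows, nd, cols, zeros, INSERT_VALUES);
>>   }
>>   MatAssemblyBegin(preall, MAT_FINAL_ASSEMBLY);
>>   MatAssemblyEnd(preall, MAT_FINAL_ASSEMBLY);
>>
>>   MatCreate(PETSC_COMM_WORLD, &A);           /* the actual FEM matrix    */
>>   MatSetType(A, MATAIJ);
>>   MatSetSizes(A, m, n, PETSC_DETERMINE, PETSC_DETERMINE);
>>   MatPreallocatorPreallocate(preall, PETSC_TRUE, A);
>>   MatDestroy(&preall);
>>
>> The padding on "un-refinement" is then just MatSetValue(A, i, i, 1.0,
>> INSERT_VALUES) over the rows with no active dofs.
>>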
>> One obvious solution is just to destroy and recreate the matrix any time
>> the grid changes, even if it gets smaller. By just using a new matrix every
>> time, I would avoid this problem, although at the cost of having to rebuild
>> the matrix more often than necessary. This is the simplest solution from a
>> programming perspective and probably the one I will go with.
>>
>> I'm just curious if there's an alternative that you would recommend?
>> Basically what I would like to do is to just change the sparsity pattern
>> that is created in the MatPreallocatorPreallocate() routine. I'm not sure
>> how it works under the hood, but in principle, it should be possible to
>> keep the memory allocated for the Mat values and just assign them new
>> column numbers and potentially add new nonzeros as well. Is there a
>> convenient way of doing this? One thought I had was to just fill in the
>> MatPreallocator object with the new sparsity pattern of the coarser mesh
>> and then call the MatPreallocatorPreallocate() routine again with the new
>> MatPreallocator matrix. I'm just not sure exactly how that would work, since
>> the routine would have already been called for the FEM matrix on the
>> previous, finer grid.
>>
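>> In sketch form, that second option would be (whether a second
>> MatPreallocatorPreallocate() call on the already-preallocated A keeps its
>> memory, or is even permitted, is exactly what I don't know; this is
>> untested):
>>
>>   Mat preallCoarse;
>>   MatCreate(PETSC_COMM_WORLD, &preallCoarse);
>>   MatSetType(preallCoarse, MATPREALLOCATOR);
>>   MatSetSizes(preallCoarse, m, n, PETSC_DETERMINE, PETSC_DETERMINE);
>>   MatSetUp(preallCoarse);
>>   /* insert the coarser mesh's pattern with MatSetValues(), as above */
>>   MatAssemblyBegin(preallCoarse, MAT_FINAL_ASSEMBLY);
>>   MatAssemblyEnd(preallCoarse, MAT_FINAL_ASSEMBLY);
>>   MatPreallocatorPreallocate(preallCoarse, PETSC_TRUE, A); /* reuse A */
>>   MatDestroy(&preallCoarse);
>>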
>> Finally, does this really matter? I imagine the bottleneck (assuming good
>> preallocation) is in the solver, so maybe it doesn't make much difference
>> whether or not I reuse the old matrix. In that case, going with option 1
>> and simply destroying and recreating the matrix would be the way to go just
>> to save myself some time.
>>
>> I hope that my question is clear. If not, please let me know and I will
>> clarify. I am very curious whether there's a convenient solution for the
>> second option I mentioned: recycling the allocated memory and redoing the
>> sparsity pattern.
>>
>
> I have not run any tests of this kind of thing, so I cannot say
> definitively.
>
> I can say that I consider the reuse of memory a problem to be solved at
> allocation time. You would hope that a good malloc system would give
> you back the same memory you just freed when getting rid of the prior
> matrix, so you would get the speedup you want using your approach.
>

What do you mean by "your approach"? Do you mean the first option, where I
just always destroy the matrix? Are you basically saying that when I
destroy the old matrix and create a new one, the malloc system should just
give me back the same block of memory that was freed by destroying the
previous one?
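
For what it's worth, here is a quick, PETSc-free way to check whether a
given allocator behaves that way (a sketch; what it prints is entirely
allocator- and platform-dependent):

  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
    void *a = malloc(1 << 20);        /* stand-in for the "old matrix"    */
    printf("first block:  %p\n", a);
    free(a);                          /* "destroy" it                     */
    void *b = malloc(1 << 20);        /* "recreate" it at the same size   */
    printf("second block: %p\n", b);  /* often the same address comes back */
    free(b);
    return 0;
  }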

>
> Second, I think the allocation cost is likely to pale in comparison to the
> cost of writing the matrix itself (passing all those indices and values
> through
> the memory bus), and so reuse of the memory is not that important (I
> think).
>

This seems to suggest that the best option is just to destroy and recreate
and not worry about "re-preallocating". Do I understand that correctly?

>
>   Thanks,
>
>       Matt
>
>
>> Thanks!
>>
>> Sam
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>