<div dir="ltr"><div dir="ltr">I'm sorry, would you mind clarifying? I think my email was so long and rambling that it's tough for me to understand which part was being answered. </div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sat, Jun 11, 2022 at 7:38 PM Matthew Knepley <<a href="mailto:knepley@gmail.com">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Sat, Jun 11, 2022 at 8:32 PM Samuel Estes <<a href="mailto:samuelestes91@gmail.com" target="_blank">samuelestes91@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="auto">Hello,<div><br></div><div>My question concerns preallocation for Mats in adaptive FEM problems. When the grid refines, I destroy the old matrix and create a new one of the appropriate (larger size). When the grid “un-refines” I just use the same (extra large) matrix and pad the extra unused diagonal entries with 1’s. The problem comes in with the preallocation. I use the MatPreallocator, MatPreallocatorPreallocate() paradigm which requires a specific sparsity pattern. When the grid un-refines, although the total number of nonzeros allocated is (most likely) more than sufficient, the particular sparsity pattern changes which leads to mallocs in the MatSetValues routines and obviously I would like to avoid this.</div><div><br></div><div>One obvious solution is just to destroy and recreate the matrix any time the grid changes, even if it gets smaller. By just using a new matrix every time, I would avoid this problem although at the cost of having to rebuild the matrix more often than necessary. This is the simplest solution from a programming perspective and probably the one I will go with.</div><div><br></div><div>I'm just curious if there's an alternative that you would recommend? Basically what I would like to do is to just change the sparsity pattern that is created in the MatPreallocatorPreallocate() routine. I'm not sure how it works under the hood, but in principle, it should be possible to keep the memory allocated for the Mat values and just assign them new column numbers and potentially add new nonzeros as well. Is there a convenient way of doing this? One thought I had was to just fill in the MatPreallocator object with the new sparsity pattern of the coarser mesh and then call the MatPreallocatorPreallocate() routine again with the new MatPreallocator matrix. I'm just not sure how exactly that would work since it would have already been called for the FEM matrix for the previous, finer grid. </div><div><br></div><div>Finally, does this really matter? I imagine the bottleneck (assuming good preallocation) is in the solver so maybe it doesn't make much difference whether or not I reuse the old matrix. In that case, going with option 1 and simply destroying and recreating the matrix would be the way to go just to save myself some time.</div><div><br></div><div>I hope that my question is clear. If not, please let me know and I will clarify. I am very curious if there's a convenient solution for the second option I mentioned to recycle the allocated memory and redo the sparsity pattern. 
> I have not run any tests of this kind of thing, so I cannot say definitively.
>
> I can say that I consider the reuse of memory a problem to be solved at allocation time. You would hope that a good malloc system would give you back the same memory you just freed when getting rid of the prior matrix, so you would get the speedup you want using your approach.

What do you mean by "your approach"? Do you mean the first option, where I just always destroy the matrix? Are you basically saying that when I destroy the old matrix and create a new one, it should just give me back the same block of memory that was freed by the destruction of the previous one?

> Second, I think the allocation cost is likely to pale in comparison to the cost of writing the matrix itself (passing all those indices and values through the memory bus), so reuse of the memory is not that important (I think).

This seems to suggest that the best option is just to destroy and recreate, and not worry about "re-preallocating". Do I understand that correctly?

>   Thanks,
>
>      Matt
>
>> Thanks!
>>
>> Sam
> --
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
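
[Editor's note: following the direction the exchange above converges on (option 1: destroy and recreate the matrix on every grid change rather than reusing the old allocation), the adaptation step might look roughly like the sketch below. RebuildAfterAdapt() and its arguments are hypothetical, and it reuses the earlier illustrative PreallocateFEMMat() helper.]

#include <petscmat.h>

/* From the earlier sketch: builds and preallocates the FEM matrix for the
   current mesh via a MATPREALLOCATOR. */
extern PetscErrorCode PreallocateFEMMat(MPI_Comm comm, PetscInt nlocal, Mat *A);

/* Sketch of option 1: whenever the grid refines or un-refines, throw the old
   matrix away and rebuild it for the new sparsity pattern. */
PetscErrorCode RebuildAfterAdapt(MPI_Comm comm, PetscInt new_nlocal, Mat *A)
{
  PetscFunctionBeginUser;
  PetscCall(MatDestroy(A));                          /* free the old matrix; a good malloc
                                                        may hand the same memory back */
  PetscCall(PreallocateFEMMat(comm, new_nlocal, A)); /* preallocate for the new pattern */
  /* ... assemble with MatSetValues()/MatAssemblyBegin()/MatAssemblyEnd() as usual ... */
  PetscFunctionReturn(0);
}

As noted in the reply above, the rebuild cost is likely dominated by writing the matrix entries themselves, so the extra create/destroy per adaptation should not matter much.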