<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Jun 20, 2016 at 8:13 AM, marco restelli <span dir="ltr"><<a href="mailto:mrestelli@gmail.com" target="_blank">mrestelli@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Dear all,<br>
> while assembling a matrix in PETSc we have the following pattern:
>
>   do i = 1, number_of_chunks
>
>     ! generate a chunk of matrix entries
>     call compute_ith_chunk( i_idx, j_idx, values )
>
>     ! insert those entries
>     do j = 1, size_of_the_ith_chunk
>       call MatSetValue( mat, i_idx(j), j_idx(j), values(j), ADD_VALUES, ierr )
>     enddo
>
>   enddo
>
> The problem is that inserting the elements with MatSetValue seems to
> have a significant overhead due to memory allocations and
> deallocations.
>
> Is there a way to speed up this process by preallocating memory?
>
> Notice that we know the number of elements that we have to insert for
> each chunk, but we don't know to what extent they overlap, i.e. we do
> not know how many nonzero entries will result in the final matrix.
>
> Also, the entries do not have a block pattern, so we cannot use
> MatSetValues.

Run the whole process once to count the entries, and then insert. This usually
has negligible overhead.

   Matt
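
To make that concrete, here is a rough sketch of the two-pass version of the
loop above, assuming a sequential AIJ matrix; n_rows below is a placeholder
for the number of local rows, and the other names are taken from the original
snippet. Duplicate (i,j) pairs are counted more than once, which only
overestimates the preallocation and is harmless since the values are added.
For a parallel MPIAIJ matrix you would count diagonal-block and
off-diagonal-block entries separately and call MatMPIAIJSetPreallocation
instead.

  ! --- pass 1: count the entries contributed to each row --------------
  allocate(nnz(n_rows))          ! n_rows = number of local rows (placeholder)
  nnz = 0
  do i = 1, number_of_chunks
    call compute_ith_chunk( i_idx, j_idx, values )
    do j = 1, size_of_the_ith_chunk
      nnz(i_idx(j)+1) = nnz(i_idx(j)+1) + 1   ! i_idx is 0-based, nnz is 1-based
    enddo
  enddo

  ! --- preallocate once, then insert as before ------------------------
  call MatSeqAIJSetPreallocation( mat, PETSC_DEFAULT_INTEGER, nnz, ierr )

  do i = 1, number_of_chunks
    call compute_ith_chunk( i_idx, j_idx, values )
    do j = 1, size_of_the_ith_chunk
      call MatSetValue( mat, i_idx(j), j_idx(j), values(j), ADD_VALUES, ierr )
    enddo
  enddo

  call MatAssemblyBegin( mat, MAT_FINAL_ASSEMBLY, ierr )
  call MatAssemblyEnd( mat, MAT_FINAL_ASSEMBLY, ierr )
  deallocate(nnz)

With the preallocation in place no mallocs should occur during insertion; you
can check this with -info or MatGetInfo.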
> Thank you,
> Marco

-- 
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener