<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Sun, Apr 27, 2014 at 6:25 AM, Justin Dong <span dir="ltr"><<a href="mailto:jsd1@rice.edu" target="_blank">jsd1@rice.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hi Matt,<div><br></div><div>For option 1), that would be using MPI and not any special functions in PETSc? I ask since I've never done parallel programming before. I tried using OpenMP to parallelize that for loop but it resulted in some errors and I didn't investigate further, but I'm assuming it's because each process wasn't communicating properly with the others during MatSetValues?</div>

Yes, using MPI, so each process owns the set of elements that it loops
over. The Mat object manages the global values as long as you use global
indices for the (row, column). There are of course many refinements to
this, but I think the thing to do is get something working fast, and then
optimize it. A rough sketch of that loop split is below.

  Thanks,

     Matt
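
As a minimal sketch of the split (assuming a contiguous partition of the
elements across ranks; ie, je, A_local, nlocal, and nelements are as in
the pseudocode quoted below, and the signature of compute_local_matrix()
is hypothetical):

  PetscMPIInt rank, size;
  PetscInt    k, kstart, kend;

  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
  MPI_Comm_size(PETSC_COMM_WORLD, &size);

  /* Contiguous split of the element range across processes */
  kstart = (rank * nelements) / size;
  kend   = ((rank + 1) * nelements) / size;

  for (k = kstart; k < kend; ++k) {
    get_index(k, ie, je);             /* global (row, column) indices */
    compute_local_matrix(A_local, k); /* hypothetical signature */
    MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES);
  }

  /* Assembly communicates any values that belong to rows owned by
     other processes */
  MatAssemblyBegin(A_global, MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A_global, MAT_FINAL_ASSEMBLY);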
<div dir="ltr"><div>Sincerely,</div><div>Justin</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Sun, Apr 27, 2014 at 5:57 AM, Matthew Knepley <span dir="ltr"><<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div>On Sun, Apr 27, 2014 at 4:14 AM, Justin Dong <span dir="ltr"><<a href="mailto:jsd1@rice.edu" target="_blank">jsd1@rice.edu</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr">Hi all,<div><br></div><div>I'm currently coding a finite element problem in PETSc. I'm computing all of the matrices by myself and would prefer to do it that way. I want to parallelize the assembly of the global matrices. This is a simplified version of the assembly routine (pseudocode):</div>
<div><br></div><div>for (k = 0; k < nelements; ++k)</div><div>{</div><div> get_index(k,ie,je); /* ie and je are arrays that indicate where to place A_local */</div><div> compute_local_matrix(A_local,...);</div><div>
<br></div><div> MatSetValues(A_global, nlocal, ie, nlocal, je, A_local, ADD_VALUES);</div><div>}</div><div><br></div><div>This is for DG finite elements and I know the matrix easily be assembled in parallel. Even on my laptop, this would allow me to significantly speed up my code. The only problem is, I'm not sure how to do this in PETSc. I'd assume I need to use MatCreateMPIAIJ instead of MatCreateSeqAIJ, but wasn't able to find any using tutorials on how I might do this. </div>
>>
>> 1) If you just split this loop across processes, it would work
>>    immediately. However, that can be non-optimal in terms of
>>    communication.
>>
>> 2) PETSc matrices are distributed by contiguous blocks of rows (see the
>>    manual section on matrices), so you would like those row blocks to
>>    correspond roughly to your element blocks.
>>
>> 3) You will also have to preallocate, but that should be the same as
>>    you do now for the sequential case, except that you check whether a
>>    column is inside the diagonal block. A rough sketch of 2) and 3)
>>    follows below.
>>
>>   Thanks,
>>
>>      Matt
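>>
>> As a minimal sketch of 2) and 3) (assuming AIJ format and letting PETSc
>> pick the row split; N is your global matrix size, and nz_est, d_nz_est,
>> and o_nz_est are per-row nonzero estimates you would supply):
>>
>>   Mat      A_global;
>>   PetscInt rstart, rend;
>>
>>   MatCreate(PETSC_COMM_WORLD, &A_global);
>>   MatSetSizes(A_global, PETSC_DECIDE, PETSC_DECIDE, N, N);
>>   MatSetFromOptions(A_global);
>>
>>   /* DG couples an element only to itself and its face neighbors, so
>>      constant per-row estimates work; d_nz_est counts columns inside
>>      the diagonal block, o_nz_est those outside (overestimating is
>>      fine). Whichever call matches the matrix type takes effect. */
>>   MatSeqAIJSetPreallocation(A_global, nz_est, NULL);
>>   MatMPIAIJSetPreallocation(A_global, d_nz_est, NULL, o_nz_est, NULL);
>>
>>   /* The contiguous block of rows this process owns; loop over the
>>      elements whose rows fall in [rstart, rend) */
>>   MatGetOwnershipRange(A_global, &rstart, &rend);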
<div dir="ltr"><div>Sincerely,</div><div>Justin</div></div><span><font color="#888888">
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which
>> their experiments lead.
>>    -- Norbert Wiener

--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
   -- Norbert Wiener