<div dir="ltr">Thanks for this information. Could you tell me an efficient way to do this in PETSc? I am planning to use at least 32 threads and need to minimize the synchronization overhead Any suggestions?<div><br></div>
Thanks!
Wen

On Mon, Jun 23, 2014 at 10:59 PM, Jed Brown <jed@jedbrown.org> wrote:
> Wen Jiang <jiangwen84@gmail.com> writes:
>
>> Dear all,
>>
>> I am trying to change my MPI finite element code to an OpenMP one. I am
>> not familiar with the usage of OpenMP in PETSc. Could anyone give me some
>> suggestions?
>>
>> To assemble the matrix in parallel using OpenMP pragmas, can I directly
>> call MatSetValues(ADD_VALUES), or do I need to add some locks around it?
>
> You need to ensure that only one thread is setting values on a given
> matrix at any one time.