<div dir="ltr">Of course I'm on a plane right now, in the middle of this algorithm, and didn't think of the obvious: VecSetValue() with ADD_VALUES and a 1... duh.<div><br></div><div>I was actually just looking for a VecIncrementValue() or some such.</div>
<div><br></div><div>Please ignore unless you see an even more efficient way to do this...</div><div><br></div><div>Derek</div><div><br></div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Jan 23, 2014 at 6:32 PM, Derek Gaston <span dir="ltr"><<a href="mailto:friedmud@gmail.com" target="_blank">friedmud@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">I have a funky algorithm I'm working on... and in it I need to increment individual entries in vectors in parallel and then finalize the vector, summing to get the final value. So, something like this:<div>
<br></div><div>Say I have a vector "v" with 6 values in it (all starting at 0), spread across two processors so that the first 3 entries are on processor 0 and the last three on processor 1.</div><div><br></div>
<div>On processor 0 I need to be able to do:</div><div><br></div><div>v[1]++;</div><div>v[2]++;</div><div>v[1]++;</div><div><br></div><div>Then on processor 1 I'd like to do:</div><div><br></div><div>v[0]++;</div><div>
v[1]++;</div><div>v[5]++;</div><div>v[1]++;</div><div><br></div><div>Then, after finalizing the vector I'd like to end up with a vector that has these values on processor 0:</div><div><br></div><div>v[0] -> 1</div>
<div>v[1] -> 4</div><div>v[2] -> 1</div><div><br></div><div>and on processor 1:</div><div><br></div><div>v[3] -> 0</div><div>v[4] -> 0</div><div>v[5] -> 1</div><div><br></div><div>Is there anything in PETSc that would help here? I realize I can roll my own solution using arrays and MPI... but it would be handy to do this with a PETSc Vec, because the next step is to use VecPointwiseDivide() to divide another vector by this funky one pointwise. Both vectors will have the same parallel distribution. Also, note that these vectors might be really large (300 million to 1 billion entries), so the automatic parallelization of Vec would be really handy...</div>
<div><br></div><div>Thanks for any help...</div><span class="HOEnZb"><font color="#888888"><div><br></div><div>Derek</div></font></span></div>
</blockquote></div><br></div>
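<div><br></div><div>For completeness, the PETSc call sequence the quoted message describes looks roughly like this (a sketch only, assuming PETSc is already initialized; PetscCall()/error checking and cleanup are omitted, and the index used is purely illustrative):</div>

```c
#include <petscvec.h>

/* counts: the vector being incremented; sums: the vector to be divided.
   All three Vecs are assumed to share the same parallel layout. */
void increment_and_divide(Vec counts, Vec sums, Vec result)
{
  PetscInt    row = 1;    /* illustrative global index; may be off-process */
  PetscScalar one = 1.0;

  /* the "v[1]++" step: adds 1.0, queuing off-process values for assembly */
  VecSetValue(counts, row, one, ADD_VALUES);

  /* the "finalize" step: communicates and sums off-process contributions */
  VecAssemblyBegin(counts);
  VecAssemblyEnd(counts);

  /* entrywise result = sums / counts */
  VecPointwiseDivide(result, sums, counts);
}
```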