<br><br><div class="gmail_quote">On Fri, Dec 23, 2011 at 12:55 PM, Jed Brown <span dir="ltr"><<a href="mailto:jedbrown@mcs.anl.gov">jedbrown@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div class="gmail_quote"><div class="im">On Fri, Dec 23, 2011 at 12:27, Mark F. Adams <span dir="ltr"><<a href="mailto:mark.adams@columbia.edu" target="_blank">mark.adams@columbia.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>> A more interesting thing is to partition down to the thread level and keep
>> about 100 vertices per thread (this might be too big for a GPU...)
>
> It's fine to have more partitions than threads.
>
>> and then use locks of some sort for the shared memory synchronization
>
> It can be lock-free: your thread just waits until a buffer has been marked
> as updated. Since the reader/writer relationships are predefined, it's not
> actually a lock. (You can do more general methods lock-free too.)
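
For concreteness, here is a minimal sketch of the flag wait Jed describes
(not PETSc code; pthreads, C11 atomics, and all the names are my own choices,
just one way to spell it):

  /* One predefined writer fills a shared buffer and marks it updated; the
   * reader spins until it sees the mark.  No mutex is ever taken. */
  #include <pthread.h>
  #include <sched.h>
  #include <stdatomic.h>
  #include <stdio.h>

  #define N 4

  static double buf[N];          /* ghost values being exchanged           */
  static atomic_int buf_ready;   /* 0 = stale, 1 = published by the writer */

  static void *writer(void *arg)
  {
    for (int i = 0; i < N; i++) buf[i] = 1.0 + i;               /* fill it */
    atomic_store_explicit(&buf_ready, 1, memory_order_release); /* publish */
    return arg;
  }

  static void *reader(void *arg)
  {
    /* Not a lock: the single known writer sets the flag exactly once. */
    while (!atomic_load_explicit(&buf_ready, memory_order_acquire))
      sched_yield();
    double sum = 0.0;
    for (int i = 0; i < N; i++) sum += buf[i];
    printf("reader saw sum = %g\n", sum);
    return arg;
  }

  int main(void)
  {
    pthread_t tw, tr;
    pthread_create(&tr, NULL, reader, NULL);
    pthread_create(&tw, NULL, writer, NULL);
    pthread_join(tw, NULL);
    pthread_join(tr, NULL);
    return 0;
  }

Because each buffer has exactly one known writer and known readers, a
release/acquire flag per buffer is all the synchronization needed.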

You could use

 a) coloring and stream events (easy; rough sketch below)

 b) what John Cohen does, which I still do not understand

We should talk to him.
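
Here is roughly what I mean by (a), as a sketch only: one stream per
partition, partitions of the same color running concurrently, and
cudaEventRecord/cudaStreamWaitEvent ordering the colors entirely on the
device.  The color/first/len arrays and the cudaMemsetAsync call (standing
in for the partition's real kernel launch) are placeholders, not anyone's
actual code.

  #include <cuda_runtime.h>
  #include <stdlib.h>

  /* Partitions sharing a color touch disjoint data, so they may run
   * concurrently on their own streams; events make color c wait for
   * color c-1 without any host-side synchronization. */
  void run_colored(double *d_x, int ncolors, int nparts,
                   const int *color, const int *first, const int *len)
  {
    cudaStream_t *s  = malloc(nparts * sizeof(*s));
    cudaEvent_t  *ev = malloc(nparts * sizeof(*ev));
    for (int p = 0; p < nparts; p++) {
      cudaStreamCreate(&s[p]);
      cudaEventCreateWithFlags(&ev[p], cudaEventDisableTiming);
    }
    for (int c = 0; c < ncolors; c++) {
      for (int p = 0; p < nparts; p++) {
        if (color[p] != c) continue;
        /* Conservative: wait on every partition of the previous color; a
         * sharper version would wait only on partitions sharing vertices. */
        if (c > 0)
          for (int q = 0; q < nparts; q++)
            if (color[q] == c - 1) cudaStreamWaitEvent(s[p], ev[q], 0);
        /* Stand-in for launching this partition's real kernel on its stream. */
        cudaMemsetAsync(d_x + first[p], 0, len[p] * sizeof(double), s[p]);
        cudaEventRecord(ev[p], s[p]);
      }
    }
    cudaDeviceSynchronize();
    for (int p = 0; p < nparts; p++) {
      cudaEventDestroy(ev[p]);
      cudaStreamDestroy(s[p]);
    }
    free(s);
    free(ev);
  }

The host never blocks between colors; all the ordering stays on the GPU.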

   Matt

--
What most experimenters take for granted before they begin their experiments
is infinitely more interesting than any results to which their experiments
lead.
-- Norbert Wiener