On Sat, Jul 14, 2012 at 2:43 AM, Olga Tramontano <span dir="ltr"><<a href="mailto:tramontanoolga@yahoo.it" target="_blank">tramontanoolga@yahoo.it</a>></span> wrote:<br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div><div style="font-size:12pt;font-family:times new roman,new york,times,serif"><div>Hi all</div><div>I was trying to study PETSc for GPUs... I don't understand this: in the sequential implementation there is one process that uses a single GPU... and that's fine!</div>
<div>What about the parallel implementation for GPUs? Are there multiple processes using the same GPU, or does each process use a separate GPU of its own?</div></div></div></blockquote><div><br></div><div>The parallelism is managed by MPI, and we reuse the serial code for interacting with the GPUs. Right now,</div>
<div>we assign processes to GPUs round-robin across the devices on the machine, but this should be easily configurable if</div><div>that does not work for you.</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div><div style="font-size:12pt;font-family:times new roman,new york,times,serif"><div>Thanks</div></div></div></blockquote></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>
-- Norbert Wiener<br>