On Fri, Jun 15, 2012 at 9:18 PM, Alexander Grayver <agrayver@gfz-potsdam.de> wrote:

> On 15.06.2012 14:46, Matthew Knepley wrote:
>> On Fri, Jun 15, 2012 at 8:31 PM, Alexander Grayver <agrayver@gfz-potsdam.de> wrote:
>>> Matt,
>>>
>>> According to that code:
>>>
<pre width="80"><a name="137f04d12db00842_137f0222fdf86feb_line486">486: </a><strong><font color="#4169e1"><a name="137f04d12db00842_137f0222fdf86feb_MatMult_MPIDense"></a><a>PetscErrorCode</a> MatMult_MPIDense(<a>Mat</a> mat,<a>Vec</a> xx,<a>Vec</a> yy)</font></strong>
<a name="137f04d12db00842_137f0222fdf86feb_line487">487: </a>{
<a name="137f04d12db00842_137f0222fdf86feb_line488">488: </a> Mat_MPIDense *mdn = (Mat_MPIDense*)mat->data;
<a name="137f04d12db00842_137f0222fdf86feb_line492">492: </a> <a>VecScatterBegin</a>(mdn->Mvctx,xx,mdn->lvec,<a>INSERT_VALUES</a>,<a>SCATTER_FORWARD</a>);
<a name="137f04d12db00842_137f0222fdf86feb_line493">493: </a> <a>VecScatterEnd</a>(mdn->Mvctx,xx,mdn->lvec,<a>INSERT_VALUES</a>,<a>SCATTER_FORWARD</a>);
<a name="137f04d12db00842_137f0222fdf86feb_line494">494: </a> MatMult_SeqDense(mdn->A,mdn->lvec,yy);
<a name="137f04d12db00842_137f0222fdf86feb_line495">495: </a> <font color="#4169e1">return</font>(0);
<a name="137f04d12db00842_137f0222fdf86feb_line496">496: </a>}
</pre>
>>> Does each process get its own local copy of the vector?
>>
>> I am not sure what your point is. VecScatter is just an interface that has
>> many implementations.
>
> I'm trying to estimate the amount of data that needs to be communicated
> across all processes during this operation.
> In the debugger I see that the VecScatter in the code above reduces to an
> MPI_Allgatherv, which results in (assuming the vector is distributed uniformly)
>
>   bytes_sent_and_received = num_of_proc * ((num_of_proc - 1) * vec_size_local) * 2 * sizeof(PetscScalar)
>
> Does that look reasonable?

This is not really a useful exercise, since

 a) PETSc does not currently have an optimized parallel dense implementation

 b) We are implementing an Elemental interface this summer. You can try it out in petsc-dev

 c) Elemental is much more efficient than our simple implementation, and uses a unique
    approach to communication (all reductions)

I would take the comp+comm estimates from Jack's slides on Elemental

   Matt
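
For concreteness, here is a minimal, self-contained sketch of the estimate being
discussed. It is plain MPI C, not PETSc source; the local length n_local and the
use of double in place of PetscScalar are assumptions made for illustration. It
performs the same kind of gather-to-all that the VecScatter above was observed
(in the debugger) to reduce to, and prints the byte count from the formula above.

/*
 * Hypothetical illustration only (plain MPI, not PETSc code): gather a
 * uniformly distributed vector of doubles onto every rank with
 * MPI_Allgatherv, and print the traffic estimate from the formula above.
 * n_local and the use of double for PetscScalar are assumptions.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
  int rank, size, p, i;
  int n_local = 1000;                       /* assumed local length per rank */

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  double *x_local  = (double*)malloc((size_t)n_local * sizeof(double));
  double *x_global = (double*)malloc((size_t)n_local * size * sizeof(double));
  int    *counts   = (int*)malloc((size_t)size * sizeof(int));
  int    *displs   = (int*)malloc((size_t)size * sizeof(int));

  for (p = 0; p < size; p++) { counts[p] = n_local; displs[p] = p * n_local; }
  for (i = 0; i < n_local; i++) x_local[i] = (double)rank;   /* dummy data */

  /* After this call every rank holds a full copy of the vector in x_global,
     which is what the MPI_Allgatherv observed in the debugger produces for
     the dense MatMult discussed above. */
  MPI_Allgatherv(x_local, n_local, MPI_DOUBLE,
                 x_global, counts, displs, MPI_DOUBLE, MPI_COMM_WORLD);

  if (rank == 0) {
    /* Each rank sends its n_local entries to the other size-1 ranks and
       receives (size-1)*n_local entries; summing sends and receives over all
       ranks (so every message is counted on both ends) reproduces
       num_of_proc * (num_of_proc - 1) * vec_size_local * 2 * sizeof(scalar). */
    long long bytes = 2LL * size * (size - 1) * n_local * (long long)sizeof(double);
    printf("estimated bytes sent+received: %lld\n", bytes);
  }

  free(x_local); free(x_global); free(counts); free(displs);
  MPI_Finalize();
  return 0;
}

Note that this counts the logical data volume only; what an MPI implementation
actually moves for an allgather depends on the algorithm it selects.
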
<div bgcolor="#ffffff" text="#000000">
Thanks.<span class="HOEnZb"><font color="#888888"><br>
<pre cols="72">--
Regards,
Alexander</pre>
</font></span></div>
--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener