On Tue, Aug 30, 2011 at 1:36 AM, Jed Brown <jedbrown@mcs.anl.gov> wrote:
> On Mon, Aug 29, 2011 at 20:32, Tabrez Ali <stali@geology.wisc.edu> wrote:
>> If during a VecScatter (say from a global vector x to a local vector y) the index sets 'ix' and 'iy' are such that almost all of the values being scattered are already on the local process, then almost no MPI calls would be made internally. Is this correct?
>
> Very few values would be sent anywhere. The number of MPI calls depends on the number of processes that need to receive values and on the scatter method (method descriptions: http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/Vec/VecScatterCreate.html).
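A minimal sketch (not part of the original thread) of the mostly-local case described above, written against current PETSc calling conventions; the vector sizes and index sets are made up for illustration and error checking is omitted for brevity:

/* Scatter entries of a parallel vector x into a local (sequential) vector y.
   Because every index in ix is owned by this process, the scatter needs
   little or no MPI communication. */
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec        x, y;
  IS         ix, iy;
  VecScatter sctx;
  PetscInt   rstart, rend, nloc;

  PetscInitialize(&argc, &argv, NULL, NULL);

  VecCreateMPI(PETSC_COMM_WORLD, 100, PETSC_DECIDE, &x);   /* global vector */
  VecGetOwnershipRange(x, &rstart, &rend);
  nloc = rend - rstart;

  /* Index sets: take exactly the locally owned entries of x ... */
  ISCreateStride(PETSC_COMM_SELF, nloc, rstart, 1, &ix);
  /* ... and place them contiguously in the local vector y. */
  ISCreateStride(PETSC_COMM_SELF, nloc, 0, 1, &iy);
  VecCreateSeq(PETSC_COMM_SELF, nloc, &y);

  VecScatterCreate(x, ix, y, iy, &sctx);
  VecScatterBegin(sctx, x, y, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(sctx, x, y, INSERT_VALUES, SCATTER_FORWARD);

  VecScatterDestroy(&sctx);
  ISDestroy(&ix);  ISDestroy(&iy);
  VecDestroy(&x);  VecDestroy(&y);
  PetscFinalize();
  return 0;
}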
Also, you can create a logging event around these calls and get information on the number of messages and the average message size.
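As a rough sketch of that suggestion (the event name and handle below are illustrative, not anything specified in the thread): register a PetscLogEvent and wrap the scatter calls with it, then run with -log_summary (called -log_view in recent PETSc releases).

  PetscLogEvent USER_SCATTER;                               /* hypothetical event handle */
  PetscLogEventRegister("UserScatter", VEC_CLASSID, &USER_SCATTER);

  PetscLogEventBegin(USER_SCATTER, 0, 0, 0, 0);
  VecScatterBegin(sctx, x, y, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(sctx, x, y, INSERT_VALUES, SCATTER_FORWARD);
  PetscLogEventEnd(USER_SCATTER, 0, 0, 0, 0);

The event then appears as its own line in the log summary table, with the message count and average message length columns filled in for just this phase.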
   Matt

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener