<p dir="ltr">We won't know until the 64k and 128k jobs get through the queue. Will keep you posted. Thanks for your help.</p>
<p dir="ltr">David Trebotich<br>
sent from mobile<br>
(510) 384-6868</p>
<div class="gmail_quote">On May 29, 2015 8:32 AM, "Jed Brown" <<a href="mailto:jed@jedbrown.org">jed@jedbrown.org</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Mark Adams <<a href="mailto:mfadams@lbl.gov">mfadams@lbl.gov</a>> writes:<br>
<br>
> I don't need your branch because 1) we are not doing any communication in<br>
> this VecAssembly<br>
<br>
There is still synchronization cost in determining that nothing needs to<br>
be done. Moreover, using a size P MPI_Allreduce for that (in<br>
PetscMaxSum) is non-scalable. My branch uses MPI_Reduce_scatter_block<br>
(scalable) when available (MPI-2.2).<br>
<br>
> and I added the IGNORE stuff<br>
<br>
So it takes no time now and everything is groovy?<br>
</blockquote></div>
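The scalability point in the quoted message can be illustrated with a minimal pure-Python sketch (no actual MPI, and not PETSc's PetscMaxSum implementation). Assume each of P ranks holds a length-P vector where entry j counts the messages it will send to rank j. A size-P allreduce hands every rank the full P-entry sum even though each rank needs only its own entry, while a reduce-scatter-block (the MPI_Reduce_scatter_block pattern with recvcount 1) delivers each rank just that one entry:

```python
# Sketch of the message-count exchange discussed above (hypothetical
# data layout, pure Python): counts[i][j] = number of messages rank i
# will send to rank j.

def allreduce_sum(per_rank_vectors):
    """Size-P allreduce: every rank ends up holding all P column sums,
    so per-rank data grows with P -- the non-scalable variant."""
    P = len(per_rank_vectors)
    sums = [sum(vec[j] for vec in per_rank_vectors) for j in range(P)]
    return [sums[:] for _ in range(P)]  # each rank gets the full vector

def reduce_scatter_block(per_rank_vectors):
    """Reduce-scatter-block with one entry per rank: rank j receives
    only column sum j, i.e. how many messages it will receive."""
    P = len(per_rank_vectors)
    return [sum(vec[j] for vec in per_rank_vectors) for j in range(P)]

counts = [[0, 2, 1],   # rank 0 sends 2 msgs to rank 1, 1 to rank 2
          [1, 0, 0],   # rank 1 sends 1 msg to rank 0
          [3, 0, 0]]   # rank 2 sends 3 msgs to rank 0

full = allreduce_sum(counts)             # every rank holds [4, 2, 1]
per_rank = reduce_scatter_block(counts)  # rank j holds only entry j
print(per_rank)  # [4, 2, 1] -- one receive count per rank
```

Per rank, the allreduce result is O(P) data while the reduce-scatter result is O(1), which is why the quoted branch prefers MPI_Reduce_scatter_block when the MPI implementation provides it (MPI-2.2).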