<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Fri, Jan 22, 2016 at 9:27 AM, Mark Adams <span dir="ltr"><<a href="mailto:mfadams@lbl.gov" target="_blank">mfadams@lbl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span><br><div><br></div></span><div>I said the Hypre setup cost is not scalable, </div></div></div></div></blockquote><div><br></div><div>I'd be a little careful here. Scaling for the matrix triple product is hard, and Hypre does put effort into scaling it. I don't have any data, however. Do you?</div></div></div></div></blockquote><div><br></div><div>I used it for PyLith and saw this. I did not think any AMG had scalable setup time.</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div>but it can be amortized over the iterations. You can quantify this</div><div>just by looking at the PCSetUp time as you increase the number of processes. I don't think they have a good</div><div>model for the memory usage, and if they do, I do not know what it is. However, Hypre generally takes more</div><div>memory than agglomeration MG methods like ML or GAMG.</div><div><br></div></div></div></div></blockquote><div><br></div><div>Agglomeration methods tend to have lower "grid complexity", that is, smaller coarse grids, than classical AMG like Hypre's. This is more of a constant-factor cost than a scaling issue, though. 
You can address this with parameters to some extent. But for elasticity, you want to at least try, if not start with, GAMG or ML.</div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div></div><div> Thanks,</div><div><br></div><div> Matt</div><span><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><br clear="all"><div><div><div dir="ltr">Giang</div></div></div>
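<div><br></div><div>The timing comparison suggested above can be sketched at the command line. This is a hedged sketch: <tt>./app</tt> stands in for your own PETSc application, the process counts are arbitrary, and older PETSc releases use <tt>-log_summary</tt> where newer ones use <tt>-log_view</tt>. Compare the PCSetUp event time in the log output as the process count grows:</div><pre>
# Compare PCSetUp time at increasing process counts (./app is a placeholder binary)
mpiexec -n 8   ./app -pc_type hypre -pc_hypre_type boomeramg -log_view
mpiexec -n 64  ./app -pc_type hypre -pc_hypre_type boomeramg -log_view

# For elasticity, also try the agglomeration/aggregation methods
mpiexec -n 8   ./app -pc_type gamg -log_view
mpiexec -n 8   ./app -pc_type ml   -log_view
</pre><div>If PCSetUp time grows much faster than KSPSolve time as processes are added, the setup is the scalability bottleneck; if it stays a small fraction of the total, it amortizes over the iterations as described above.</div>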
<br><div class="gmail_quote">On Mon, Jan 18, 2016 at 5:25 PM, Jed Brown <span dir="ltr"><<a href="mailto:jed@jedbrown.org" target="_blank">jed@jedbrown.org</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span>Hoang Giang Bui <<a href="mailto:hgbk2008@gmail.com" target="_blank">hgbk2008@gmail.com</a>> writes:<br>
<br>
</span><span>> Why P2/P2 is not for co-located discretization?<br>
<br>
</span>Matt typed "P2/P2" when he meant "P2/P1".<br>
</blockquote></div><br></div></div>
</blockquote></span></div><br><br clear="all"><span class="HOEnZb"><font color="#888888"><span><div><br></div>-- <br><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div>
</span></font></span></div></div>
</blockquote></div><br></div></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature">What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div>
</div></div>