<div dir="ltr">Sorry, should read: "any one MPI process is not involved in more than ~2000 <font size="2"><b>communicators</b>"</font><div><font size="2"><br></font></div><div><font size="2">Derek</font></div></div><br><div class="gmail_quote"><div dir="ltr">On Tue, Apr 3, 2018 at 11:47 AM Derek Gaston <<a href="mailto:friedmud@gmail.com">friedmud@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">On Tue, Apr 3, 2018 at 10:31 AM Satish Balay <<a href="mailto:balay@mcs.anl.gov" target="_blank">balay@mcs.anl.gov</a>> wrote:<br></div><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On Tue, 3 Apr 2018, Derek Gaston wrote:<br>> Which does bring up a point: I have been able to do solves before with<br>
> ~50,000 separate PETSc solves without issue. Is it because I was working<br>
> with MVAPICH on a cluster? Does it just have a higher limit?<br>
<br>
>> Don't know - but that's easy to find out with a simple test code..
>
> I get 2044 using mvapich on my cluster too.
>
> The only thing I can think of as to why those massive problems work for me
> is that any one MPI process is not involved in more than ~2000 processors
> (because the communicators are split as you go down the hierarchy). At most,
> a single MPI process will see ~hundreds of PETSc solves, but not thousands.
>
> That said: it's just because of the current nature of the solves I'm doing -
> it's definitely possible to have that not be the case with MOOSE.
>
> Derek
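
[Editor's note: below is a minimal sketch of the kind of "simple test code"
referred to above; it is not from the original thread. It duplicates
MPI_COMM_WORLD until MPI_Comm_dup fails and reports how many duplicates
succeeded. The MAX_COMMS cap and the MPI_ERRORS_RETURN error handler are
illustrative assumptions; some MPI implementations may abort internally
instead of returning an error once their communicator table is exhausted.]

#include <mpi.h>
#include <stdio.h>

#define MAX_COMMS 100000  /* illustrative upper bound (assumption) */

int main(int argc, char **argv)
{
    MPI_Comm comms[MAX_COMMS];
    int rank, n = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Ask MPI to return error codes instead of aborting, so the loop can
       detect the point at which no more communicators can be created.
       (Some implementations may still abort despite this.) */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    /* Duplicate MPI_COMM_WORLD until MPI_Comm_dup fails or the cap is hit. */
    while (n < MAX_COMMS &&
           MPI_Comm_dup(MPI_COMM_WORLD, &comms[n]) == MPI_SUCCESS)
        n++;

    if (rank == 0)
        printf("created %d duplicate communicators before failure\n", n);

    /* Free the duplicates so MPI_Finalize can clean up. */
    for (int i = 0; i < n; i++)
        MPI_Comm_free(&comms[i]);

    MPI_Finalize();
    return 0;
}

Compile with mpicc and run under mpiexec; the count reported in the thread
was 2044 with mvapich.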