[petsc-users] A bad commit affects MOOSE

Derek Gaston friedmud at gmail.com
Tue Apr 3 12:48:46 CDT 2018


Sorry, should read: "any one MPI process is not involved in more than ~2000
*communicators*"

Derek

On Tue, Apr 3, 2018 at 11:47 AM Derek Gaston <friedmud at gmail.com> wrote:

> On Tue, Apr 3, 2018 at 10:31 AM Satish Balay <balay at mcs.anl.gov> wrote:
>
>> On Tue, 3 Apr 2018, Derek Gaston wrote:
>> > Which does bring up a point: I have been able to do runs before with
>> > ~50,000 separate PETSc solves without issue.  Is it because I was
>> > working with MVAPICH on a cluster?  Does it just have a higher limit?
>>
>> Don't know - but that's easy to find out with a simple test code...
>>
>
> I get 2044 using MVAPICH on my cluster too.
>
> The only thing I can think of as to why those massive problems work for me
> is that any one MPI process is not involved in more than ~2000 processors
> (because the communicators are split as you go down the hierarchy).  At
> most, a single MPI process will see ~hundreds of PETSc solves but not
> thousands.
>
> That said: it's just because of the current nature of the solves I'm doing
> - it's definitely possible to have that not be the case with MOOSE.
>
> Derek
>
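For anyone who wants to check their own MPI stack, here is a minimal sketch of the kind of "simple test code" Satish suggests: duplicate MPI_COMM_WORLD until MPI_Comm_dup fails and report how many duplicates succeeded. The MAX_COMMS cap and the MPI_ERRORS_RETURN error handler are my own illustrative choices, not something from this thread; on MPICH-derived implementations the count typically comes out near the 2044 reported above.

/* commdup.c: count how many communicators MPI_Comm_dup will hand out
 * before the implementation runs out of context ids. */
#include <mpi.h>
#include <stdio.h>

#define MAX_COMMS 100000   /* arbitrary upper bound for the test */

static MPI_Comm comms[MAX_COMMS];

int main(int argc, char **argv)
{
    int n = 0, rank;

    MPI_Init(&argc, &argv);
    /* Return error codes instead of aborting, so the failure is visible. */
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    while (n < MAX_COMMS &&
           MPI_Comm_dup(MPI_COMM_WORLD, &comms[n]) == MPI_SUCCESS)
        n++;

    if (rank == 0)
        printf("MPI_Comm_dup succeeded %d times before failing\n", n);

    /* Clean up the duplicates we did get, then finalize. */
    while (n > 0)
        MPI_Comm_free(&comms[--n]);
    MPI_Finalize();
    return 0;
}

Something like "mpicc commdup.c -o commdup && mpiexec -n 2 ./commdup" should run it; since MPI_Comm_dup is collective over MPI_COMM_WORLD, every rank hits the limit on the same iteration.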