[petsc-users] A bad commit affects MOOSE
Jed Brown
jed at jedbrown.org
Tue Apr 3 12:52:22 CDT 2018
Derek Gaston <friedmud at gmail.com> writes:
> Sorry, should read: "any one MPI process is not involved in more than ~2000
> *communicators*"
Yes, as intended. Only the ranks in a communicator's group need to know
about the existence of that communicator.
> Derek
>
> On Tue, Apr 3, 2018 at 11:47 AM Derek Gaston <friedmud at gmail.com> wrote:
>
>> On Tue, Apr 3, 2018 at 10:31 AM Satish Balay <balay at mcs.anl.gov> wrote:
>>
>>> On Tue, 3 Apr 2018, Derek Gaston wrote:
>>> > Which does bring up a point: I have been able to run jobs before with
>>> > ~50,000 separate PETSc solves without issue. Is it because I was
>>> > working with MVAPICH on a cluster? Does it just have a higher limit?
>>>
>>> Don't know - but that's easy to find out with a simple test code...
>>>
>>
>> I get 2044 using mvapich on my cluster too.
>>
>> The only thing I can think of as to why those massive problems work for me
>> is that any one MPI process is not involved in more than ~2000 processors
>> (because the communicators are split as you go down the hierarchy). At
>> most, a single MPI process will see ~hundreds of PETSc solves but not
>> thousands.
>>
>> That said: this is just due to the current structure of the solves I'm
>> doing; it's definitely possible for that not to be the case with MOOSE.
>>
>> Derek
>>