[petsc-users] PETSc initialization error

Sam Guo sam.guo at cd-adapco.com
Thu Jun 25 14:18:05 CDT 2020


Hi Junchao,
   I have now encountered the same error in parallel. I am wondering whether a
parallel fix is needed as well.
[1]PETSC ERROR: #1 PetscInitialize() line 969 in
../../../petsc/src/sys/objects/pinit.c
PETSC ERROR: Logging has not been enabled.
You might have forgotten to call PetscInitialize().
PETSC ERROR: Logging has not been enabled.
You might have forgotten to call PetscInitialize().

On Sat, Jun 20, 2020 at 7:35 PM Sam Guo <sam.guo at cd-adapco.com> wrote:

> Hi Junchao,
>    Your patch works.
>
> Thanks,
> Sam
>
> On Sat, Jun 20, 2020 at 4:23 PM Junchao Zhang <junchao.zhang at gmail.com>
> wrote:
>
>>
>>
>> On Sat, Jun 20, 2020 at 12:24 PM Barry Smith <bsmith at petsc.dev> wrote:
>>
>>>
>>>    Junchao,
>>>
>>>      This is a good bug fix. It solves the problem when PetscInitialize()
>>> is called many times.
>>>
>>>      There is another fix you can do to keep PETSc's mpiuni from running
>>> out of attributes inside a single PETSc run:
>>>
>>>
>>> int MPI_Comm_create_keyval(MPI_Copy_function
>>> *copy_fn,MPI_Delete_function *delete_fn,int *keyval,void *extra_state)
>>> {
>>>   int i;
>>>
>>>   if (num_attr >= MAX_ATTR) {
>>>    for (i=0; i<num_attr; i++) {
>>>      if (!attr_keyval[i].extra_state) {
>>>
>> attr_keyval[i].extra_state is provided by the user (it could be NULL), so
>> we cannot rely on it.
>>
>>>        /* reuse this slot */
>>>        attr_keyval[i].extra_state = extra_state;
>>>        attr_keyval[i].del         = delete_fn;
>>>        *keyval                    = i;
>>>        return MPI_SUCCESS;
>>>      }
>>>   }
>>>   return MPIUni_Abort(MPI_COMM_WORLD,1);
>>> }
>>>   attr_keyval[num_attr].extra_state = extra_state;
>>>   attr_keyval[num_attr].del         = delete_fn;
>>>   *keyval                           = num_attr++;
>>>   return MPI_SUCCESS;
>>> }
>>>
>>>   This will work even if the user creates a large number of attributes, as
>>> long as they keep deleting old ones as they create new ones, so that the
>>> number outstanding at any one time stays below MAX_ATTR.
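>>>
>>>   A possible variant that does not rely on extra_state (addressing the
>>> point above) would be an explicit in-use flag in the keyval table that is
>>> cleared when a keyval is freed. This is only a sketch; the in_use field,
>>> the struct name, and the slot-reuse loop are hypothetical additions, not
>>> part of the current mpiuni source, and it assumes the surrounding mpiuni
>>> declarations (MAX_ATTR, MPIUni_Abort):
>>>
>>> typedef struct {
>>>   void                *extra_state;
>>>   MPI_Delete_function *del;
>>>   int                  in_use;  /* set on create, cleared when the keyval is freed */
>>> } MPIUni_Keyval;
>>>
>>> static MPIUni_Keyval attr_keyval[MAX_ATTR];
>>> static int           num_attr = 1;  /* slot 0 reserved, matching the existing counter */
>>>
>>> int MPI_Comm_create_keyval(MPI_Copy_function *copy_fn,MPI_Delete_function *delete_fn,int *keyval,void *extra_state)
>>> {
>>>   int i;
>>>
>>>   /* first try to reuse a slot whose keyval has been freed */
>>>   for (i=1; i<num_attr; i++) {
>>>     if (!attr_keyval[i].in_use) {
>>>       attr_keyval[i].extra_state = extra_state;
>>>       attr_keyval[i].del         = delete_fn;
>>>       attr_keyval[i].in_use      = 1;
>>>       *keyval = i;
>>>       return MPI_SUCCESS;
>>>     }
>>>   }
>>>   if (num_attr >= MAX_ATTR) return MPIUni_Abort(MPI_COMM_WORLD,1);
>>>   attr_keyval[num_attr].extra_state = extra_state;
>>>   attr_keyval[num_attr].del         = delete_fn;
>>>   attr_keyval[num_attr].in_use      = 1;
>>>   *keyval = num_attr++;
>>>   return MPI_SUCCESS;
>>> }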
>>>
>>> Barry
>>>
>>>
>>>
>>>
>>>
>>> On Jun 20, 2020, at 10:54 AM, Junchao Zhang <junchao.zhang at gmail.com>
>>> wrote:
>>>
>>> I don't understand what you mean by "session". Let's try this patch:
>>>
>>> diff --git a/src/sys/mpiuni/mpi.c b/src/sys/mpiuni/mpi.c
>>> index d559a513..c058265d 100644
>>> --- a/src/sys/mpiuni/mpi.c
>>> +++ b/src/sys/mpiuni/mpi.c
>>> @@ -283,6 +283,7 @@ int MPI_Finalize(void)
>>>    MPI_Comm_free(&comm);
>>>    comm = MPI_COMM_SELF;
>>>    MPI_Comm_free(&comm);
>>> +  num_attr = 1; /* reset the counter */
>>>    MPI_was_finalized = 1;
>>>    return MPI_SUCCESS;
>>>  }
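>>>
>>> For reference, a minimal sketch (not from the report above) of the usage
>>> pattern this reset is meant to support with the sequential mpiuni build:
>>> repeated PetscInitialize/PetscFinalize cycles in a single process, which
>>> previously exhausted the keyval slots:
>>>
>>> #include <petscsys.h>
>>>
>>> int main(int argc,char **argv)
>>> {
>>>   PetscErrorCode ierr;
>>>   int            i;
>>>
>>>   /* with num_attr reset in MPI_Finalize, each cycle starts from a clean
>>>      mpiuni keyval counter instead of running into MAX_ATTR */
>>>   for (i=0; i<100; i++) {
>>>     ierr = PetscInitialize(&argc,&argv,NULL,NULL);if (ierr) return ierr;
>>>     /* ... create and destroy PETSc objects ... */
>>>     ierr = PetscFinalize();if (ierr) return ierr;
>>>   }
>>>   return 0;
>>> }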
>>>
>>>
>>> --Junchao Zhang
>>>
>>>
>>> On Sat, Jun 20, 2020 at 10:48 AM Sam Guo <sam.guo at cd-adapco.com> wrote:
>>>
>>>> Typo: I mean “Assuming the initializer is only needed once for the entire
>>>> session”
>>>>
>>>> On Saturday, June 20, 2020, Sam Guo <sam.guo at cd-adapco.com> wrote:
>>>>
>>>>> Assuming the finalizer is only needed once for the entire session(?), I
>>>>> can put the initializer into a static block so it is called once, but
>>>>> where do I call the finalizer?
>>>>>
>>>>>
>>>>> On Saturday, June 20, 2020, Junchao Zhang <junchao.zhang at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> The counter num_attr should be recycled. But first, try calling PETSc
>>>>>> Initialize/Finalize only once to see if that fixes the error.
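>>>>>>
>>>>>> A minimal sketch of what "only once" could look like on the application
>>>>>> side; the wrapper names EnsureSlepcInitialized and ShutdownSlepc are
>>>>>> placeholders, not an existing API:
>>>>>>
>>>>>> #include <slepcsys.h>
>>>>>>
>>>>>> static int slepc_started = 0;  /* guard so the pair runs once per process */
>>>>>>
>>>>>> void EnsureSlepcInitialized(int argc,char **argv)
>>>>>> {
>>>>>>   if (!slepc_started) {
>>>>>>     /* SlepcInitialize also initializes PETSc if it is not initialized yet */
>>>>>>     SlepcInitialize(&argc,&argv,(char*)0,(char*)0);
>>>>>>     slepc_started = 1;
>>>>>>   }
>>>>>> }
>>>>>>
>>>>>> void ShutdownSlepc(void)
>>>>>> {
>>>>>>   /* call exactly once, e.g. from the application's shutdown path */
>>>>>>   if (slepc_started) {
>>>>>>     SlepcFinalize();
>>>>>>     slepc_started = 0;
>>>>>>   }
>>>>>> }
>>>>>>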
>>>>>> --Junchao Zhang
>>>>>>
>>>>>>
>>>>>> On Sat, Jun 20, 2020 at 12:48 AM Sam Guo <sam.guo at cd-adapco.com>
>>>>>> wrote:
>>>>>>
>>>>>>> To clarify, I call PETSc initialize and PETSc finalize every time I
>>>>>>> call SLEPc:
>>>>>>>
>>>>>>>   PetscInitializeNoPointers(argc,args,nullptr,nullptr);
>>>>>>>   SlepcInitialize(&argc,&args,static_cast<char*>(nullptr),help);
>>>>>>>   //calling slepc
>>>>>>>   SlepcFinalize();
>>>>>>>   PetscFinalize();
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Fri, Jun 19, 2020 at 10:32 PM Sam Guo <sam.guo at cd-adapco.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Dear PETSc team,
>>>>>>>>    When I called SLEPc multiple times, I eventually got the following
>>>>>>>> error:
>>>>>>>>
>>>>>>>> MPI operation not supported by PETSc's sequential MPI wrappers
>>>>>>>> [0]PETSC ERROR: #1 PetscInitialize() line 967 in
>>>>>>>> ../../../petsc/src/sys/objects/pinit.c
>>>>>>>> [0]PETSC ERROR: #2 SlepcInitialize() line 262 in
>>>>>>>> ../../../slepc/src/sys/slepcinit.c
>>>>>>>> [0]PETSC ERROR: #3 SlepcInitializeNoPointers() line 359 in
>>>>>>>> ../../../slepc/src/sys/slepcinit.c
>>>>>>>> PETSC ERROR: Logging has not been enabled.
>>>>>>>> You might have forgotten to call PetscInitialize().
>>>>>>>>
>>>>>>>>   I debugged it: it is because of the following check in
>>>>>>>> petsc/src/sys/mpiuni/mpi.c
>>>>>>>>
>>>>>>>> if (num_attr >= MAX_ATTR)
>>>>>>>>
>>>>>>>> in function int MPI_Comm_create_keyval(MPI_Copy_function
>>>>>>>> *copy_fn,MPI_Delete_function *delete_fn,int *keyval,void *extra_state)
>>>>>>>>
>>>>>>>> num_attr is declared static and keeps increasing every
>>>>>>>> time MPI_Comm_create_keyval is called.
>>>>>>>>
>>>>>>>> I am using petsc 3.11.3 but found that 3.13.2 has the same logic.
>>>>>>>>
>>>>>>>> Is this a bug, or am I not using it correctly?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Sam
>>>>>>>>
>>>>>>>
>>>