[petsc-dev] threadcomm memory leak
Shri
abhyshr at mcs.anl.gov
Tue Jul 17 22:19:50 CDT 2012
On Jul 17, 2012, at 10:06 PM, Barry Smith wrote:
>
> On Jul 17, 2012, at 9:58 PM, Shri wrote:
>
>>
>> On Jul 17, 2012, at 4:22 PM, Barry Smith wrote:
>>
>>>
>>> On Jul 17, 2012, at 2:24 PM, Shri wrote:
>>>
>>>>
>>>> On Jul 17, 2012, at 1:17 PM, Barry Smith wrote:
>>>>
>>>>>
>>>>> Why all this messy attribute checking on Petsc_Counter_keyval ... stuff? Why not just ALWAYS call PetscCommDuplicate() and then check it for Petsc_ThreadComm_keyval and stick in the threadcomm if it is not there? That is, there is no reason to spread knowledge of Petsc_Counter_keyval into other places since other places can just use PetscCommDuplicate() to manage that.
>>>> Thanks, Did what you suggested and pushed.
>>>> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/70470108a897
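[A minimal sketch of the attach routine along the lines Barry suggests; this is not necessarily the code in the changeset above, and when the extra reference taken by PetscCommDuplicate() is released is left to the detach/destroy side.]

PetscErrorCode PetscThreadCommAttach(MPI_Comm comm,PetscThreadComm tcomm)
{
  PetscErrorCode ierr;
  MPI_Comm       icomm;
  PetscMPIInt    flg;
  void           *ptr;

  PetscFunctionBegin;
  /* PetscCommDuplicate() handles the Petsc_Counter_keyval bookkeeping and always returns the inner communicator */
  ierr = PetscCommDuplicate(comm,&icomm,PETSC_NULL);CHKERRQ(ierr);
  /* attach the threadcomm only if it is not already there */
  ierr = MPI_Attr_get(icomm,Petsc_ThreadComm_keyval,&ptr,&flg);CHKERRQ(ierr);
  if (!flg) {
    tcomm->refct++;
    ierr = MPI_Attr_put(icomm,Petsc_ThreadComm_keyval,(void*)tcomm);CHKERRQ(ierr);
  }
  PetscFunctionReturn(0);
}

With this shape the MPIUNI special case disappears: calling it on PETSC_COMM_WORLD and then on PETSC_COMM_SELF is safe because the second call finds the attribute already attached.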
>>>>
>>>> Should we generalize the Attach/Detach stuff instead of having specific implementations for each attribute?
>>>>
>>>> PetscCommAttachAttribute(MPI_Comm comm,PetscMPIInt keyval,void *attr)
>>>> PetscCommDetachAttribute(MPI_Comm comm,PetscMPIInt keyval)
>>>>
>>>> One hiccup I see is for attributes that have their own reference counting, e.g., the threadcomm and Hong's elemental grid stuff.
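[A hypothetical detach counterpart, just to make the proposal concrete; the attach side would look like the sketch above, and PetscCommDetachAttribute is not an existing PETSc routine. It also shows where the hiccup lives: any attribute-specific reference counting has to happen in the keyval's delete callback, which a generic helper knows nothing about.]

PetscErrorCode PetscCommDetachAttribute(MPI_Comm comm,PetscMPIInt keyval)
{
  PetscErrorCode ierr;
  MPI_Comm       icomm;
  PetscMPIInt    flg;
  void           *ptr;

  PetscFunctionBegin;
  ierr = PetscCommDuplicate(comm,&icomm,PETSC_NULL);CHKERRQ(ierr);
  ierr = MPI_Attr_get(icomm,keyval,&ptr,&flg);CHKERRQ(ierr);
  if (flg) {
    /* this fires the delete callback registered with the keyval, if any;
       that callback is the only place attribute-specific cleanup or
       reference counting can happen */
    ierr = MPI_Attr_delete(icomm,keyval);CHKERRQ(ierr);
  }
  /* drop the reference taken by PetscCommDuplicate() above */
  ierr = PetscCommDestroy(&icomm);CHKERRQ(ierr);
  PetscFunctionReturn(0);
}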
>>>
>>> Do they really need to have their own reference counting?
>> Yes, because several communicators could possibly share them.
>
> Ok, then the destructor called by MPI_Comm_del() would decrease the reference count by one and, if it gets to zero, would do the delete as usual.
Yes, this is what is currently done: PetscThreadCommDestroy, the destruction routine for the threadcomm, is called by MPI_Comm_del() and destroys the threadcomm object only when its reference count reaches zero.
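[Roughly this pattern; the callback name and the threadcomm internals below are assumed for illustration, not quoted from the source.]

/* delete callback tied to Petsc_ThreadComm_keyval; MPI invokes it when the
   attribute is deleted or the inner communicator is freed */
PetscMPIInt Petsc_DelThreadComm(MPI_Comm comm,PetscMPIInt keyval,void *attr,void *extra_state)
{
  PetscErrorCode ierr;

  ierr = PetscThreadCommDestroy((PetscThreadComm)attr);
  return ierr ? MPI_ERR_OTHER : MPI_SUCCESS;
}

PetscErrorCode PetscThreadCommDestroy(PetscThreadComm tcomm)
{
  PetscFunctionBegin;
  if (!tcomm) PetscFunctionReturn(0);
  if (--tcomm->refct > 0) PetscFunctionReturn(0); /* still shared by another communicator */
  /* ... count reached zero: shut down the worker threads and free tcomm ... */
  PetscFunctionReturn(0);
}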
>
>>
>>> If they are ALWAYS imbedded inside an MPI attribute then we can let MPI do the reference counting and there is no need for its own counting.
>> How do we let MPI do the reference counting if different communicators are being used?
>>> But if you are passing them around directly willy-nilly in subroutine calls then they do need their own reference counting. Can you see if it is possible for them NOT to have their own reference counting?
>>
>> One way I see this happening is by having a global struct that holds the keyval and a reference counter
>>
>> typedef struct {
>>   PetscMPIInt PetscThreadComm_keyval; /* MPI attribute key */
>>   PetscInt    refct;                  /* reference count */
>> } PetscCommAttr;
>>
>> PetscCommAttr threadcomm_attr;
>>
>
> No, No, this is overdesigning it.
>
> Barry
>
>> instead of just having a global attribute key as done right now.
>>
>> and then we could have
>> /* Attaches the attribute to the comm and increments the reference count */
>> PetscCommAttributeAttach(MPI_Comm comm,PetscCommAttr threadcomm_attr,void *attr_val);
>> /* Detaches the attribute and decrements the reference count */
>> PetscCommAttributeDetach(MPI_Comm comm,PetscCommAttr threadcomm_attr);
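[Passing the struct by pointer so the count can actually be updated (a detail the prototypes above leave open), the attach side of this scheme, which Barry rejects above as overdesign, might look like:]

PetscErrorCode PetscCommAttributeAttach(MPI_Comm comm,PetscCommAttr *cattr,void *attr_val)
{
  PetscErrorCode ierr;
  MPI_Comm       icomm;
  PetscMPIInt    flg;
  void           *ptr;

  PetscFunctionBegin;
  ierr = PetscCommDuplicate(comm,&icomm,PETSC_NULL);CHKERRQ(ierr);
  ierr = MPI_Attr_get(icomm,cattr->PetscThreadComm_keyval,&ptr,&flg);CHKERRQ(ierr);
  if (!flg) {
    ierr = MPI_Attr_put(icomm,cattr->PetscThreadComm_keyval,attr_val);CHKERRQ(ierr);
    cattr->refct++; /* one more communicator now holds this attribute */
  }
  PetscFunctionReturn(0);
}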
>>
>> Shri
>>
>>>
>>> Barry
>>>
>>>>
>>>> Thanks,
>>>> Shri
>>>>
>>>>>
>>>>> Barry
>>>>>
>>>>>
>>>>> +PetscErrorCode PetscThreadCommAttach(MPI_Comm comm,PetscThreadComm tcomm)
>>>>> +{
>>>>> +  PetscErrorCode   ierr;
>>>>> +  MPI_Comm         icomm;
>>>>> +  PetscMPIInt      flg,flg1,flg2;
>>>>> +  void             *ptr;
>>>>> +  PetscCommCounter *counter;
>>>>> +
>>>>> +  PetscFunctionBegin;
>>>>> +  ierr = MPI_Attr_get(comm,Petsc_Counter_keyval,&counter,&flg);CHKERRQ(ierr);
>>>>> +  if (!flg) { /* Communicator not initialized yet */
>>>>> +    ierr = PetscCommDuplicate(comm,&icomm,PETSC_NULL);CHKERRQ(ierr);
>>>>> +    tcomm->refct++;
>>>>> +    ierr = MPI_Attr_put(icomm,Petsc_ThreadComm_keyval,tcomm);CHKERRQ(ierr);
>>>>> +  } else {
>>>>> +    ierr = MPI_Attr_get(comm,Petsc_InnerComm_keyval,&ptr,&flg1);CHKERRQ(ierr);
>>>>> +    if (flg1) {
>>>>> +      /* the attribute stores the inner communicator; copy it out of the void* */
>>>>> +      ierr = PetscMemcpy(&icomm,&ptr,sizeof(MPI_Comm));CHKERRQ(ierr);
>>>>> +      ierr = MPI_Attr_get(icomm,Petsc_ThreadComm_keyval,&ptr,&flg2);CHKERRQ(ierr);
>>>>> +      if (!flg2) {
>>>>> +        tcomm->refct++;
>>>>> +        ierr = MPI_Attr_put(icomm,Petsc_ThreadComm_keyval,tcomm);CHKERRQ(ierr);
>>>>> +      }
>>>>> +    }
>>>>> +  }
>>>>> +  PetscFunctionReturn(0);
>>>>> +}
>>>>>
>>>>>
>>>>> On Jul 17, 2012, at 3:14 AM, Shri wrote:
>>>>>
>>>>>> Barry, Jed,
>>>>>> Please see the attached patch based on Barry's suggestions. I tested this with MPI and MPIUNI and did not see any memory leaks. Let me know what you think.
>>>>>>
>>>>>> Thanks,
>>>>>> Shri<threadcomm.patch>
>>>>>>
>>>>>>
>>>>>> On Jul 16, 2012, at 11:00 PM, Barry Smith wrote:
>>>>>>
>>>>>>>
>>>>>>> Note that in general I would advocate in any code (especially PETSc code) NEVER blindly putting an attribute into an MPI_Comm; always check if the attribute is already there and only put it there if it is not.
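[The same rule as a tiny helper; PetscCommPutAttrIfAbsent is a made-up name for illustration only.]

PetscErrorCode PetscCommPutAttrIfAbsent(MPI_Comm comm,PetscMPIInt keyval,void *value)
{
  PetscErrorCode ierr;
  PetscMPIInt    flg;
  void           *existing;

  PetscFunctionBegin;
  /* look first; attach only if no attribute is already there */
  ierr = MPI_Attr_get(comm,keyval,&existing,&flg);CHKERRQ(ierr);
  if (!flg) {ierr = MPI_Attr_put(comm,keyval,value);CHKERRQ(ierr);}
  PetscFunctionReturn(0);
}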
>>>>>>>
>>>>>>> Barry
>>>>>>>
>>>>>>> On Jul 16, 2012, at 10:48 PM, Jed Brown wrote:
>>>>>>>
>>>>>>>> On Mon, Jul 16, 2012 at 10:44 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>>>>>>>> /* PETSC_COMM_SELF = PETSC_COMM_WORLD for MPIUNI */
>>>>>>>> #if !defined(PETSC_HAVE_MPIUNI)
>>>>>>>> ierr = PetscCommDuplicate(PETSC_COMM_WORLD,&icomm,PETSC_NULL);CHKERRQ(ierr);
>>>>>>>> ierr = MPI_Attr_put(icomm,Petsc_ThreadComm_keyval,(void*)tcomm);CHKERRQ(ierr);
>>>>>>>> tcomm->refct++; /* Share the threadcomm with PETSC_COMM_SELF */
>>>>>>>> #endif
>>>>>>>>
>>>>>>>> ierr = PetscCommDuplicate(PETSC_COMM_SELF,&icomm,PETSC_NULL);CHKERRQ(ierr);
>>>>>>>> ierr = MPI_Attr_put(icomm,Petsc_ThreadComm_keyval,(void*)tcomm);CHKERRQ(ierr);
>>>>>>>>
>>>>>>>>
>>>>>>>> I would not do it this way. Instead I would write a general routine that attaches a threadcomm to an MPI_Comm; this routine would get the threadcomm_keyval and, if it did NOT find it, would put the attribute; otherwise it would know one was already there. Say it is called PetscThreadCommAttach(MPI_Comm, threadcomm); then you would just write
>>>>>>>>
>>>>>>>> PetscThreadCommAttach(PETSC_COMM_WORLD, tcomm);
>>>>>>>> PetscThreadCommAttach(PETSC_COMM_SELF,tcomm); /* won't attach it again for MPIUni because it is already there */
>>>>>>>>
>>>>>>>> This looks good, but there is also a ref-counting check needed in PetscThreadCommDetach/Destroy because the thread pool (presumably) needs to be closed before PetscFinalize returns.
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>