[MOAB-dev] synchronisation of ranges

Vijay S. Mahadevan vijay.m at gmail.com
Thu Apr 2 11:42:17 CDT 2015


Lukasz, PSTATUS_SHARED just implies that an entity is shared by at
least two processors, while PSTATUS_MULTISHARED implies that more
than two processors share the entity [1]. You can see that the set of
multishared entities is a subset of the shared ones.

It is normal to have entities whose bit status is PSTATUS_SHARED or
PSTATUS_MULTISHARED, depending on the partition of your mesh. As an
example, on a mesh with leading dimension d, the (d-1)-dimensional
entities you get are typically only shared, whereas among the
(d-2)-dimensional entities (with d=3) you could get several
multi-shared entities (the same holds for vertices).
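
For reference, a minimal sketch of how the two states could be
separated with ParallelComm::filter_pstatus (assuming an Interface*
mbiface and a ParallelComm* pcomm are already set up, as in the
snippet further down this thread):

Range verts, shared, multishared;
// all vertices visible on this processor
moab::ErrorCode merr = mbiface->get_entities_by_dimension(0, 0, verts, false);MB_CHK_ERR(merr);
// SHARED bit set: shared with at least one other processor
merr = pcomm->filter_pstatus(verts, PSTATUS_SHARED, PSTATUS_AND, -1, &shared);MB_CHK_ERR(merr);
// MULTISHARED bit set: shared with more than one other processor
merr = pcomm->filter_pstatus(verts, PSTATUS_MULTISHARED, PSTATUS_AND, -1, &multishared);MB_CHK_ERR(merr);
// entities shared with exactly one other processor
Range simply_shared = moab::subtract(shared, multishared);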

Vijay

[1] ParallelComm.cpp:2495 -
http://ftp.mcs.anl.gov/pub/fathom/moab-docs/ParallelComm_8cpp_source.html

On Thu, Apr 2, 2015 at 10:19 AM, Lukasz Kaczmarczyk
<Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote:
> Hello Vijay,
>
> I need one clarification: under what conditions are entities PSTATUS_MULTISHARED? I have a case where I think an entity should be multi-shared, but it is only PSTATUS_SHARED.
>
> Can you explain under what condition an entity is MULTISHARED?
>
> Kind regards,
> Lukasz
>
>
>
>> On 1 Apr 2015, at 23:15, Lukasz Kaczmarczyk <Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote:
>>
>>
>>> On 1 Apr 2015, at 22:52, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:
>>>
>>> I think I understand your use case a little better now but there are
>>> still points of confusion.
>>>
>>>> Proc 0 Range { 1,2,3,4,6 (shared) }
>>>> Proc 1 Range { 7,8,9,10 (shared) }
>>>>
>>>> After synchronisation,
>>>>
>>>> Proc 0 Range { 1,2,3,4,6,10 }
>>>> Proc 1 Range { 6,7,8,9,10 }
>>>
>>> Are these the entity handles on each of the processors, or are you
>>> storing the GLOBAL_ID corresponding to the entity handles here? If
>>> you are storing the entity handles directly, then you can decipher
>>> the local id of the handle, which usually starts from 1 on each processor.
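>>>
>>> (Just to illustrate the distinction, a small sketch; 'iface' is your
>>> moab::Interface and 'vh' a vertex handle, both placeholders here:)
>>>
>>> // local id encoded in the handle (starts from 1 on each processor)
>>> moab::EntityID local_id = iface->id_from_handle(vh);
>>> // global id stored in the conventional GLOBAL_ID tag
>>> moab::Tag gid_tag;
>>> moab::ErrorCode rval = iface->tag_get_handle(GLOBAL_ID_TAG_NAME, 1, moab::MB_TYPE_INTEGER, gid_tag);
>>> int gid;
>>> rval = iface->tag_get_data(gid_tag, &vh, 1, &gid);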
>>
>> Those are not IDs; I have entity handles in mind.
>>
>>>
>>> The reason I ask is because you cannot directly use a Range here, since
>>> locally only EntityHandle 6 is defined on proc_0 and 10 is defined on
>>> proc_1. EntityHandle 10 does not exist on proc_0, since that handle is
>>> not associated with a vertex there. If, however, you probe the remote
>>> handle of the shared entity, then you can decipher that the local
>>> entity 10 on proc_1 (which is shared) corresponds to the local handle
>>> 6 on proc_0.
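>>>
>>> (A rough sketch of that probing with ParallelComm::get_sharing_data;
>>> 'pcomm' and the entity handle 'eh' are placeholders here:)
>>>
>>> int sharing_procs[MAX_SHARING_PROCS];
>>> moab::EntityHandle sharing_handles[MAX_SHARING_PROCS];
>>> unsigned char pstat;
>>> unsigned int num_sharing;
>>> moab::ErrorCode rval = pcomm->get_sharing_data(eh, sharing_procs, sharing_handles, pstat, num_sharing);
>>> // sharing_handles[i] is the handle of the same entity on processor sharing_procs[i]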
>>
>> Yes, I am aware of this; I exchange owner entity handles to avoid this problem.
>>
>>>
>>> Part of the mis-communication here is also how MOAB looks at
>>> shared/ghosted vs how PETSc defines ghosted components. I had to go
>>> through similar mapping issues when writing the DMMoab interface in
>>> PETSc.
>>
>> In principle I do all the communication using PETSc, and that way I sidestep that problem. In my case I have some additional complexity related to different approximation spaces. I keep global, PETSc local and PETSc global indices in tags. On each entity I can have many tags, related to different approximation spaces, fields or ranks. Once I have synchronised ranges, all communication from that point on can be done through PETSc vectors.
>>
>> I have two schemes: the whole mesh on every processor with parallel algebra, and a parallel mesh with parallel algebra. I am now making some improvements to the second one, which is where this problem comes from.
>>
>>>
>>>> So there exists a global range { 1,2,3,4,6,7,8,9,10 } distributed over two processors, where entities 6 and 10 are shared. With that at hand I can do some operations on this global range. I do not have to exchange tag values each time I update the tags. Sometimes I can calculate those values, or they are set from a ghosted PETSc vector. With tag exchange I would need additional, avoidable communication.
>>>
>>> Back to the problem at hand: you could choose to filter out only the
>>> shared vertices and call pcomm->get_remote_handles [1] to create a
>>> global list of entities on which you want to set tag data. Please be
>>> careful not to call exchange_tags/reduce_tags or any function in
>>> ParallelComm that calls exchange_tags internally, since that might
>>> overwrite the data you explicitly set on the shared
>>> entities.
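>>>
>>> (A minimal sketch of the filtering and the local tag set; 'my_tag' is
>>> just a placeholder for whatever double tag you want to write:)
>>>
>>> Range verts, shared_verts;
>>> moab::ErrorCode merr = mbiface->get_entities_by_dimension(0, 0, verts, false);MB_CHK_ERR(merr);
>>> // keep only the vertices with the SHARED bit set
>>> merr = pcomm->filter_pstatus(verts, PSTATUS_SHARED, PSTATUS_AND, -1, &shared_verts);MB_CHK_ERR(merr);
>>> // set the tag values locally; no exchange_tags afterwards, so they are not overwritten
>>> std::vector<double> vals(shared_verts.size(), 1.0);
>>> merr = mbiface->tag_set_data(my_tag, shared_verts, vals.data());MB_CHK_ERR(merr);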
>>
>> That could be very useful. I overlooked this. Thanks.
>>
>> Lukasz
>>
>>>
>>> Vijay
>>>
>>> [1] http://ftp.mcs.anl.gov/pub/fathom/moab-docs/classmoab_1_1ParallelComm.html#a07c87013b6216b47a410dc8f1a1b3904
>>>
>>> On Wed, Apr 1, 2015 at 4:26 PM, Lukasz Kaczmarczyk
>>> <Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote:
>>>> Hello,
>>>>
>>>> Yes, in some sense it is similar.
>>>>
>>>> However, I do not want to exchange tags; I would like to set some tag values (not necessarily the same values on each processor) on some entities collectively. As an input I have some range of entities on each processor, but I don't know if it is complete.
>>>>
>>>>
>>>> Proc 0 Range { 1,2,3,4,6 (shared) }
>>>> Proc 1 Range { 7,8,9,10 (shared) }
>>>>
>>>> After synchronisation,
>>>>
>>>> Proc 0 Range { 1,2,3,4,6,10 }
>>>> Proc 1 Range { 6,7,8,9,10 }
>>>>
>>>> So there exists a global range { 1,2,3,4,6,7,8,9,10 } distributed over two processors, where entities 6 and 10 are shared. With that at hand I can do some operations on this global range. I do not have to exchange tag values each time I update the tags. Sometimes I can calculate those values, or they are set from a ghosted PETSc vector. With tag exchange I would need additional, avoidable communication.
>>>>
>>>> Regards,
>>>> L.
>>>>
>>>>
>>>>
>>>>> On 1 Apr 2015, at 22:06, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:
>>>>>
>>>>> This functionality looks quite similar to what the exchange_tags
>>>>> routine on ParallelComm provides. Have you already tried it to
>>>>> synchronize the tag data?
>>>>>
>>>>> http://ftp.mcs.anl.gov/pub/fathom/moab-docs/classmoab_1_1ParallelComm.html#ab27be002508fa7b3bf2ad1f68461f1e9
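>>>>>
>>>>> (For reference, a minimal call sketch, assuming a tag named "MY_TAG"
>>>>> is already defined on the owned entities; the name is a placeholder:)
>>>>>
>>>>> Range verts;
>>>>> moab::ErrorCode merr = mbiface->get_entities_by_dimension(0, 0, verts, false);MB_CHK_ERR(merr);
>>>>> // pushes the owners' values of MY_TAG onto the shared/ghost copies
>>>>> merr = pcomm->exchange_tags("MY_TAG", verts);MB_CHK_ERR(merr);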
>>>>>
>>>>> I'll look over the function you implemented again to better understand
>>>>> your use-case if exchange_tags doesn't solve the issue.
>>>>>
>>>>> Vijay
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Apr 1, 2015 at 3:15 PM, Lukasz Kaczmarczyk
>>>>> <Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote:
>>>>>> Hello Vijay,
>>>>>>
>>>>>> Yes, this is the first step; the next one is to synchronise the entities in the ranges between processors.
>>>>>>
>>>>>> Imagine that you partition tetrahedra, then take a subset of those tetrahedra (on each part), and next take the adjacencies, e.g. the faces of those tetrahedra. On each processor you need to collectively set data on the tags of those faces. On some processors you may not have any tetrahedron that is part of the subset, but you can still have some faces which are on the skin of that subset. In such a case you need to synchronise those entities in the range.
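>>>>>>
>>>>>> (The adjacency step, roughly, assuming 'tets' holds the tetrahedron
>>>>>> subset on this processor:)
>>>>>>
>>>>>> Range faces;
>>>>>> // 2D entities adjacent to the selected tetrahedra (union over the subset);
>>>>>> // pass true instead of false to create faces that do not exist yet
>>>>>> moab::ErrorCode rval = mbiface->get_adjacencies(tets, 2, false, faces, moab::Interface::UNION);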
>>>>>>
>>>>>> I have already made a somewhat eclectic implementation of this, see line
>>>>>> 491: PetscErrorCode Core::synchronise_entities(Range &ents,int verb) {
>>>>>>
>>>>>> in
>>>>>> http://userweb.eng.gla.ac.uk/lukasz.kaczmarczyk/MoFem/html/_core_8cpp_source.html
>>>>>>
>>>>>> It looks like it is working, but it has not been tested properly.
>>>>>>
>>>>>> Kind regards,
>>>>>> Lukasz
>>>>>>
>>>>>>
>>>>>>> On 1 Apr 2015, at 20:30, Vijay S. Mahadevan <vijay.m at gmail.com> wrote:
>>>>>>>
>>>>>>> Lukasz,
>>>>>>>
>>>>>>> The Range data structure stores locally visible data (EntityHandles)
>>>>>>> only, and you can obtain these from various queries on the Interface
>>>>>>> class. If you specifically want to store the shared entities in a
>>>>>>> range, you can apply a pstatus filter like below:
>>>>>>>
>>>>>>> Range vlocal, vowned, vghost, adjs;
>>>>>>> // Get all local vertices: owned + ghosted (including shared)
>>>>>>> merr = mbiface->get_entities_by_dimension(0, 0, vlocal, false);MB_CHK_ERR(merr);
>>>>>>>
>>>>>>> // owned entities: filter out everything that has the NOT_OWNED bit set
>>>>>>> merr = pcomm->filter_pstatus(vlocal,PSTATUS_NOT_OWNED,PSTATUS_NOT,-1,&vowned);MB_CHK_ERR(merr);
>>>>>>>
>>>>>>> // filter all the non-owned and shared entities out of the list
>>>>>>> adjs = moab::subtract(vlocal, vowned);
>>>>>>> merr = pcomm->filter_pstatus(adjs,PSTATUS_GHOST|PSTATUS_INTERFACE,PSTATUS_OR,-1,&vghost);MB_CHK_ERR(merr);
>>>>>>> adjs = moab::subtract(adjs, vghost);
>>>>>>> vlocal = moab::subtract(vlocal, adjs);
>>>>>>>
>>>>>>> Instead of PSTATUS_GHOST, you could use PSTATUS_SHARED if you want
>>>>>>> to filter a different set of (shared) entities locally. Hope this helps.
>>>>>>> If not, let us know what is unclear.
>>>>>>>
>>>>>>> Vijay
>>>>>>>
>>>>>>> On Wed, Apr 1, 2015 at 6:43 AM, Lukasz Kaczmarczyk
>>>>>>> <Lukasz.Kaczmarczyk at glasgow.ac.uk> wrote:
>>>>>>>> Hello,
>>>>>>>>
>>>>>>>> Is there a quick method to synchronise ranges?
>>>>>>>>
>>>>>>>> 1) A Range on each processor keeps entities; some of those entities are shared with one or more other processors.
>>>>>>>>
>>>>>>>> 2) I need to collectively synchronise those ranges, such that if an entity is in the range on one of the processors, it is also in the range on the other processors (only if it is shared).
>>>>>>>>
>>>>>>>> 3) I do not need to send entities, only to synchronise the entities in the range which are shared.
>>>>>>>>
>>>>>>>> This is needed for collective operations on tags: instead of exchanging tag values, I need to synchronise the range and then collectively set the tags.
>>>>>>>>
>>>>>>>>
>>>>>>>> Kind regards,
>>>>>>>> Lukasz Kaczmarczyk
>>>>>>>>
>>>>>>
>>>>
>>
>

