[MOAB-dev] vertex to cell adjacencies in parallel
Hong-Jun Kim
hongjun at mcs.anl.gov
Tue Oct 11 13:31:26 CDT 2011
As Tim mentioned, you can reduce your code to the following single function call to get the shared and owned elements.
Range shared_owned_vols;
rval = pcomm.get_shared_entities(-1, shared_owned_vols, 3, true, true);
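For reference, a minimal sketch of how that call might look with error checking (assuming pcomm is the ParallelComm instance you already use and that shared entities have been resolved; the comments just spell out the argument meanings):

// other_proc = -1 means entities shared with any processor; dim = 3 selects
// the volume elements; the last two booleans are the "interface" and "owned"
// flags Tim mentioned
Range shared_owned_vols;
ErrorCode rval = pcomm.get_shared_entities(-1, shared_owned_vols, 3, true, true);
if (MB_SUCCESS != rval) return rval;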
Thanks.
-----------------------------
Hong-Jun Kim
Post-doc researcher
MCS, Argonne National Laboratory
9700 S. Cass Ave. B240/R2147
Argonne, IL 60439
630-252-4791
hongjun at mcs.anl.gov
-----------------------------
----- Original Message -----
From: "Tim Tautges" <tautges at mcs.anl.gov>
To: "Hong-Jun Kim" <hongjun at mcs.anl.gov>
Cc: "Lorenzo Alessio Botti" <ihabiamx at yahoo.it>, moab-dev at mcs.anl.gov
Sent: Tuesday, October 11, 2011 12:58:21 PM
Subject: Re: [MOAB-dev] vertex to cell adjacencies in parallel
You should also be able to get the same thing with get_shared_entities, passing true for interface and owned.
- tim
On 10/11/2011 11:01 AM, Hong-Jun Kim wrote:
> Hi, Lorenzo
>
> After going down the wrong path, I think I finally found a bug in your code.
> I thought the bug came from MOAB, so I had been digging in the wrong place.
>
> To get owned elements, line 226 in your code should be changed as follows (not pstatus[0] != PSTATUS_SHARED).
>
> if (pstatus[0] == PSTATUS_SHARED && pstatus[0] != PSTATUS_NOT_OWNED && pstatus[0] != PSTATUS_GHOST)
>
> Moreover, I think you'd better use bitwise operators to get it, as follows:
> if ((pstatus[0] & PSTATUS_SHARED) && !(pstatus[0] & PSTATUS_NOT_OWNED) && !(pstatus[0] & PSTATUS_GHOST))
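>
> For example, a filtering loop over your volume elements might look roughly like this (just a sketch, not tested against your code; 'mb', 'pcomm' and the 'vols' range stand for whatever you already have):
>
> // get the parallel status tag from the ParallelComm instance
> Tag pstatus_tag = pcomm.pstatus_tag();
> Range owned_shared;
> for (Range::iterator vit = vols.begin(); vit != vols.end(); ++vit) {
>   EntityHandle vol = *vit;
>   unsigned char pstatus;
>   ErrorCode rval = mb.tag_get_data(pstatus_tag, &vol, 1, &pstatus);
>   if (MB_SUCCESS != rval) return rval;
>   // keep elements that are shared, locally owned, and not ghosts
>   if ((pstatus & PSTATUS_SHARED) && !(pstatus & PSTATUS_NOT_OWNED) &&
>       !(pstatus & PSTATUS_GHOST))
>     owned_shared.insert(vol);
> }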
>
> Thanks.
>
> -----------------------------
> Hong-Jun Kim
> Post-doc researcher
> MCS, Argonne National Laboratory
> 9700 S. Cass Ave. B240/R2147
> Argonne, IL 60439
> 630-252-4791
> hongjun at mcs.anl.gov
> -----------------------------
>
> ----- Original Message -----
> From: "Lorenzo Alessio Botti"<ihabiamx at yahoo.it>
> To: "Hong-Jun Kim"<hongjun at mcs.anl.gov>
> Sent: Friday, October 7, 2011 10:52:49 AM
> Subject: Re: [MOAB-dev] vertex to cell adjacencies in parallel
>
> Hi Hong-Jun,
> I attach the code; you can run it as
> mpiexec -n 2 [exe_name] -P -R [path_to_one_of_the_mesh_files_here_attached]
>
> Let me explain it a bit.
> I start from a 5x5x5 hex cube with 2 partitions.
>
> The cells on the interface form, more or less, a 5x5x1 row of 25 hexes.
> (They do in the hex27 mesh, while in the hex8 mesh one hex is misplaced for some reason.)
>
> 25 elements have a face on the interface, 25 elements share a face with owned cells,
> and 20 elements have a face on the boundary. You can take a look at the .vtk output.
>
> Let me comment on the output:
> iface faces size 25: ok
> ghost cells size 50, shared = 25: ok, every face on the iface has two neighbors, one owned and one not owned
>
> Now the code searches the vertex adjacencies for the 25 owned cells; this is the output:
> owned entities = 25, adjacent entities not found = 45. This is wrong: the number of adjacencies not found should be 25 smaller, i.e. equal to the number of faces on the boundary (20).
>
> Thanks a lot.
> Lorenzo
>
>
>
>
>
> On Oct 7, 2011, at 4:53 PM, Hong-Jun Kim wrote:
>
>> Could you please send me the code to replicate the issue?
>> Thanks.
>>
>> -----------------------------
>> Hong-Jun Kim
>> Post-doc researcher
>> MCS, Argonne National Laboratory
>> 9700 S. Cass Ave. B240/R2147
>> Argonne, IL 60439
>> 630-252-4791
>> hongjun at mcs.anl.gov
>> -----------------------------
>>
>> ----- Original Message -----
>>> From: "Lorenzo Alessio Botti"<ihabiamx at yahoo.it>
>>> To: "Tim Tautges"<tautges at mcs.anl.gov>, MOAB-dev at mcs.anl.gov
>>> Sent: Friday, October 7, 2011 9:47:52 AM
>>> Subject: Re: [MOAB-dev] vertex to cell adjacencies in parallel
>>> Thanks a lot for the reply.
>>> Let me know if you need the code to replicate this issue...
>>>
>>>>> For my application it would be very interesting to be able to
>>>>> partition based on entity sets.
>>>>>
>>>>
>>>> If those entity sets form a covering for the entities (each of the
>>>> entities included in exactly one of those sets), and those sets can
>>>> be identified by a tag and optionally tag value, then you should be
>>>> able to already read in parallel based on that partition (which has
>>>> to already be in the file). See section 5 of the user's guide (in
>>>> the doc subdir) and the description of how to read according to a
>>>> material_set partition.
>>>
>>> My request was a bit too cryptic...
>>> I'm now reading parallel meshes generated with mbzoltan, with the
>>> partition tag set to PARALLEL_PARTITION.
>>> It would be useful (for geometric h-multigrid, adaptivity based on
>>> element agglomeration, etc.)
>>> to repartition based on tagged entity sets that form a covering for
>>> the entities (such sets might be generated by requesting a lot of
>>> partitions from the partitioner) instead of mesh elements.
>>> A simpler trick would be to create a lot of tagged entity sets and run
>>> in parallel with n_processes = n_entity_sets/n_entity_sets_per_process,
>>> once again using the entity sets as mesh elements.
>>> In this context the entity sets can be viewed as very general elements
>>> (e.g. polygons).
>>> In both cases the ability to get the adjacencies of entity sets would
>>> also be required.
>>>
>>> By the way, I've seen that you already have a polygonal element
>>> implementation, but I don't know if it was introduced with the same
>>> goal.
>>>
>>> For now I'm still working on the standard-element dG finite element
>>> library, but later on it would be interesting to explore more general
>>> implementations.
>>>
>>> Lorenzo
>>>
>>> On Oct 7, 2011, at 3:54 PM, Tim Tautges wrote:
>>>
>>>>
>>>>
>>>> On 10/07/2011 04:05 AM, Lorenzo Alessio Botti wrote:
>>>>> Hi all,
>>>>> I have a question about getting adjacencies in parallel.
>>>>>
>>>>> After having partitioned a mesh, resolved shared entities, and
>>>>> exchanged ghost cells, I fail to update the mesh cell adjacencies so
>>>>> that they include ghost cells.
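>>>>> (For reference, that setup is roughly the following; the argument
>>>>> values here are illustrative and may not match my code exactly.)
>>>>> rval = pcomm.resolve_shared_ents(0, 3, 2);            // resolve shared entities: 3d elements, 2d interface
>>>>> rval = pcomm.exchange_ghost_cells(3, 0, 1, 0, true);  // one layer of 3d ghosts, bridged through vertices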
>>>>>
>>>>> The face-to-element adjacencies are correctly updated, that is, the
>>>>> faces obtained with
>>>>> pcomm.get_iface_entities(1,2,faces); (I'm running on 2 processors)
>>>>> have two neighbors, one owned and one not owned.
>>>>>
>>>>> However, the vertex-to-cell adjacencies do not seem to be updated to
>>>>> include ghost cells, and even after
>>>>> mb.get_entities_by_dimension(0, 3, entities);
>>>>> mb.get_adjacencies(entities, 0, true, vertexEntities, Interface::UNION);
>>>>> entities.clear();
>>>>> mb.get_adjacencies(vertexEntities, 3, true, entities, Interface::UNION);
>>>>> I'm not able to find the vertex-to-non-owned-cell adjacencies.
>>>>>
>>>>> Is get_adjacencies(..., create_if_missing = true, ...) supposed to
>>>>> work also for non-owned cells?
>>>>> It seems that the vertex-to-non-owned-cell adjacencies are detected
>>>>> but not created.
>>>>>
>>>>
>>>> That sounds like a bug; Hong-Jun, could you look at that?
>>>>
>>>> MOAB's parallel model is that all locally-represented entities,
>>>> which include ghost and shared interface entities, should appear
>>>> locally as a serial mesh, with all adjacencies and other API calls
>>>> available.
>>>>
>>>>> Overall MOAB works fine in parallel and
>>>>> pcomm.assign_global_ids(...) is useful.
>>>>> Are you also planning to introduce some repartitioning
>>>>> capabilities?
>>>>
>>>> Yes, this year's work plan includes migrating entities for
>>>> repartitioning. In theory, the partitioning class in tools/mbzoltan
>>>> should work to compute the new partition in parallel, though I think
>>>> it's currently hardwired to run on one processor (on my list to get
>>>> rid of that restriction).
>>>>
>>>>> For my application it would be very interesting to be able to
>>>>> partition based on entity sets.
>>>>>
>>>>
>>>> If those entity sets form a covering for the entities (each of the
>>>> entities included in exactly one of those sets), and those sets can
>>>> be identified by a tag and optionally tag value, then you should be
>>>> able to already read in parallel based on that partition (which has
>>>> to already be in the file). See section 5 of the user's guide (in
>>>> the doc subdir) and the description of how to read according to a
>>>> material_set partition.
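>>>>
>>>> Something along these lines should do it (option names from memory,
>>>> so double-check the spelling against the user's guide; "mesh.h5m"
>>>> just stands for your file):
>>>>
>>>> ErrorCode rval = mb.load_file("mesh.h5m", 0,
>>>>     "PARALLEL=READ_PART;PARTITION=MATERIAL_SET;"
>>>>     "PARTITION_DISTRIBUTE;PARALLEL_RESOLVE_SHARED_ENTS");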
>>>>
>>>> - tim
>>>>
>>>>> Thanks for help.
>>>>> Lorenzo
>>>>
>>>> --
>>>> ================================================================
>>>> "You will keep in perfect peace him whose mind is
>>>> steadfast, because he trusts in you." Isaiah 26:3
>>>>
>>>> Tim Tautges Argonne National Laboratory
>>>> (tautges at mcs.anl.gov) (telecommuting from UW-Madison)
>>>> phone: (608) 263-8485 1500 Engineering Dr.
>>>> fax: (608) 263-4499 Madison, WI 53706
>>>>
>
>
--
================================================================
"You will keep in perfect peace him whose mind is
steadfast, because he trusts in you." Isaiah 26:3
Tim Tautges Argonne National Laboratory
(tautges at mcs.anl.gov) (telecommuting from UW-Madison)
phone: (608) 263-8485 1500 Engineering Dr.
fax: (608) 263-4499 Madison, WI 53706