[MOAB-dev] DMMOAB (PETSc 3.5.1)

Tim Tautges timothy.tautges at cd-adapco.com
Tue Aug 19 11:05:19 CDT 2014


Note that -R will give you contiguous element numbering, but for an unstructured mesh you cannot have both that
and contiguous vertex numbering simultaneously (in the serial mesh instance in which mbpart runs).  As Vijay says,
renumbering after parallel loading and resolving/ghosting is the only way to fix that (and should be possible, I think).
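
Roughly, that renumbering could look like the following (an untested sketch; the file name, read options, and
mesh dimension are placeholders, and error checking is omitted):

   #include <mpi.h>
   #include "moab/Core.hpp"
   #include "moab/ParallelComm.hpp"

   int main(int argc, char **argv)
   {
     MPI_Init(&argc, &argv);
     {
       moab::Core mb;
       moab::ParallelComm pcomm(&mb, MPI_COMM_WORLD);
       // Read one part per rank, then resolve shared/ghosted entities.
       const char *opts = "PARALLEL=READ_PART;PARTITION=PARALLEL_PARTITION;"
                          "PARALLEL_RESOLVE_SHARED_ENTS";
       mb.load_file("mesh.h5m", 0, opts);
       // Renumber after the parallel load: assign contiguous GLOBAL_IDs
       // across processors (set handle, dimension, start id).
       pcomm.assign_global_ids(0, 3, 1);
     }
     MPI_Finalize();
     return 0;
   }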

- tim

On 08/19/2014 11:00 AM, Gerd Heber wrote:
> 'mbpart -R' doesn't appear to make a difference. Same behavior. G.
>
> -----Original Message-----
> From: Vijay S. Mahadevan [mailto:vijay.m at gmail.com]
> Sent: Tuesday, August 19, 2014 10:48 AM
> To: Gerd Heber
> Cc: MOAB dev
> Subject: Re: [MOAB-dev] DMMOAB (PETSc 3.5.1)
>
> Yes, the entities in your sets carrying the PARALLEL_PARTITION tag look badly segmented. You could try running the original mesh through the mbpart tool with the reorder option (-R) to get a new mesh with contiguous entity numbering. Let me know whether the behavior still falls back to a native Vec when using this new mesh.
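>
> For reference, assuming the usual "mbpart <#parts> <input> <output>" usage, that would look something like this (the part count and file names are placeholders):
>
>      mbpart -R 4 mesh.h5m mesh_reordered.h5m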
>
> Vijay
>
> On Tue, Aug 19, 2014 at 10:41 AM, Gerd Heber <gheber at hdfgroup.org> wrote:
>> Maybe that's what's going on. Attached is the output from "mbsize -ll"
>> and the range returned by DMMoabGetAllVertices on process 1.
>> There are quite a few gaps. Does that make sense?
>>
>> G.
>>
>> -----Original Message-----
>> From: Vijay S. Mahadevan [mailto:vijay.m at gmail.com]
>> Sent: Tuesday, August 19, 2014 10:27 AM
>> To: Gerd Heber
>> Cc: MOAB dev
>> Subject: Re: [MOAB-dev] DMMOAB (PETSc 3.5.1)
>>
>> You could use the mbsize tool installed at $MOAB_INSTALL/bin/mbsize
>> with the "-ll" option to list all entities:
>>
>> mbsize -ll <filename>
>>
>> You can track down the PARALLEL_PARTITION tag on the entity sets and find out whether the vertices of the corresponding elements are numbered contiguously (in terms of GLOBAL_ID). If the numbering is segmented, DMMoab internally falls back to a native PETSc Vec.
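>>
>> For example, something like this sketch can dump the vertex GLOBAL_ID range per partition set so that gaps stand out (untested; the file name is a placeholder and error checking is omitted):
>>
>>      #include <algorithm>
>>      #include <cstdio>
>>      #include <vector>
>>      #include "moab/Core.hpp"
>>
>>      int main()
>>      {
>>        moab::Core mb;
>>        mb.load_file("mesh.h5m");
>>        moab::Tag part_tag, gid_tag;
>>        mb.tag_get_handle("PARALLEL_PARTITION", 1, moab::MB_TYPE_INTEGER, part_tag);
>>        mb.tag_get_handle("GLOBAL_ID", 1, moab::MB_TYPE_INTEGER, gid_tag);
>>        // Find all entity sets carrying the PARALLEL_PARTITION tag.
>>        moab::Range parts;
>>        mb.get_entities_by_type_and_tag(0, moab::MBENTITYSET, &part_tag, 0, 1, parts);
>>        for (moab::Range::iterator it = parts.begin(); it != parts.end(); ++it) {
>>          moab::Range elems, verts;
>>          mb.get_entities_by_handle(*it, elems);
>>          // The sets hold elements; reach the vertices via adjacencies.
>>          mb.get_adjacencies(elems, 0, false, verts, moab::Interface::UNION);
>>          if (verts.empty()) continue;
>>          std::vector<int> ids(verts.size());
>>          mb.tag_get_data(gid_tag, verts, &ids[0]);
>>          std::sort(ids.begin(), ids.end());
>>          // If (max - min + 1) != count, the numbering has gaps.
>>          std::printf("set %lu: %zu vertices, GLOBAL_ID %d..%d\n",
>>                      (unsigned long)*it, ids.size(), ids.front(), ids.back());
>>        }
>>        return 0;
>>      }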
>> Sorry about the confusion; I should've documented this better.
>> This inconsistent behavior needs to change, and I'm working on a patch that will possibly perform renumbering on the fly so that contiguous memory access is available for MOAB-based Vecs.
>>
>> Vijay
>>
>> On Tue, Aug 19, 2014 at 10:13 AM, Gerd Heber <gheber at hdfgroup.org> wrote:
>>> What's the best way to verify that? G.
>>>
>>> -----Original Message-----
>>> From: Vijay S. Mahadevan [mailto:vijay.m at gmail.com]
>>> Sent: Tuesday, August 19, 2014 9:58 AM
>>> To: Gerd Heber
>>> Cc: MOAB dev
>>> Subject: Re: [MOAB-dev] DMMOAB (PETSc 3.5.1)
>>>
>>>> DMMoabCreateVector(dm, existing_tag, PETSC_NULL, PETSC_TRUE,
>>>> PETSC_FALSE, &X)
>>>
>>> Yes, this should preserve the values of existing_tag in the X vector. There is currently an implementation quirk: underneath the MOAB-specific Vec, we check whether the local entities (vertices) are numbered contiguously so that tag_iterate can be used. If that's not the case, DMMoab actually creates a native PETSc Vec underneath and manages the memory through that. I am working on a patch to remove this limitation, but I'm not sure whether that is the issue you have hit now.
>>>
>>> Can you just verify whether the local vertices in the mesh on each
>>> processor are contiguously numbered? I.e.,
>>>
>>> P1: (1-10), P2: (11-20) instead of P1: (1-5, 11-15), P2: (6-10, 16-20)
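>>>
>>> (For context, the reason contiguity matters: tag_iterate hands back a raw pointer into the tag storage, but only over a run of handles that is contiguous in memory. A rough sketch of that kind of access, assuming "mb" is the moab::Core, "tag" a double-valued dense tag, and "verts" the locally owned vertices:)
>>>
>>>      int count = 0;
>>>      void *data = NULL;
>>>      mb.tag_iterate(tag, verts.begin(), verts.end(), count, data);
>>>      double *vals = static_cast<double *>(data);
>>>      // 'count' may come back smaller than verts.size(): the pointer is
>>>      // only valid over one contiguous run of handles, so gaps in the
>>>      // numbering defeat zero-copy access.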
>>>
>>> I will let you know once this patch is ready (along with a PR to track in PETSc) so that you can try it out.
>>>
>>> Vijay
>>>
>>> On Tue, Aug 19, 2014 at 9:49 AM, Gerd Heber <gheber at hdfgroup.org> wrote:
>>>> Vijay, here's something that I find confusing, or maybe I'm just doing something wrong.
>>>>
>>>> I call
>>>>
>>>> DMMoabCreateVector(dm, existing_tag, PETSC_NULL, PETSC_TRUE,
>>>> PETSC_FALSE, &X)
>>>>
>>>> and would expect X to contain the (non-zero) values of existing_tag, but X's values are all zero.
>>>> Is that the expected behavior?
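>>>>
>>>> (For concreteness, here is roughly how I'm checking it, with error handling omitted:)
>>>>
>>>>      Vec X;
>>>>      PetscReal vmin, vmax;
>>>>      DMMoabCreateVector(dm, existing_tag, PETSC_NULL, PETSC_TRUE, PETSC_FALSE, &X);
>>>>      VecMin(X, PETSC_NULL, &vmin);  /* index argument not needed */
>>>>      VecMax(X, PETSC_NULL, &vmax);
>>>>      /* Both come back 0.0 even though the tag values are non-zero. */
>>>>      PetscPrintf(PETSC_COMM_WORLD, "X min = %g, max = %g\n",
>>>>                  (double)vmin, (double)vmax);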
>>>>
>>>> Thanks, G.

-- 
Timothy J. Tautges
Manager, Directed Meshing, CD-adapco
Phone: 608-354-1459
timothy.tautges at cd-adapco.com

