[MOAB-dev] loading a VTK mesh in parallel (.pvtu)

Lorenzo Alessio Botti bottilorenzo at gmail.com
Wed Oct 30 12:21:19 CDT 2013


Ok, got it. Tim and Iulian, thanks a lot for the help. 
I’ll let you know as soon as I have a version of the vtu reader/writer that is suitable for inclusion in MOAB 
(hopefully I’ll be able to find a suitable XML reader, since I discovered that the one I’m using right now is not fully open).

Thanks again.
Lorenzo 

On 30 Oct 2013, at 17:47, Tim Tautges <tautges at mcs.anl.gov> wrote:

> 
> 
> On 10/30/2013 11:36 AM, Lorenzo Alessio Botti wrote:
>> I see, thanks for sharing this information.
>> I understand that, in your opinion, what I had planned to do is not useful, at least not for efficiency reasons, since the scaling in parallel is quite good.
>> Makes sense. I don’t want to reinvent the wheel.
>> 
>> That leaves me with the problem of dealing with periodic boundaries. How can I force entities on the domain boundaries to be shared across (possibly) different processors?
>> Is there a way to tag the boundary vertices with some periodic_global_id tag and then resolve shared entities based on this tag?
>> 
> 
> If you give them the same global id, they'll be tied together in parallel as if they were the same vertex.  If they're both on the same proc, though, they'll remain distinct. Also, if you output the mesh to an .h5m file after resolving, they'll be represented as the same vertex in the file, which might not be what you want.  Just giving you a heads-up about the behavior.  We've run into some tricky behavior that way with structured meshes on the globe (they're horizontally periodic, and sometimes a given proc has a whole horizontal slice, and sometimes not, so we have separate notions of "locally" and "globally" periodic).
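> 
> A minimal, untested sketch of that approach with the MOAB API (here mb is a moab::Core, pcomm a moab::ParallelComm, and v_local a hypothetical handle of a local periodic-boundary vertex; the matching of periodic pairs is up to you):
> 
>   #include "moab/Core.hpp"
>   #include "moab/ParallelComm.hpp"
>   #include "MBTagConventions.hpp"
> 
>   // Give both copies of a periodic vertex pair the same GLOBAL_ID ...
>   moab::Tag gid_tag;
>   moab::ErrorCode rval = mb.tag_get_handle(GLOBAL_ID_TAG_NAME, 1,
>                                            moab::MB_TYPE_INTEGER, gid_tag);
>   int pair_id = 42;                // id shared by both copies of the pair
>   rval = mb.tag_set_data(gid_tag, &v_local, 1, &pair_id);
> 
>   // ... then resolve sharing; vertices with equal global ids on
>   // different procs are treated as the same vertex.
>   rval = pcomm->resolve_shared_ents(0, 3, 2);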
> 
> - tim
> 
>> Thanks for the help.
>> Lorenzo
>> 
>> 
>> 
>> 
>> 
>> On 30 Oct 2013, at 17:07, Tim Tautges <tautges at mcs.anl.gov> wrote:
>> 
>>> The information compiled during resolve_shared_ents can't really be stored in files, because this information includes the handles a given entity is represented by on other procs, and that depends on load order and (for ghost entities) message arrival times.  The resolution of shared vertices/non-vertices is done using global ids for the vertices, but not global ids for the entities (since those entities, like interior edges/faces, may not be explicitly represented in the mesh file).  However, from our observations, resolve_shared_ents scales rather well, at least up to 16k procs and 32m elements (and probably up to 500k procs and 1b elems, based on some more recent timings).  So, I don't really think you'll have a timing problem with this.  The trouble is if you don't have a global id for vertices.  In that case, you'll have to use the ParallelMergeMesh, as Iulian said.  But even that scales pretty well (though we haven't measured perf on large #'s of procs, just out to maybe a few k procs).
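>>> 
>>> A rough, untested sketch of the two options (the per-rank file name and the have_global_ids flag are hypothetical, and MPI is assumed to be initialized already):
>>> 
>>>   #include "moab/Core.hpp"
>>>   #include "moab/ParallelComm.hpp"
>>>   #include "moab/ParallelMergeMesh.hpp"
>>> 
>>>   moab::Core mb;
>>>   moab::ParallelComm pcomm(&mb, MPI_COMM_WORLD);
>>>   moab::ErrorCode rval = mb.load_file("part_on_this_rank.vtu");  // one part per proc
>>> 
>>>   if (have_global_ids)
>>>     rval = pcomm.resolve_shared_ents(0, 3, 2);     // match skin entities via GLOBAL_ID
>>>   else {
>>>     moab::ParallelMergeMesh pmm(&pcomm, 1.0e-8);   // tolerance is problem dependent
>>>     rval = pmm.merge();                            // match skin vertices geometrically
>>>   }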
>>> 
>>> - tim
>>> 
>>> On 10/30/2013 10:54 AM, Lorenzo Alessio Botti wrote:
>>>>> 
>>>>> ------------------------------------------------------------------------------------------------------------------------
>>>>> it will be tough to do it this way. The tags you want to save all start with a double underscore (__), and they
>>>>> are not saved in an HDF5 file. My understanding is that you want to save them in your format (vtu?), with each part in a
>>>>> different file.
>>>> 
>>>> Yes, exactly what I’d like to do.
>>>> 
>>>>> You will need to restore the "MBEntityHandle"-type data somehow. For example, for a node that is shared between 2
>>>>> processors, each processor knows the handle on the other processor, in the form of a tag.
>>>>> 
>>>> 
>>>> So, for each local shared entity, I need to save the entity handles of the corresponding non-local shared entities. This makes sense.
>>>> 
>>>>> It will be hard to restore the handle on the other processor from the information you save. You can search by
>>>>> global id, of course, but then why do it this way, if you can already do it by calling resolve_shared_ents? Do you
>>>>> want to rewrite all that logic? And replicate it for higher dimensions (shared edges and faces)?
>>>> 
>>>> I see; the problem is that I cannot know the entity handle before reading, so I need to store the global id and then
>>>> obtain the entity handle from it.
>>>> I also need to store the information about the sharing processors in order to know where I have to search to
>>>> match the global ids.
>>>> Is there a way to ask other processes for all the entities with a specified tag and value? Something like
>>>> get_entities_by_type_and_tag() that works in parallel?
>>>> I guess this is the logic you were referring to.
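>>>> 
>>>> (For reference, the serial form of that query would be something like the untested snippet below, with mb a moab::Core; a parallel "ask the other procs" version would have to be built on top of it, which I understand is essentially what resolve_shared_ents already does internally.)
>>>> 
>>>>   moab::Tag gid_tag;
>>>>   mb.tag_get_handle(GLOBAL_ID_TAG_NAME, 1, moab::MB_TYPE_INTEGER, gid_tag);
>>>>   int wanted_id = 42;                          // the global id to look for
>>>>   const void* vals[] = { &wanted_id };
>>>>   moab::Range matches;
>>>>   mb.get_entities_by_type_and_tag(0, moab::MBVERTEX, &gid_tag, vals, 1, matches);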
>>>> 
>>>>> 
>>>>> Or maybe you can use a local index in each file; the tags you need to save are the 5 parallel tags.
>>>> 
>>>> Do you mean the bits indicating the ParallelStatus?
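>>>> (If I read MBParallelConventions.h correctly, the five tags are __PARALLEL_STATUS, __PARALLEL_SHARED_PROC, __PARALLEL_SHARED_PROCS, __PARALLEL_SHARED_HANDLE and __PARALLEL_SHARED_HANDLES; the status tag holds the bit field, so e.g. the shared vertices on a proc should be retrievable with the untested call below, pcomm being a moab::ParallelComm.)
>>>> 
>>>>   moab::Range shared_verts;
>>>>   pcomm->get_pstatus_entities(0 /*vertices*/, PSTATUS_SHARED, shared_verts);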
>>>> 
>>>>> Is the mesh structured? Do you know about ScdInterface? Maybe your mesh is not structured.
>>>> 
>>>> Yes, my meshes are unstructured.
>>>> 
>>>>> 
>>>>> exchange_tags and reduce_tags will need to know the handles of entities on the other processors; otherwise you cannot
>>>>> communicate.
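>>>>> For example, something along these lines (rough and untested; my_tag is a hypothetical field tag already defined on the owned entities):
>>>>> 
>>>>>   moab::Range shared_ents;
>>>>>   pcomm->get_shared_entities(-1 /*with any proc*/, shared_ents);
>>>>>   std::vector<moab::Tag> tags(1, my_tag);
>>>>>   pcomm->exchange_tags(tags, tags, shared_ents);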
>>>>> 
>>>>> But maybe I don't understand the question :(
>>>> 
>>>> I think you got the point, and you have already helped me clarify what I actually need.
>>>> Thanks.
>>>> Lorenzo
>>>> 
>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>>    The reason for doing so is that some situations are easier to manage in serial, e.g. tagging entities as shared on
>>>>>    periodic boundaries and deciding which processes they are going to communicate with.
>>>>> 
>>>>> 
>>>>>    The possibility to resolve in parallel is great in case the mesh is repartitioned in parallel, but if the mesh
>>>>>    does not change during the computation, doing part of the work in serial in a preliminary phase gives me more
>>>>>    control (at least this is my perception).
>>>>> 
>>>>> So in general, we partition in serial (although we will do some repartitioning in parallel soon; we are now using
>>>>> Zoltan for repartitioning when we read some climate files).
>>>>> 
>>>>> 
>>>>>    Thanks again.
>>>>>    Bests.
>>>>>    Lorenzo
>>>> 
>>> 
>> 
>> 
> 
> -- 
> ================================================================
> "You will keep in perfect peace him whose mind is
>  steadfast, because he trusts in you."               Isaiah 26:3
> 
>             Tim Tautges            Argonne National Laboratory
>         (tautges at mcs.anl.gov)      (telecommuting from UW-Madison)
> phone (gvoice): (608) 354-1459      1500 Engineering Dr.
>            fax: (608) 263-4499      Madison, WI 53706
