[MOAB-dev] loading a VTK mesh in parallel (.pvtu)
Lorenzo Alessio Botti
bottilorenzo at gmail.com
Wed Oct 30 11:36:30 CDT 2013
I see, thanks for sharing this information.
I understand that, in your opinion, what I had planned to do is not worthwhile, at least not for efficiency reasons, since resolve_shared_ents scales quite well in parallel.
Makes sense. I don’t want to reinvent the wheel.
That leaves me with the problem of dealing with periodic boundaries. How can I force entities on the domain boundaries to be shared across (possibly) different processors?
Is there a way to tag the boundary vertices with some periodic_global_id tag and then resolve shared entities based on this tag?
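Something like the following is what I have in mind: an untested sketch, assuming resolve_shared_ents accepts a user-provided id tag (the id_tag argument of ParallelComm::resolve_shared_ents), and where the tag name "PERIODIC_GLOBAL_ID" is my own invention.

    // Untested sketch: resolve sharing from a user-provided vertex id tag.
    // Assumes the id_tag argument of ParallelComm::resolve_shared_ents;
    // the tag name "PERIODIC_GLOBAL_ID" is hypothetical.
    #include "moab/Core.hpp"
    #include "moab/ParallelComm.hpp"

    using namespace moab;

    ErrorCode resolve_with_periodic_ids(Interface& mb, ParallelComm& pcomm,
                                        EntityHandle fileset)
    {
      Tag pid_tag;
      ErrorCode rval = mb.tag_get_handle("PERIODIC_GLOBAL_ID", 1, MB_TYPE_INTEGER,
                                         pid_tag, MB_TAG_DENSE | MB_TAG_CREAT);
      if (MB_SUCCESS != rval) return rval;
      // ... assign the same id to vertices that must be identified, i.e. to
      // matching vertices on opposite periodic boundaries ...
      return pcomm.resolve_shared_ents(fileset, 3, -1, &pid_tag);
    }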
Thanks for the help.
Lorenzo
On 30 Oct 2013, at 17:07, Tim Tautges <tautges at mcs.anl.gov> wrote:
> The information compiled during resolve_shared_ents can't really be stored in files, because it includes the handles by which a given entity is represented on other procs, and those depend on load order and (for ghost entities) on message arrival times. The resolution of shared vertices/non-vertices is done using global ids for the vertices, but not global ids for the entities, since those entities (like interior edges/faces) may not be explicitly represented in the mesh file.
>
> However, from our observations, resolve_shared_ents scales rather well, at least up to 16k procs and 32m elements (and probably up to 500k procs and 1b elements, based on some more recent timings). So I don't really think you'll have a timing problem with this.
>
> The trouble is if you don't have a global id for the vertices. In that case, you'll have to use ParallelMergeMesh, as Iulian said. But even that scales pretty well (though we haven't measured performance on large numbers of procs, just out to maybe a few thousand procs).
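For reference, the ParallelMergeMesh fallback Tim mentions would look roughly like this; an untested sketch, assuming the (ParallelComm*, tolerance) constructor and the merge() entry point, with the tolerance value being my assumption.

    // Untested sketch: merge coincident vertices across processors when no
    // global ids are available. Assumes the ParallelMergeMesh(ParallelComm*,
    // double) constructor and its merge() entry point.
    #include "moab/ParallelComm.hpp"
    #include "moab/ParallelMergeMesh.hpp"

    using namespace moab;

    ErrorCode merge_without_global_ids(ParallelComm& pcomm)
    {
      const double tol = 1.0e-10; // geometric matching tolerance (an assumption)
      ParallelMergeMesh pmerge(&pcomm, tol);
      return pmerge.merge();
    }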
>
> - tim
>
> On 10/30/2013 10:54 AM, Lorenzo Alessio Botti wrote:
>>>
>>> it will be tough to do it this way. The tags you want to save all start with a double underscore (__), and they
>>> are not saved in an HDF5 file. My understanding is that you want to save them in your format (vtu?), each part in a
>>> different file.
>>
>> Yes, exactly what I’d like to do.
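>> Concretely, I was thinking of passing an explicit tag list to the writer so that the double-underscore tags are
>> skipped. A sketch, assuming the tag_list/num_tags arguments of Interface::write_file (MOAB's own VTK writer emits
>> legacy .vtk, but the filtering would be the same in my .vtu writer):
>>
>>     // Sketch: write one file per part, passing an explicit tag list so that
>>     // internal tags (names starting with "__") are skipped. Assumes the
>>     // tag_list/num_tags arguments of Interface::write_file.
>>     #include <string>
>>     #include <vector>
>>     #include "moab/Core.hpp"
>>
>>     using namespace moab;
>>
>>     ErrorCode write_part(Interface& mb, EntityHandle partset, const char* fname)
>>     {
>>       std::vector<Tag> all_tags, keep;
>>       ErrorCode rval = mb.tag_get_tags(all_tags);
>>       if (MB_SUCCESS != rval) return rval;
>>       for (size_t i = 0; i < all_tags.size(); ++i) {
>>         std::string name;
>>         if (MB_SUCCESS == mb.tag_get_name(all_tags[i], name) &&
>>             0 != name.compare(0, 2, "__"))
>>           keep.push_back(all_tags[i]);
>>       }
>>       return mb.write_file(fname, 0, 0, &partset, 1,
>>                            keep.empty() ? 0 : &keep[0], (int)keep.size());
>>     }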
>>
>>> You will need to somehow restore the "MBEntityHandle"-type tags. For example, for a node shared between two
>>> processors, each processor knows the handle on the other processor, stored in the form of a tag.
>>>
>>
>> So, for each local shared entity, I need to save the entity handles of the corresponding non-local shared entities. This makes sense.
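>> In other words, per shared entity this is the information I would have to reproduce. A sketch using
>> ParallelComm::get_sharing_data to show what resolve_shared_ents fills in:
>>
>>     // Sketch: for one entity, print the sharing processors and the remote
>>     // handles that resolve_shared_ents computed; this is exactly the
>>     // information that would have to be saved and restored by hand.
>>     #include <iostream>
>>     #include "moab/ParallelComm.hpp"
>>
>>     using namespace moab;
>>
>>     void print_sharing(ParallelComm& pcomm, EntityHandle ent)
>>     {
>>       int procs[MAX_SHARING_PROCS];
>>       EntityHandle handles[MAX_SHARING_PROCS];
>>       unsigned char pstatus = 0;
>>       unsigned int nprocs = 0;
>>       if (MB_SUCCESS == pcomm.get_sharing_data(ent, procs, handles, pstatus, nprocs))
>>         for (unsigned int i = 0; i < nprocs; ++i)
>>           std::cout << "proc " << procs[i] << ", remote handle "
>>                     << handles[i] << "\n";
>>     }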
>>
>>> It will be hard to restore the handle on the other processor from the information you save. You can search by
>>> global id, of course, but then why do it this way, if you can already do it by calling resolve_shared_ents? Do you
>>> want to rewrite all that logic, and replicate it for higher dimensions (shared edges and faces)?
>>
>> I see: the problem is that I cannot know the entity handle before reading, so I need to store the global id and then
>> obtain the entity handle from it.
>> I also need to store the sharing-processor information in order to know where to search to
>> match the global ids.
>> Is there a way to ask other processes for all the entities with a specified tag and value? Something like
>> get_entities_by_type_and_tag() that works in parallel?
>> I guess this is the logic you were referring to.
>>
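>> Locally the query I have in mind is the usual serial one, as in the sketch below; the cross-processor matching
>> is the part that resolve_shared_ents already implements.
>>
>>     // Serial sketch: find the local vertices carrying a given tag value.
>>     // The cross-processor counterpart of this query is the logic that
>>     // resolve_shared_ents already implements.
>>     #include "moab/Core.hpp"
>>     #include "moab/Range.hpp"
>>
>>     using namespace moab;
>>
>>     ErrorCode find_tagged_vertices(Interface& mb, Tag id_tag, int id, Range& verts)
>>     {
>>       const void* vals[] = { &id };
>>       return mb.get_entities_by_type_and_tag(0, MBVERTEX, &id_tag, vals, 1, verts);
>>     }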
>>>
>>> Or maybe you can use a local index in each file; the tags you need to save are the 5 parallel tags.
>>
>> Do you mean the bits indicating the ParallelStatus?
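>> For completeness, this is how I understand the five tags and the status bits are reachable; a sketch, assuming
>> ParallelComm's tag accessors and the PSTATUS_* constants from MBParallelConventions.h:
>>
>>     // Sketch: accessing the five parallel tags and testing a ParallelStatus
>>     // bit. Assumes ParallelComm's tag accessors and the PSTATUS_* constants
>>     // from MBParallelConventions.h.
>>     #include "moab/ParallelComm.hpp"
>>     #include "MBParallelConventions.h"
>>
>>     using namespace moab;
>>
>>     bool is_shared(Interface& mb, ParallelComm& pcomm, EntityHandle ent)
>>     {
>>       // The other four tags: sharedp_tag(), sharedps_tag(),
>>       // sharedh_tag(), sharedhs_tag().
>>       Tag pstatus_tag = pcomm.pstatus_tag();
>>       unsigned char status = 0;
>>       if (MB_SUCCESS != mb.tag_get_data(pstatus_tag, &ent, 1, &status))
>>         return false;
>>       return 0 != (status & PSTATUS_SHARED);
>>     }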
>>
>>> Is the mesh structured? Do you know about ScdInterface? Maybe your mesh is not structured.
>>
>> Yes, my meshes are unstructured.
>>
>>>
>>> exchange_tags and reduce_tags will need to know the handles of entities on the other processors; otherwise you
>>> cannot communicate.
>>>
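>> Right, so once sharing is resolved the communication itself reduces to something like this sketch (assuming the
>> std::vector<Tag> overload of ParallelComm::exchange_tags):
>>
>>     // Sketch: once sharing is resolved, exchange a tag on shared entities.
>>     // Assumes the std::vector<Tag> overload of ParallelComm::exchange_tags.
>>     #include <vector>
>>     #include "moab/ParallelComm.hpp"
>>
>>     using namespace moab;
>>
>>     ErrorCode sync_tag(ParallelComm& pcomm, Tag tag, Range& shared_ents)
>>     {
>>       std::vector<Tag> tags(1, tag);
>>       return pcomm.exchange_tags(tags, tags, shared_ents);
>>     }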
>>> But maybe I don't understand the question :(
>>
>> I think you got the point, and you have already helped me clarify what I actually need.
>> Thanks.
>> Lorenzo
>>
>>
>>>
>>>
>>>
>>> The reason for doing so is that some situations are easier to manage in serial, e.g. tagging entities as shared on
>>> periodic boundaries and deciding who they are going to communicate with.
>>>
>>>
>>> The possibility of resolving in parallel is great in case the mesh is repartitioned in parallel, but if the mesh
>>> does not change during the computation, doing part of the work in serial in a preliminary phase gives me more
>>> control (at least that is my perception).
>>>
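>> For the periodic case, the serial preliminary phase I have in mind is just a coordinate match that assigns the same
>> id to paired vertices; an untested O(n^2) sketch, where the periodic translation "shift" and the tolerance "tol" are
>> my assumptions:
>>
>>     // Untested O(n^2) serial sketch: give matching vertices on the two
>>     // periodic sides the same id so that sharing can later be resolved
>>     // from the tag. The translation "shift" and "tol" are assumptions.
>>     #include <cmath>
>>     #include "moab/Core.hpp"
>>
>>     using namespace moab;
>>
>>     ErrorCode match_periodic(Interface& mb, Tag pid_tag, const Range& side_a,
>>                              const Range& side_b, const double shift[3], double tol)
>>     {
>>       int next_id = 0;
>>       for (Range::const_iterator a = side_a.begin(); a != side_a.end(); ++a) {
>>         EntityHandle va = *a;
>>         double xa[3];
>>         if (MB_SUCCESS != mb.get_coords(&va, 1, xa)) continue;
>>         for (Range::const_iterator b = side_b.begin(); b != side_b.end(); ++b) {
>>           EntityHandle vb = *b;
>>           double xb[3];
>>           if (MB_SUCCESS != mb.get_coords(&vb, 1, xb)) continue;
>>           if (std::fabs(xa[0] + shift[0] - xb[0]) < tol &&
>>               std::fabs(xa[1] + shift[1] - xb[1]) < tol &&
>>               std::fabs(xa[2] + shift[2] - xb[2]) < tol) {
>>             int id = next_id++; // same id on both paired vertices
>>             mb.tag_set_data(pid_tag, &va, 1, &id);
>>             mb.tag_set_data(pid_tag, &vb, 1, &id);
>>             break;
>>           }
>>         }
>>       }
>>       return MB_SUCCESS;
>>     }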
>>> So in general, we partition in serial (although we will do some repartitioning in parallel soon; we are now using
>>> Zoltan for repartitioning when we read some climate files).
>>>
>>>
>>> Thanks again.
>>> Bests.
>>> Lorenzo
>>
>
> --
> ================================================================
> "You will keep in perfect peace him whose mind is
> steadfast, because he trusts in you." Isaiah 26:3
>
> Tim Tautges Argonne National Laboratory
> (tautges at mcs.anl.gov) (telecommuting from UW-Madison)
> phone (gvoice): (608) 354-1459 1500 Engineering Dr.
> fax: (608) 263-4499 Madison, WI 53706
>