[MOAB-dev] loading a VTK mesh in parallel (.pvtu)

Iulian Grindeanu iulian at mcs.anl.gov
Wed Oct 30 10:01:22 CDT 2013


Dear Lorenzo, 

----- Original Message -----




<blockquote>


You cannot call resolve_shared_entities unless you have a tag, such as a global id tag or a file id tag, that uniquely identifies every node. 
You need to use ParallelMerge instead; 
there, assuming that the meshes match between processes within some tolerance, you just need to run merge. 

Basically, a skin is computed first on each processor, and the skin is then "resolved" in parallel using a two-way communication, similar to resolve shared entities. 
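The skin computation mentioned above can be illustrated with a small self-contained sketch (this is not MOAB's actual implementation, which is in its Skinner class): on a linear tetrahedral mesh, the skin consists of exactly those faces referenced by a single element.

```python
from collections import Counter

def skin_faces(elements):
    """Return the boundary (skin) faces of a local tetrahedral mesh.

    elements: list of tetrahedra, each a 4-tuple of vertex ids.
    A face is on the skin iff exactly one element references it.
    """
    # The four triangular faces of a tetrahedron (v0, v1, v2, v3),
    # given as index triples into the element's connectivity.
    face_ids = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
    counts = Counter()
    for elem in elements:
        for i, j, k in face_ids:
            # Sort the vertex ids so the same face seen from two
            # different elements compares equal.
            counts[tuple(sorted((elem[i], elem[j], elem[k])))] += 1
    return [face for face, n in counts.items() if n == 1]

# Two tets sharing face (1, 2, 3): that face is interior, all six
# remaining faces are on the skin.
elems = [(0, 1, 2, 3), (1, 2, 3, 4)]
print(sorted(skin_faces(elems)))
```

In parallel, each process computes this only for its local part, so faces on inter-processor boundaries also appear in the skin; resolving those is exactly what the next step handles.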

(Each vertex on the skin is first sent to a "working processor", chosen by a pre-determined pattern; that working processor is guaranteed to receive all instances of the vertex from all processes. It then decides which vertices need to be "merged" because they are identical within some geometric tolerance, and that information is communicated back to all the processes that sent the vertex.) 
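A minimal single-process sketch of that pattern follows. The real implementation is MPI-based inside MOAB's parallel code; the coordinate-snapping hash used here as the "pre-determined pattern" is invented for illustration only (it can mis-bucket points that straddle a snapping boundary, a corner case a production scheme has to handle).

```python
import math

TOL = 1e-6  # geometric merge tolerance

def working_proc(coords, nprocs, tol=TOL):
    """Pre-determined pattern: map coordinates to a working processor.

    Copies of the same physical point (within tol) must land on the
    same processor, so we hash a snapped (quantized) position.
    """
    snapped = tuple(round(c / (4 * tol)) for c in coords)
    return hash(snapped) % nprocs

# Skin vertices held by each process: procs 0 and 1 both hold a copy
# of the physically-same vertex, with tiny coordinate noise.
skin = [
    (0, 7, (1.0, 0.0, 0.0)),         # (owner proc, local handle, xyz)
    (1, 3, (1.0 + 1e-8, 0.0, 0.0)),  # same point, within tolerance
    (1, 4, (2.0, 0.0, 0.0)),         # a different point
]

# Step 1: every proc "sends" its skin vertices to the working proc.
nprocs = 2
inbox = {p: [] for p in range(nprocs)}
for proc, handle, xyz in skin:
    inbox[working_proc(xyz, nprocs)].append((proc, handle, xyz))

# Step 2: each working proc groups the vertices it received that are
# identical within tolerance; these groups are the merge decisions
# that get communicated back to the senders.
merged = []
for received in inbox.values():
    while received:
        proc, handle, xyz = received.pop()
        group = [(proc, handle)]
        rest = []
        for other in received:
            if math.dist(xyz, other[2]) <= TOL:
                group.append(other[:2])
            else:
                rest.append(other)
        received = rest
        if len(group) > 1:
            merged.append(sorted(group))

print(merged)  # [[(0, 7), (1, 3)]]: the two copies of the shared vertex
```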





Dear Iulian, 
thanks for the reply. 
ParallelMerge might be useful in many situations and I was not aware of it, so thanks for the hint. 
However, I actually have global node ids available, and therefore I'm able to resolve shared entities and exchange ghost entities in parallel. This is working quite well. 
What I want to achieve is to do the work in serial (e.g. in the partitioning phase) instead of doing it in parallel, then store the information in the mesh files, and finally load the files in parallel, reading all the data I need. 
I'd like to know which information I need to store in the files in order to be able to exchange tags across processes once the mesh is loaded in a distributed way. 
For example, I guess I need to store information about which processors share each entity, and the parallel status of each entity. 
</blockquote>
It will be tough to do it this way. The tags you want to save all start with a double underscore (__), and they are not saved in an hdf5 file. My understanding is that you want to save them in your format (vtu?), with each part in a different file. 
You will need to restore tags of type "MBEntityHandle" somehow. For example, for a node that is shared between 2 processors, each processor knows the handle on the other processor, in the form of a tag. 

It will be hard to restore the handle on the other processor from the information you save. You can search by global id, of course, but then why do it this way if you can already do it by calling resolve shared ents? Do you want to rewrite all that logic, and replicate it for shared entities of higher dimension (edges, faces)? 

Or maybe you can use a local index in each file; the tags you need to save are the 5 parallel tags (the parallel status, shared proc(s), and shared handle(s) tags). 

Exchange tags and reduce tags need to know the handles of entities on the other processors; otherwise you cannot communicate. 
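If you did go the save/restore route, the only key that stays valid across files is the global id, so each process would have to rebuild the remote-handle map after loading. A sketch of that idea follows; the per-file data layout here (local handle, global id, sharing procs) is invented for illustration, and in a real run the lookup tables would be exchanged over MPI rather than indexed directly.

```python
# Each "part file" stores, per shared node: (local handle, global id,
# list of sharing procs).  What procs 0 and 1 read back from their
# files (invented data):
parts = {
    0: [(7, 101, [1]), (8, 102, [1])],  # (local handle, global id, shared with)
    1: [(3, 101, [0]), (5, 102, [0])],
}

# Step 1: each proc builds its own global-id -> local-handle table.
gid_to_handle = {
    proc: {gid: h for h, gid, _ in entries}
    for proc, entries in parts.items()
}

# Step 2: for every shared node, look up its handle on each sharing
# proc.  (In MPI these tables would be communicated; here we simply
# index into them.)
remote_handles = {}  # (proc, local handle) -> {other proc: remote handle}
for proc, entries in parts.items():
    for handle, gid, sharers in entries:
        remote_handles[(proc, handle)] = {
            other: gid_to_handle[other][gid] for other in sharers
        }

# A tag exchange now knows where to deliver: proc 0's handle 7 maps
# to handle 3 on proc 1, because both carry global id 101.
print(remote_handles[(0, 7)])  # {1: 3}
```

This is essentially a re-implementation of what resolve shared ents already does from global ids, which is the point of the paragraph above: the saved files buy you little over just resolving at load time.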

But maybe I don't understand the question :( 



<blockquote>





The reason for doing so is that in serial some situations are easier to manage, e.g. tag entities as shared on periodic boundaries and decide who they are going to communicate with. 
</blockquote>
Is the mesh structured? Do you know about ScdInterface? Maybe your mesh is not structured. 

<blockquote>



The possibility to resolve in parallel is great in case the mesh is repartitioned in parallel, but if the mesh does not change during the computations, doing part of the work in serial in a preliminary phase gives me more control (at least that is my perception). 
</blockquote>
So, in general, we partition in serial (although we will do some repartitioning in parallel soon; we are now using Zoltan for repartitioning when we read some climate files). 

<blockquote>





Thanks again. 
Bests. 
Lorenzo 
</blockquote>


