> I see, thanks for sharing this information.
> I understand that, in your opinion, what I had planned to do is not useful, at least not for efficiency reasons, since the scaling in parallel is quite good.
> Makes sense. I don't want to reinvent the wheel.
>
> That leaves me with the problem of dealing with periodic boundaries. How can I force entities on the domain boundaries to be shared across (possibly) different processors?
> Is there a way to tag the boundary vertices with some periodic_global_id tag and then resolve shared entities based on this tag?

You do not have to do anything extra. As long as the global ID is the same, the periodic vertices will be resolved correctly.

> Thanks for the help.
> Lorenzo
>
> On 30 Oct 2013, at 17:07, Tim Tautges <tautges@mcs.anl.gov> wrote:
>
> > The information compiled during resolve_shared_ents can't really be stored in files, because this information includes the handles by which a given entity is represented on other procs, and that depends on load order and (for ghost entities) on message arrival times. The resolution of shared vertices/non-vertices is done using global ids for the vertices, but not global ids for the entities (since those entities, like interior edges/faces, may not be explicitly represented in the mesh file). However, from our observations, resolve_shared_ents scales rather well, at least up to 16k procs and 32m elements (and probably up to 500k procs and 1b elements, based on some more recent timings). So I don't really think you'll have a timing problem with this. The trouble is if you don't have a global id for the vertices. In that case, you'll have to use ParallelMergeMesh, as Iulian said. But even that scales pretty well (though we haven't measured performance on large numbers of procs, just out to maybe a few thousand).
> >
> > - tim
> >
> > On 10/30/2013 10:54 AM, Lorenzo Alessio Botti wrote:
> >>> It will be tough to do it this way. The tags you want to save all start with a double underscore (__), and they are not saved in an hdf5 file. My understanding is that you want to save them in your format (vtu?), each part in a different file.
> >>
> >> Yes, exactly what I'd like to do.
> >>
> >>> You will need to restore the "MBEntityHandle" type tags somehow. For example, for a node that is shared between 2 processors, each processor knows the handle on the other processor, in the form of a tag.
> >>
> >> So I need to save, for each local shared entity, the entity handles of all the non-local shared entities.
> >> This makes sense.
> >>
> >>> It will be hard to restore the handle on the other processor from the information you save; you can search for the global id, of course, but then why do it this way, if you can already do it by calling resolve_shared_ents? Do you want to rewrite all that logic? And replicate it for higher dimensions (shared edges, faces)?
> >>
> >> I see, the problem is that I cannot know the entity handle before reading, so I need to store the global id and then obtain the entity handle from it.
> >> And I also need to store the information regarding the shared processors in order to know where I have to search to match the global ids.
> >> Is there a way to ask other processes for all the entities with a specified tag and value? Something like get_entities_by_type_and_tag() that works in parallel?
> >> I guess that this is the logic you were referring to.
> >>
> >>> Or maybe you can use a local index in each file; the tags you need to save are the 5 parallel tags.
> >>
> >> Do you mean the bits indicating the ParallelStatus?
> >>
> >>> Is the mesh structured? Do you know about ScdInterface? Maybe your mesh is not structured.
> >>
> >> Yes, my meshes are unstructured.
> >>
> >>> Exchange tags and reduce tags will need to know the handles of entities on the other processors, otherwise you cannot communicate.
> >>>
> >>> But maybe I don't understand the question :(
> >>
> >> I think you got the point, and you already helped me clarify what I actually need.
> >> Thanks.
> >> Lorenzo
> >>
> >>> The reason for doing so is that in serial some situations are easier to manage, e.g. tagging entities as shared on periodic boundaries and deciding who they are going to communicate with.
> >>>
> >>> The possibility to resolve in parallel is great in case the mesh is repartitioned in parallel, but if the mesh does not change during the computations, doing part of the work in serial in a preliminary phase gives me more control (at least this is my perception).
> >>>
> >>> So in general, we partition in serial (although we will do some repartitioning in parallel soon; we are now using Zoltan for repartitioning when we read some climate files).
> >>>
> >>> Thanks again.
> >>> Bests.
> >>> Lorenzo
> >>
> >
> > --
> > ================================================================
> > "You will keep in perfect peace him whose mind is
> > steadfast, because he trusts in you."  Isaiah 26:3
> >
> > Tim Tautges            Argonne National Laboratory
> > (tautges@mcs.anl.gov)  (telecommuting from UW-Madison)
> > phone (gvoice): (608) 354-1459      1500 Engineering Dr.
> > fax: (608) 263-4499                 Madison, WI 53706
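
For reference, here is a minimal sketch of what "nothing extra" means in practice. It assumes each processor has already written and reloaded its own part file, and that vertices to be identified across processor boundaries (including the periodic copies) carry the same value of the standard GLOBAL_ID tag. The file naming scheme, the resolve/shared dimensions, and the merge tolerance are placeholders, not something prescribed in this thread.

#include "moab/Core.hpp"
#include "moab/ParallelComm.hpp"
#include "moab/ParallelMergeMesh.hpp"
#include <mpi.h>
#include <sstream>

int main(int argc, char** argv)
{
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  moab::Core mb;                                  // the MOAB instance
  moab::ParallelComm pcomm(&mb, MPI_COMM_WORLD);  // parallel communicator attached to it

  // Each rank reads its own part file (hypothetical naming scheme).
  std::ostringstream fname;
  fname << "part_" << rank << ".h5m";
  moab::ErrorCode rval = mb.load_file(fname.str().c_str());
  if (moab::MB_SUCCESS != rval) return 1;

  // Match entities across processors: vertices carrying the same GLOBAL_ID value,
  // including the periodic copies, are resolved as shared, and the internal
  // "__"-prefixed parallel tags (shared procs/handles, pstatus) are filled in.
  rval = pcomm.resolve_shared_ents(0 /* whole mesh */, 3 /* resolve dim */, 2 /* shared dim */);
  if (moab::MB_SUCCESS != rval) return 1;

  // Geometric fallback when no consistent global IDs are available: merge
  // coincident vertices within a tolerance (the tolerance here is a placeholder).
  // moab::ParallelMergeMesh pmm(&pcomm, 1.0e-8);
  // rval = pmm.merge();

  MPI_Finalize();
  return 0;
}

Note that the commented-out ParallelMergeMesh call is the fallback Tim mentions above for meshes without vertex global ids; since it matches vertices geometrically, it will not pair periodic copies that are not coincident in space, which is why consistent global IDs are the way to handle periodic boundaries.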