Ok, got it. Tim and Iulian, thanks a lot for the help.
I'll let you know as soon as I have a version of the vtu reader/writer that is suitable for inclusion in MOAB
(hopefully I'll be able to find a suitable xml reader, since I discovered that the one I'm using right now is not fully open).

Thanks again.
Lorenzo


On 30 Oct 2013, at 17:47, Tim Tautges <tautges@mcs.anl.gov> wrote:

> On 10/30/2013 11:36 AM, Lorenzo Alessio Botti wrote:
>> I see, thanks for sharing this information.
>> I understand that, in your opinion, what I had planned to do is not useful, at least not for efficiency reasons, since the scaling in parallel is quite good.
>> Makes sense. I don't want to reinvent the wheel.
>>
>> That leaves me with the problem of dealing with periodic boundaries. How can I force entities on the domain boundaries to be shared across (possibly) different processors?
>> Is there a way to tag the boundary vertices with some periodic_global_id tag and then resolve shared entities based on this tag?
>
> If you give them the same global id, they'll be tied together in parallel as if they were the same vertex. If they're both on the same proc, though, they'll remain distinct. Also, if you output the mesh to an .h5m file after resolving, they'll be represented as the same vertex in the file, which might not be what you want; just giving you a heads-up about the behavior. We've run into some tricky behavior that way with structured meshes on the globe (they're horizontally periodic, and sometimes a given proc has a whole horizontal slice and sometimes not, so we have separate notions of "locally" and "globally" periodic).
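>
> In code, the recipe is roughly the following (a sketch from memory, not compiled; the function and variable names are made up, and the matched periodic vertex pairs have to come from your own application logic):
>
>   #include "moab/Core.hpp"
>   #include "moab/ParallelComm.hpp"
>   #include "MBTagConventions.hpp" // GLOBAL_ID_TAG_NAME
>   #include <vector>
>
>   using namespace moab;
>
>   // verts[i] gets ids[i]; the two images of a periodic vertex are given
>   // the same id, so the resolve ties them together across procs.
>   ErrorCode resolve_with_periodic_ids(Interface* mb, ParallelComm* pcomm,
>                                       const std::vector<EntityHandle>& verts,
>                                       const std::vector<int>& ids)
>   {
>     Tag gid;
>     ErrorCode rval = mb->tag_get_handle(GLOBAL_ID_TAG_NAME, 1,
>                                         MB_TYPE_INTEGER, gid);
>     if (MB_SUCCESS != rval) return rval;
>
>     // assign the ids (each pair of periodic images shares one id)
>     rval = mb->tag_set_data(gid, &verts[0], (int)verts.size(), &ids[0]);
>     if (MB_SUCCESS != rval) return rval;
>
>     // one resolve over the whole mesh (set 0); 3 = element dimension,
>     // -1 = resolve sharing down to vertices
>     return pcomm->resolve_shared_ents(0, 3, -1);
>   }
>
> After that, both images carry the usual parallel status/sharing tags, as if they were one vertex.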
>
> - tim
>
>> Thanks for the help.
>> Lorenzo
>>
>> On 30 Oct 2013, at 17:07, Tim Tautges <tautges@mcs.anl.gov> wrote:
>>
>>> The information compiled during resolve_shared_ents can't really be stored in files, because this information includes the handles a given entity is represented by on other procs, and that depends on load order and (for ghost entities) message arrival times. The resolution of shared vertices/non-vertices is done using global ids for the vertices, but not global ids for the entities (since those entities, like interior edges/faces, may not be explicitly represented in the mesh file).
>>>
>>> However, from our observations, resolve_shared_ents scales rather well, at least up to 16k procs and 32m elements (and probably up to 500k procs and 1b elems, based on some more recent timings). So, I don't really think you'll have a timing problem with this. The trouble is if you don't have a global id for vertices. In that case, you'll have to use ParallelMergeMesh, as Iulian said. But even that scales pretty well (though we haven't measured perf on large #'s of procs, just out to maybe a few k procs).
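>>>
>>> A bare-bones sketch of the ParallelMergeMesh route (again from memory, not compiled; the file name and merge tolerance are placeholders):
>>>
>>>   #include "moab/Core.hpp"
>>>   #include "moab/ParallelComm.hpp"
>>>   #include "moab/ParallelMergeMesh.hpp"
>>>   #include <mpi.h>
>>>
>>>   using namespace moab;
>>>
>>>   int main(int argc, char** argv)
>>>   {
>>>     MPI_Init(&argc, &argv);
>>>     Core mb;
>>>     ParallelComm pcomm(&mb, MPI_COMM_WORLD);
>>>
>>>     // each proc loads its own piece of the mesh (file name illustrative)
>>>     ErrorCode rval = mb.load_file("part.vtk");
>>>     if (MB_SUCCESS != rval) return 1;
>>>
>>>     // match coincident vertices across procs within the tolerance and
>>>     // build the shared-entity data; no global ids needed
>>>     ParallelMergeMesh pmerge(&pcomm, 1.0e-8);
>>>     rval = pmerge.merge();
>>>     if (MB_SUCCESS != rval) return 1;
>>>
>>>     MPI_Finalize();
>>>     return 0;
>>>   }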
>>>
>>> - tim
>>>
>>> On 10/30/2013 10:54 AM, Lorenzo Alessio Botti wrote:
>>>>
>>>>> it will be tough to do it this way. The tags you want to save all start with a double underscore (__), and they are not saved in an hdf5 file. My understanding is that you want to save them in your format (vtu?), each part in a different file.
>>>>
>>>> Yes, exactly what I'd like to do.
>>>>
>>>>> You will need to restore the "MBEntityHandle"-type tags somehow. For example, for a node that is shared between 2 processors, each processor knows the handle on the other processor, in the form of a tag.
>>>>
>>>> So I need to save, for each local shared entity, the entity handles of all the non-local shared entities. This makes sense.
>>>>
>>>>> It will be hard to restore the handle on the other processor from the information you save; you can search for the global id, of course, but then why do it this way, if you can already do it by calling resolve_shared_ents? Do you want to rewrite all that logic? And replicate it for higher dimensions, for shared edges and faces?
>>>>
>>>> I see, the problem is that I cannot know the entity handle before reading, so I need to store the global id and then obtain the entity handle from it. And I also need to store the information about the sharing processors, in order to know where I have to search to match the global ids.
>>>> Is there a way to ask other processes for all the entities with a specified tag and value? Something like get_entities_by_type_and_tag() that works in parallel? I guess that this is the logic you were referring to.
>>>>
>>>>> Or maybe you can use a local index in each file; the tags you need to save are the 5 parallel tags.
>>>>
>>>> Do you mean the bits indicating the ParallelStatus?
>>>>
>>>>> Is the mesh structured? Do you know about ScdInterface? Maybe your mesh is not structured.
>>>>
>>>> Yes, my meshes are unstructured.
>>>>
>>>>> Exchange tags and reduce tags will need to know the handles of entities on the other processors; otherwise you cannot communicate.
>>>>>
>>>>> But maybe I don't understand the question :(
>>>>
>>>> I think you got the point, and you have already helped me clarify what I actually need.
>>>> Thanks.
>>>> Lorenzo
>>>>
>>>>>> The reason for doing so is that in serial some situations are easier to manage, e.g. tagging entities as shared on periodic boundaries and deciding whom they are going to communicate with.
>>>>>>
>>>>>> The possibility to resolve in parallel is great in case the mesh is repartitioned in parallel, but if the mesh does not change during the computation, doing part of the work in serial in a preliminary phase gives me more control (at least this is my perception).
>>>>>
>>>>> So in general we partition in serial (although we will do some repartitioning in parallel soon; we are now using Zoltan for repartitioning when we read some climate files).
>>>>>
>>>>>> Thanks again.
>>>>>> Best.
>>>>>> Lorenzo
>
> --
> ================================================================
> "You will keep in perfect peace him whose mind is
>  steadfast, because he trusts in you."  Isaiah 26:3
>
>        Tim Tautges            Argonne National Laboratory
>    (tautges@mcs.anl.gov)      (telecommuting from UW-Madison)
>    phone (gvoice): (608) 354-1459    1500 Engineering Dr.
>           fax: (608) 263-4499        Madison, WI 53706