Mesh set question

Mark Shephard shephard at scorec.rpi.edu
Thu Oct 29 12:48:39 CDT 2009


I must admit I have gotten pretty lost in this chain of discussion. If 
this converges, I expect that there will need to be something that 
states what the conclusion is.

Tim Tautges wrote:
> For those not following the thread closely, I think we're converging, 
> right Carl?
> 
> Carl Ollivier-Gooch wrote:
>> Tim Tautges wrote:
>>> One of the implications of what we're developing in itaps is that 
>>> multiple services may be applied to a given data base, and those 
>>> services may leave data and metadata around on the mesh.  We all 
>>> agree we should avoid the combinatorial problem of all services 
>>> needing to code to all implementations' native interfaces (thus the 
>>> common interface).  Getting multiple services working on a database, 
>>> though, also means that we can't require each service to understand 
>>> the metadata potentially left there by other services.  What we can 
>>> do is provide a way for the application, which often knows about all 
>>> the services it's calling, to specify the proper behavior when 
>>> handling this metadata, in the language or data model used by the 
>>> interfaces.
>>
>> This I definitely agree with, to the extent that it's possible to 
>> -have- proper behavior for the metadata.  Some of that behavior is 
>> still going to be expensive, though (example: entity handle tags are 
>> going to be difficult or impossible for a service to update properly).
>>
> 
> That'll always be the case when applications can embed information in 
> the data model.  I think this is better than the alternatives (changing 
> the interface for each new application, or not allowing apps to embed 
> data using the data model).
> 
>>> For example... I've written a copy/move/merge mesh capability in 
>>> MeshKit, to help mesh these large fuel pin assemblies and lattices of 
>>> assemblies.  Performing this operation on actual mesh entities is 
>>> trivial.  Properly handling metadata that comes with typical meshes 
>>> is not.  However, these behaviors can be abstracted and specified in 
>>> terms of our data model.  In the copy/move/merge case, I use the 
>>> notion of "copy" and "expand" set types: a "copy" set whose entities 
>>> get copied is itself copied, and the duplicate is given the copies as 
>>> contents.  For "expand" sets, the new entities are added to the 
>>> original set.  
>>> Criteria for determining whether a set is copy or expand can be 
>>> specified by passing the sets themselves or tag types (and optionally 
>>> values) which identify the sets.  Using this scheme, it's a very 
>>> simple matter to have my service do "the right thing" with material 
>>> sets (expand), neumann bcs (expand), geometric volumes (copy), parts 
>>> (copy), reactor assemblies (copy), and lots of other things.  This is 
>>> all without knowing the purpose of all these sets.
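The copy/expand distinction Tim describes can be sketched in a few lines. The Mesh and EntitySet classes below are toy stand-ins, not the actual iMesh or MeshKit API; only the set bookkeeping is the point:

```python
class EntitySet:
    def __init__(self, kind):
        self.kind = kind          # "copy" or "expand"
        self.members = set()

class Mesh:
    def __init__(self):
        self.sets = []
        self._next = 0

    def create_entity(self):
        # Entities are just integer handles in this toy model.
        self._next += 1
        return self._next

def copy_move_merge(mesh, ents):
    """Copy each entity in 'ents'; update sets per their copy/expand kind."""
    copies = {e: mesh.create_entity() for e in ents}
    new_sets = []
    for s in mesh.sets:
        copied_members = {copies[e] for e in s.members if e in copies}
        if not copied_members:
            continue
        if s.kind == "copy":
            # The set itself is duplicated; the duplicate holds the copies.
            dup = EntitySet("copy")
            dup.members = copied_members
            new_sets.append(dup)
        elif s.kind == "expand":
            # The existing set absorbs the copies alongside the originals.
            s.members |= copied_members
    mesh.sets.extend(new_sets)
    return copies
```

Note that the service never needs to know whether a set is a material set, a boundary condition, or a geometric volume; the application's choice of "copy" vs. "expand" is the entire contract.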
>>
>> So this is an example of using a tag convention as a way to 
>> communicate between an app and a service, with (as I understand it) 
>> all the work at the iMesh level being standard.  If my understanding 
>> is correct, this example would work with any compliant implementation 
>> as a back end, so long as the app and service know about the same tag 
>> convention.
>>
> 
> It's slightly easier than that: the convention is how the application 
> recognizes things like material sets and neumann bcs; communication 
> between the app and the copy/move service is in terms of data model 
> characteristics, and does not rely on conventions.  This example should 
> work with any compliant implementation; that hasn't been demonstrated, 
> but would be useful to demonstrate.
> 
>>> In summary: I don't believe it's possible to define the "right" 
>>> behavior for sets under mesh modification at the mesh interface 
>>> level, because the semantics of what the sets represent and how they 
>>> behave can vary infinitely with the application.
>>
>> That's true in general, but doesn't preclude the possibility of 
>> certain common behaviors that are (at least) worth creating documented 
>> tag conventions for, or more (since an implementation will have to be 
>> tricksy to take advantage of a tag convention).
>>
> 
> Yes, modulo what "or more" means (e.g. changing the interface is right 
> out).
> 
>>>> The semantics that make the most sense to me, at least at first 
>>>> glance, for modification are classification / reverse 
>>>> classification, which I (as a service writer) would run through iRel 
>>>> rather than looking at sets anyway.  And in any case, working on a 
>>>> classification basis will necessarily trash other sets.
>>>
>>> There are examples where you'll want to do this without iRel too.  
>>> For example, material types and boundary conditions.   This is the 
>>> point where Mark will say that he requires geometry to track these 
>>> too, but that's not the only way to do it.  In fact, you can abstract 
>>> this 
>>> such that the application calling the adapt service can specify rules 
>>> for modifying sets, without the adapt service needing to know the set 
>>> semantics, while also preserving the ability to relate to the 
>>> geometry subsequently.  That may not always be the right way to do 
>>> it, but I think it can also be said that there are non-trivial cases 
>>> where that is the right way to do things.
>>
>> But doesn't this require the ability to go from entity to set, to 
>> check whether the entities you're about to replace with a different 
>> collection of entities are in the same set (material type for regions, 
>> BC for bdry faces, etc), as well as some sort of tag convention to 
>> specify that these sets matter and those don't?  The latter is pretty 
>> easy, I suspect, but the former is not (interface functionality aside, 
>> it costs either time or space to implement...).  I agree that, if the 
>> modification app knows "replace only co-setted entities with entities 
>> that you also put in that set" then you don't lose mesh set->geom 
>> entity classification, or anything else you cared about with that set, 
>> but the current interface is awkward for this, at best.
> 
> In some cases, this does require going from entity to set with 
> reasonable speed.  Whether this is easy or not depends on the 
> implementation.  If the implementation is too slow doing it with the 
> current interface, another option is to implement it as a service on top 
> of the interface, with the application caching entity->set 
> relationships.  If tag retrieval has "reasonable speed", this will be 
> no worse.  In either case, of course, you will pay in either memory or 
> speed. 
> You'll pay similarly with any type of observer pattern.
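The caching layer Tim suggests, paying a one-time traversal plus storage so later entity->set queries are fast, might look roughly like this; the class and method names are invented for illustration, not part of iMesh:

```python
from collections import defaultdict

class ReverseSetCache:
    """Service-level cache of entity->set membership, built on top of the
    mesh interface rather than inside any implementation."""

    def __init__(self, sets_of_interest):
        # One forward traversal of the tracked sets; cost is paid in
        # memory thereafter.  sets_of_interest: {set_id: iterable of ents}.
        self._ent2sets = defaultdict(set)
        for s_id, members in sets_of_interest.items():
            for e in members:
                self._ent2sets[e].add(s_id)

    def sets_containing(self, entity):
        """Return the tracked sets an entity belongs to, without scanning
        every set."""
        return self._ent2sets.get(entity, set())

    def on_create(self, entity, set_ids):
        # The modification service reports each new entity and the sets
        # the application's rules placed it in.
        for s in set_ids:
            self._ent2sets[entity].add(s)

    def on_delete(self, entity):
        self._ent2sets.pop(entity, None)
```

This works because, as noted below, entities are only created and deleted through primitive operations the application (or service) itself requests, so the cache sees every change.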
> 
> Note that a key part of our interface that allows us to do this from the 
> service level is limiting ourselves to only primitive create/delete 
> operations on entities.  That is, entities don't get deleted under the 
> interface without the application having requested it.  If the app is 
> itself a service, though, and there are other services depending on the 
> mesh, then chaos ensues.  There are ways to handle that case, too. 
> 
>>
>>> Again, I think we need to think more abstractly here.  Classification 
>>> and reverse classification are only one application of relations; 
>>> even calling something "reverse" is specific to the semantics of how 
>>> the mesh is generated and adapted.  Thinking of these things in terms 
>>> of generic sets greatly reduces the number of operations you have to 
>>> support to implement those semantics.
>>
>> Maybe.  I'd have to think about this a lot more to be sure one way or 
>> the other.  I suspect that some operations are really easy one way and 
>> hard the other, and vice versa.
>>
>>>> I stand by my statement.  The interaction of sets and tags with mesh 
>>>> modification is something that I don't recall ever discussing, 
>>>
>>> "ever" is a pretty strong statement.  I won't go trolling through old 
>>> emails, as I'm sorely tempted to do here.  But, I can say that I had 
>>> a hard enough time convincing people that sets were even needed or 
>>> even useful, so we may not have spent much time on the deeper 
>>> question about what to do with them under modification.
>>
>> I did say I didn't recall it; that's my protection if you go digging. 
>> ;-)  Certainly I missed the original set discussions.  But given that 
>> we didn't get iterators right under modification without changing them 
>> (at least?) once, it would be surprising if sets and tags are much 
>> better.
>>
> 
> I'm not saying the interface will never need changing, just that the 
> data model it's built on can support the things we're talking about, and 
> support them efficiently.  That also doesn't mean it's easy to support 
> efficiently on all types of implementations.
> 
>>>> and our
>>>> current interface provides no way for a service that does mesh 
>>>> modification to correctly and efficiently handle sets that the 
>>>> application that called it thinks are important (even if given 
>>>> enough info to be correct, a service can't be efficient).  
>>>
>>> At best, that statement is way stronger than it should be ("no way"?  
>>> c'mon...).  First, you're arguing from the standpoint of a very 
>>> specific kind of mesh modification.
>>
>> True, but for the specific case of swapping, I can't even efficiently 
>> identify (i.e., without querying all sets) which sets an entity 
>> belongs to so that I can try to make the decision about what to do 
>> with the new entities.  Hence, my statement, which I probably should 
>> have qualified to say that it applies to at least some services, 
>> though not all.
>>
>> There may be an alternative way to implement swapping that doesn't 
>> have this problem, but I can't see it offhand.  Iterating over 
>> highest-dimensional entities in a set almost works, but you have to 
>> know which of its second-adjacent highest-dimensional entities are 
>> -also- in the set, which gets you back in trouble.  Plus, you examine 
>> all the swap candidates twice in an iteration over the set.
> 
> I think the way this needs to be done is to allow applications to tell 
> you which sets they want the swapping service to update.  After that, 
> the tracking is the cost of traversing the sets once, the storage of 
> entity->set membership, and the cost of querying those relationships as 
> you swap.  If your implementation already stores that information, then 
> querying the sets an entity is in is likely already efficient, and the 
> extra tracking isn't needed.
> 
> Actually, this raises another question: should we have the equivalent of 
> the AdjacencyTable functionality, but for entities in sets?
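One way to read the update rule discussed here (new entities from a swap inherit membership in an application-chosen set only when every replaced entity was co-setted) is the following sketch; the plain dicts and sets stand in for the real interface:

```python
def apply_swap_to_sets(tracked_sets, old_ents, new_ents):
    """Update application-chosen sets after a swap operation.

    tracked_sets: {name: set of entity handles} the application asked
                  the swapping service to maintain.
    old_ents:     entities removed by the swap.
    new_ents:     entities created by the swap.
    """
    for members in tracked_sets.values():
        if all(e in members for e in old_ents):
            # All replaced entities were co-setted: the replacements
            # inherit membership, so e.g. mesh set -> geometric entity
            # classification is preserved.
            members |= set(new_ents)
        # Deleted entities always leave the set, so a re-used handle
        # can't end up in the wrong set.
        members -= set(old_ents)
```

The cost is exactly what Tim describes: one traversal to build `tracked_sets`, storage for the membership, and a per-swap membership query.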
> 
>>
>>>> The easiest fix is to document that the sets may be garbage on 
>>>> return and caveat app.  Intermediate would be to ignore sets in mesh 
>>>> modification, but to remove ents from sets if they get deleted 
>>>> (otherwise you can't re-use that memory without potentially putting 
>>>> an entity in the wrong set).
>>>>
>>>
>>> We can't specify these types of rules at the interface spec level, 
>>> because that will overconstrain some implementations and use cases.  
>>> For maximum interoperability, one could implement the service to 
>>> provide options for the application to select.  For example, an 
>>> option for whether the service deletes entities from sets before 
>>> deleting the entity itself, or not.  The first case would work better 
>>> on an implementation that was minimal and efficient in time and space 
>>> (e.g. MOAB under certain conditions).  The second would be better for 
>>> implementations that already had a rich observer capability (e.g. 
>>> GRUMMP, or MOAB under other conditions).
>>
>> From an interoperability point of view, having the -service- provide 
>> options that cause it to interact differently with the implementation 
>> is very different from having different implementations do things that 
>> only they can do.  The former I have no issues with at all; the latter 
>> is a slippery slope away from where I believe we should be.  (Yes, I 
>> know, are an O(n) implementation and an O(n^2) implementation of the 
>> same capability interoperable?  Good question.  I'm glad you asked 
>> it.  Next question?)
> 
> The slippery slope argument is a good one, and we should be careful 
> about options in general.  Carl, I'm not sure you were around that 
> early, but I did vehemently resist the notion of options early on in 
> TSTT for things like identifying tags in tag get data functions (arguing 
> in favor of tag handles instead).  Also, I think it should be understood 
> that the options are recommendations only.
> 
> The argument for options is that they give both implementations and 
> applications flexibility.  In fact, why do we have options for load and 
> save?  As another example, the notion of dense vs. sparse tags seems to 
> be gaining momentum.  But, if we put that into the interface, I can bet 
> that will put a burden on many implementations, and some implementations 
> may not support it for a long time, or ever (mesh shape interface, 
> anybody?).  Using options gives them the option.
> 
> One more note, I think we do need an additional error code, which is 
> really a warning, about unhandled options.  That's already turned up as 
> an issue in reactor simulation, where the application was passing an 
> option incorrectly and never knew it.
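The unhandled-option warning Tim asks for could work along these lines; the option names and the semicolon-separated format are made up for illustration:

```python
# Options this (hypothetical) service knows how to act on.
KNOWN_OPTIONS = {"DELETE_FROM_SETS", "TRACK_SETS"}

def parse_options(option_string):
    """Split a ';'-separated option string into (recognized, unhandled)."""
    requested = {o.strip() for o in option_string.split(";") if o.strip()}
    recognized = requested & KNOWN_OPTIONS
    unhandled = requested - KNOWN_OPTIONS
    return recognized, unhandled

# A misspelled option is surfaced as a non-fatal warning instead of
# being silently dropped, which is the failure mode described above.
recognized, unhandled = parse_options("TRACK_SETS; DELET_FROM_SETS")
if unhandled:
    print("warning: unhandled options:", sorted(unhandled))
```

An interface-level version of this would presumably return a distinct warning code rather than print, but the key point is the same: unrecognized options are reported, not ignored.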
> 
> - tim
> 
>>
>>> And c'mon, you made it that far in my message but passed up the 
>>> chance to ridicule my obscure Vonnegut reference? :)
>>
>> I'm not familiar with those particular Vonnegut pieces, and I couldn't 
>> muster any nuanced ridicule by that time anyway. ;-)
>>
>> Carl
>>
> 



More information about the tstt-interface mailing list