itaps-parallel Notes from today's meeting
tautges at mcs.anl.gov
Mon Apr 19 09:06:42 CDT 2010
- tim
On Apr 19, 2010, at 2:17 AM, Carl Ollivier-Gooch <cfog at mech.ubc.ca>
wrote:
> Tim Tautges wrote:
>
>> I think Jason's argument is important here: the easy things should
>> be easy, and more complicated things possible. I think this is
>> related to Seegyoung's point about having a single API. I think
>> what this boils down to is an assertion that we should think
>> primarily in terms of one iMesh Instance per process. For those
>> applications wanting to use multiple instances, that implies
>> communication between the instances at least conceptually, if not
>> in the implementation also. What this boils down to even further
>> is that Part and iMesh instance are synonymous. The issue of an
>> entity assigned to exactly one Part morphs to one of representation
>> vs. ownership: an entity can be represented in more than one
>> instance (e.g. faces on part boundaries, in an element-based
>> partition), but can be owned by only one of those instances.
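The representation-vs-ownership distinction above can be made concrete with a small model. This is a hypothetical sketch, not the iMesh API: the `Part` class and entity handle names are made up, and a Python dict stands in for per-instance entity storage.

```python
# Model: an entity may be *represented* in several parts (e.g. a face on a
# part boundary in an element-based partition), but is *owned* by exactly
# one of them. Each Part stands in for one iMesh instance on one process.

class Part:
    """One part == one instance on one process (illustrative model only)."""
    def __init__(self, part_id):
        self.part_id = part_id
        self.entities = {}   # entity handle -> owning part id

    def add_entity(self, handle, owner_id):
        self.entities[handle] = owner_id

# Two parts share the interface face "f7": both represent it,
# but only part 0 owns it; part 1 holds a shared/ghost copy.
p0, p1 = Part(0), Part(1)
p0.add_entity("f7", owner_id=0)   # owned copy
p1.add_entity("f7", owner_id=0)   # shared copy; owner is still part 0

owners = {pid for p in (p0, p1) for h, pid in p.entities.items() if h == "f7"}
print(owners)  # {0} -- one owner, two representations
```

The invariant the sketch enforces is the one stated in the text: any number of representations, exactly one owner.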
>
> Personally, I'm very much in favor of the part = instance
> interpretation, for reasons already outlined.
>
> In addition, I can easily see how we'll be able to implement this,
> whereas with multiple parts per instance, the easiest approach would
> be for us, effectively, to have multiple of our current instances
> inside a wrapper; yuck!
>
> At a higher level, this also makes the whole parts-as-special-sets
> argument moot, saving us from possibly re-visiting that. If this is
> the way we decide to go, we'll want to do an audit of iMeshP
> functions, because part handles will now be synonymous with iMesh
> instance handles. I suspect that a whole lot of functions either go
> away or get simpler. This is not a bad thing.
There will still need to be a way to relate parts to sets, or to tags,
or something similar, to be able to partition a mesh in serial (or in
parallel on a different number of procs). But, if you support iMesh
already, this shouldn't be difficult.
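One way to record a partition in serial, as the reply above suggests, is to tag each entity with a destination part id so a later parallel load can scatter entities to the right process. A minimal stand-alone sketch follows; the dict plays the role of a tag, and the round-robin "partitioner" and all names are illustrative, not iMesh/iMeshP calls.

```python
# Serial partitioning sketch: tag each element with a destination part id.
# A real partitioner (e.g. graph-based) would replace the round-robin rule.

def partition_round_robin(elements, num_parts):
    """Assign each element a part id (trivial round-robin 'partitioner')."""
    return {elem: i % num_parts for i, elem in enumerate(elements)}

def entities_for_part(part_tag, part_id):
    """What one process would load for its part from the stored tag."""
    return sorted(e for e, p in part_tag.items() if p == part_id)

elems = [f"hex{i}" for i in range(8)]
tag = partition_round_robin(elems, num_parts=3)
print(entities_for_part(tag, 0))  # ['hex0', 'hex3', 'hex6']
```

The same tag data could equally be stored as one set per part; either way the partition survives a serial save/load and a parallel load on a different number of procs.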
>
>> One of the reasons we initially thought in terms of multiple parts
>> per process was to handle over-partitionings, where a process could
>> be assigned multiple parts, e.g. for better load balancing, and
>> flexibility in what's stored with a mesh or how it's stored. I
>> think that concept is an artifact of how a partitioning is stored
>> with the mesh (or how the mesh is stored in files to reflect a
>> partitioning). From an application's point of view, in most cases
>> it's still going to want to access the mesh from a single instance
>> on a given process, no matter how that mesh was loaded initially.
>> That's also how the mesh looks from most applications' points of
>> view. The use case of multiple parts, with shared interface
>> entities duplicated, is mostly handled by having separate entities
>> in the database, with special relations between them implying that
>> they're the same.
>
> I can see ways around the over-partitioning problem, though. Most
> obviously, simply have multiple threads per processor, so that the
> number of processes still matches the number of parts. Or an app or
> implementation (or even the interface) could specify how to merge
> parts to get the right number at run time, as Tim hints at.
>
I don't think the former would be an acceptable solution, since
conceptually that would still require communication between the instances.
> Eventually, in the hyperparallel world that's coming, apps and
> implementations may both have to be written to be multithreaded
> shared-memory processes within a node (OpenMP?) with communication
> between nodes (MPI?). This isn't in conflict with the notion of
> exactly one part (and iMesh instance) per process; it just requires
> more careful programming of apps/services that use the interface.
iMeshP is written in terms of message passing, so it'll need to
change to handle threads anyway.
This is a good time to mention another application of multiple
partitionings, which I've encountered in two unrelated radiation
transfer apps now. The first partitioning is the normal domain
decomposition over procs in one communicator. The other is a broadcast
of that whole (subdomain) mesh over a smaller set of procs. This
second is the discretization of the energy domain over a spatial
subdomain. In the app, you reduce across energy groups to get element-
wise power, then across subdomains to output or reduce to get total
power.
This leads me to think that a part isn't synonymous with an instance,
but with a communicator (or, more precisely, a communicator is always
associated with a part, with the part possibly shared between comm's).
Thoughts?
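The two-communicator picture above can be sketched serially. In the toy model below, each (subdomain, energy-group) pair stands in for one MPI rank; the first reduction sums across energy groups within a spatial subdomain (one communicator), the second sums across subdomains (the other). This is a plain-Python stand-in for an `MPI_Comm_split` plus two reduces, and the power values are made up.

```python
# Two partitionings of the same mesh: spatial subdomains x energy groups.
# power[(subdomain, energy_group)] -> that rank's power contribution.
power = {
    (0, 0): 1.0, (0, 1): 2.0,   # subdomain 0, two energy groups
    (1, 0): 3.0, (1, 1): 4.0,   # subdomain 1, two energy groups
}

# First reduction: across energy groups, within each spatial subdomain
# (in MPI, a reduce over the "energy" communicator) -> element-wise power.
per_subdomain = {}
for (sd, _eg), p in power.items():
    per_subdomain[sd] = per_subdomain.get(sd, 0.0) + p

# Second reduction: across subdomains (the "spatial" communicator)
# -> total power for output.
total = sum(per_subdomain.values())

print(per_subdomain)  # {0: 3.0, 1: 7.0}
print(total)          # 10.0
```

The point of the sketch is structural: each rank sits in two groups at once, which is why a part looks more like "something a communicator is associated with" than like a single instance.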
Yikes, later parts of Carl's message disappeared, darn iPhone. My only
remaining point is that accessing entities on a set level (which Carl
implied was a complication) is no different than accessing them for
the root set; maybe filtering results by part is, though.
- tim
>
>> One of the complications that arises from having one instance per
>> process (and one part per instance) is how do you repartition, or
>> partition in serial. I think that concept is handled by the notion
>> of initializing a given parallel mesh either a) from a collection
>> of collections, or a partition of parts, stored with and loaded
>> with the mesh, or b) communication from one instance to another,
>> using parallel migration of entities. Of course, a) is easily
>> handled using sets, if you have them; if you don't, it's just as
>> easily handled using multiple files. It also naturally handles
>> establishing a parallel mesh based on one of several
>> "partitionings" (collection of collections) stored with a mesh,
>> e.g. using the geometric volumes, material types, or true
>> partitions (generated by a partitioner).
>
> This is a problem we always had, though, didn't we? I mean, with
> multiple instances per process, repartitioning to use a new instance
> would require copying the whole mesh, including adjacency, sets,
> tags, etc. Do-able, but non-trivial, especially (say) tags with
> entity-handle values.
>
> I don't have any conceptual issues in principle with the notion of
> multiple partitionings of the same entities. How an implementation
> (or app) chooses to store the inactive partition is obviously going
> to be implementation-dependent (sets vs tags vs classification
> vs ???). This potentially makes it tricky to define an API for
> switching between them.
>
> Also, what about the different, more subtle case where we -need- two
> partitions active at once? One use case is a contact problem, where
> the 3D mesh and 2D surface are partitioned separately (I'm assuming
> this is done, to get better load balance?). If you're also doing
> mesh adaptation as you go (likely), now those 2D faces on the
> interface have owners in the 3D partition (okay, some implementations
> don't represent them explicitly, but it's safe to say that a face
> interior to a part is going to be owned by that part...). I see two
> alternatives.
>
> 1. The easiest way to handle this in the current interfaces, as far
> as parallel stuff goes, is to have two separate meshes (a 3D mesh
> and a 2D manifold mesh), with tags or iRel (or something similar)
> keeping the respective entities associated with each other. This
> implies two iMesh instances per process, one each for the 3D and 2D
> manifold meshes, which isn't in conflict with other stuff in this
> thread.
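Alternative 1 above (two meshes related entity-to-entity) can be sketched with a plain relation table standing in for iRel. Everything below is hypothetical: the entity handles, the part assignments, and the dict-based "relation" are illustrative, not the iRel API.

```python
# Alternative 1: a 3D volume mesh and a 2D contact-surface mesh, each in
# its own instance with its own independent partition, related by an
# iRel-style association (here just a dict).

volume_parts = {"tet1": 0, "tet2": 1}        # 3D element -> owning 3D part
surface_parts = {"tri_a": 1, "tri_b": 0}     # 2D face -> owning 2D part

# Relation: each contact triangle lies on a face of some volume element.
relation = {"tri_a": "tet1", "tri_b": "tet2"}

def owners_of_pair(tri):
    """Owners of a surface face and its related volume element can differ,
    because the two meshes are partitioned separately."""
    return surface_parts[tri], volume_parts[relation[tri]]

print(owners_of_pair("tri_a"))  # (1, 0): different owners in each partition
```

This makes the cost of alternative 1 visible: any operation that touches a contact pair may need communication between two processes, one per partition.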
>
> 2. You could also have two partitions active within a single
> instance. I'm not quite sure how this would look, even
> conceptually. You'd have the full 3D mesh for the (3D) part, plus a
> bunch of extra faces (and verts, at least) for the bits of the 2D
> manifold mesh that aren't already resident. The problem with this is
> that you've got to strictly segregate things so that iteration over
> (say) faces gets you what you wanted. Yes, I know, sets will do
> this, but it seems cumbersome...
>
> I can't say I'm crazy about either of these alternatives, nor am I
> invested in either one. Hopefully, someone can come up with
> something better. For what it's worth, I think this problem is
> reasonably (completely?) independent of whether in "normal" single
> mesh contexts we require exactly one part and iMesh instance per
> process.
>
> Having said that, I'll almost certainly be missing tomorrow's
> telecon. I will, of course, be happy to kibitz afterwards by
> email. :-)
>
> Carl
>
> --
> ------------------------------------------------------------------------
> Dr. Carl Ollivier-Gooch, P.Eng.          Voice: +1-604-822-1854
> Associate Professor                      Fax:   +1-604-822-2403
> Department of Mechanical Engineering     email: cfog at mech.ubc.ca
> University of British Columbia           http://www.mech.ubc.ca/~cfog
> Vancouver, BC V6T 1Z4                    http://tetra.mech.ubc.ca/ANSLab/
> ------------------------------------------------------------------------
>