In the descriptions below, I tried to group functionality into levels as Carl, Onkar, and Tim did. In a few cases, I have probably put functionality into the wrong level, but I don't think those mistakes are worth arguing about. I listed the inputs and outputs for each piece of functionality; agreeing on those is more important than which box we put the functionality in.

In many of the write-ups, full syntax was provided. I haven't thrown that away; in fact, we'll fill it in next. But so far, we haven't agreed on the functionality and a rough list of the functions needed; we need to do that first. For **each item** of functionality in this document, there will be a function (or set of functions) to provide the functionality. The implementation can decide whether these functions should be simple wrappers over existing data-model functionality or more intrusive functions in the implementation.

------------------------------------------------
Background and Terminology:

Each MPI process has one or more meshes; these meshes are accessed by one or more "mesh instances." Similarly, each MPI process may have one or more partitions; these partitions are accessed by one or more "partition instances." Following the iMesh convention, a partition instance is an "iPartition."

Partitions have MPI Communicators associated with them. "Global" operations are "global" with respect to a partition's MPI Communicator.

A partition subdivides one or more meshes. Partitions know which meshes they subdivide. A mesh can be subdivided by one or more partitions. Meshes know which partitions they are in.

Partitions contain parts. Parts can be accessed by either a part ID (integer) or a part handle. Each part is wholly contained within an MPI process. An MPI process may have multiple parts.

Objects in partitions can have "global" IDs ("global" with respect to a partition's MPI Communicator). Uniqueness of global IDs is managed at the partition level.
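The containment rules above (a partition has a communicator, subdivides one or more meshes, and contains parts that each live wholly on one process) can be sketched as a toy data model. Every type, field, and function name below is illustrative only, not a proposed interface name, and a plain int stands in for the MPI communicator so the sketch is self-contained.

```c
#include <assert.h>

/* Toy model of the terminology above; all names are illustrative only. */
typedef int MeshHandle;   /* identifies a mesh instance */
typedef int PartHandle;   /* identifies a part; a part lives wholly on one process */

#define MAX_MESHES 4
#define MAX_PARTS  8

typedef struct {
    int        comm;                  /* stand-in for the partition's MPI Communicator */
    MeshHandle meshes[MAX_MESHES];    /* meshes this partition subdivides */
    int        num_meshes;
    PartHandle parts[MAX_PARTS];      /* the on-process parts of this partition */
    int        num_parts;
} Partition;

/* Create an empty partition over the given communicator. */
Partition partition_create(int comm)
{
    Partition p = { comm, {0}, 0, {0}, 0 };
    return p;
}

/* Record that the partition subdivides a mesh; returns 0 on success. */
int partition_add_mesh(Partition *p, MeshHandle m)
{
    if (p->num_meshes == MAX_MESHES) return -1;
    p->meshes[p->num_meshes++] = m;
    return 0;
}

/* Add an on-process part to the partition; returns the new part handle. */
PartHandle partition_add_part(Partition *p)
{
    PartHandle h = (PartHandle)p->num_parts;
    p->parts[p->num_parts++] = h;
    return h;
}
```

The sketch deliberately keeps meshes and parts as separate lists inside the partition, mirroring the statement that partitions know both which meshes they subdivide and which parts they contain.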
Partitions subdivide a collection of objects; each object is an entity or a group of entities. Each object is assigned to a part. Lower-dimension entities associated with each object are also given a part assignment, induced by the partition in an implementation-determined fashion.

Meshes contain entity sets; partitions contain parts. Many iMesh functions that accept a mesh and an entity set handle are also useful in the context of partitions and parts. These functions will be reinterpreted so that they can accept either a mesh and an entity set handle, or a partition and a part handle. In his Oct 15 email ("Partitioned Meshes or How to Use Parts in the Current Serial Interface"), Carl outlined the small changes or reinterpretations needed to support this capability. Since there was little disagreement with his write-up, we will accept it as is for the first draft of the parallel interface.

There is one root set per mesh instance; that is, if there are multiple meshes, there are multiple root sets. For a mesh distributed among multiple MPI processes, there is one root set per mesh per process. (The write-up below doesn't depend on this assumption, but we need to agree on some definition of "root set" so that we use it consistently.)

I avoid using the term "iPart," as it is ambiguous whether an "iPart" is a partition or a part. I suggest we forget the term "iPart" altogether.

-------------------
Partition:

Functionality:
- Create a partition: Given an array of mesh handles for the meshes to be included in the partition and an MPI Communicator, return a partition instance.
- Destroy a partition: Given a partition instance, destroy it and set its handle to NULL.
- Given a partition, return its MPI Communicator.
- Given a partition, return the mesh handles of the meshes included in the partition.
- Given a partition, return the total number of parts in the partition.
- Given a partition, return the number of parts on each MPI process.
- Map from parts to processes:
  + Given a partition and a part ID, return the MPI rank of the process that owns the part.
  Note: Need single-part-ID and array-of-part-IDs versions of this function.
- Map from processes to parts:
  + Given a partition and an MPI process rank, return the number of parts owned by the process.
  + Given a partition and an MPI process rank, return the part IDs owned by the process.
  Note: Need single-process and array-of-processes versions of this function.
- Map from the MPI process (as determined by MPI_Comm_rank on the partition's communicator) to part handles:
  + Given a partition, return the number of on-process parts in the partition.
  + Given a partition, return all on-process part handles in the partition.
  + Part iterator: Given a partition, iterate over the on-process parts in the partition.
- Manage global IDs of objects that are partitioned; "global" here means with respect to the partition's communicator:
  + Set/retrieve global IDs; generate global IDs; set/retrieve global ID size; map between entities and global IDs; compare global IDs.
  Since there was little disagreement with Vitus'/Karen's document on global IDs, we will use it in the first draft, with modifications to associate the "global" nature of the IDs with partitions; these changes address some of Tim's concerns. We will keep the helper functions defined, rather than requiring the application to manage them with special tags.
- Provide information about the partition that is "global" with respect to a given partition.
  Given a partition, return:
  + Total number of entities in the partition;
  + Total number of entity sets in the partition;
  + Total number of entities with given type, tag, and/or tag name in the partition;
  + Total number of entity sets with given type, tag, and/or tag name in the partition;
  + The number of entities in each part of the partition;
  + The number of entity sets in each part of the partition;
  + The number of entities with given type, tag, and/or tag name in each part of the partition;
  + The number of entity sets with given type, tag, and/or tag name in each part of the partition;
  + All tag names over the partition;
  + All tag names in each part of the partition;
  + All entities in this partition having a given type, tag, and/or tag name;
  + All entity sets in this partition having a given type, tag, and/or tag name.
  Note: These operations require communication.
  Note: These operations are analogous to the global operations on meshes defined below.

Questions:
- How should we expect applications to build partitions? Should they create a partition, add parts to it, and then populate the parts with entities? Is this analogous to how meshes are built (create a mesh, add a root set to it, then populate the root set)?
- RPI's write-up included additional part IDs that were local to an MPI process. I did not include these in the functionality above. Do we really need them? Aren't part handles a sort of "local" ID?
- RPI's write-up included an iterator over parts in a partition. I included it above, but Carl and Tim questioned whether it is needed, given that (1) the number of part handles on a process would likely be small enough to fit in an array, and (2) there is no analogous set iterator.
- RPI's write-up included getNumPartsPerProcArr. I did not explicitly include this above, although it would be possible to obtain this information using an array version of "Map from processes to parts" above. Is this capability sufficient?
  Note that getNumPartsPerProcArr could require a very large amount of memory when the number of processes is large.
- Do we need explicit management of parts' assignments to processes (e.g., Carl's addPartOnProcess, rmvPartOnProcess)? Or is the existence of a part on a process equivalent to its being "assigned" to that process?
- Currently, iMesh doesn't have the functionality to get entities or entity sets by type and tag in serial. Should it? If so, then we'll need it in parallel for partitions, too.
- Why are the functions that return arrays of information with respect to each part needed? For runs on large numbers of processors, these arrays can be VERY big, and the memory use is not scalable.
- Should we be concerned about functions that return all entities with given characteristics within a partition? The memory use of these functions can be LARGE.

-------------------
Part:

Functionality:
- Given a partition, create a part and add it to the partition on the MPI process invoking the creation. Return the part handle.
- Given a partition and a part handle, remove the part from the partition, destroy the part, and set the part handle to NULL.
- Map between part handles and part IDs:
  + Given a partition and a part handle, return the part ID.
  + Given a partition and a part ID, return the part handle if the part is on-process; return an error code otherwise.
  Note: Need single-part and array-of-parts versions of these functions.
- Identify parts that neighbor a given part:
  + Given a partition and a part handle, return the number of parts in the partition that neighbor the given part.
  + Given a partition and a part handle, return the part IDs of all parts in the partition that neighbor the given part.
  Note: Need a precise definition of "neighbor."
  Note: Need single-part and array-of-parts versions of these functions.
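The handle/ID mapping above (part IDs are partition-wide, part handles are on-process, and an ID lookup fails for off-process parts) can be illustrated with a toy table. The dense-array layout, the PART_NOT_LOCAL error code, and both function names are assumptions for illustration, not proposed interface names.

```c
#include <assert.h>

/* Toy sketch of the part-handle/part-ID maps described above, including
 * the "error if off-process" behavior. All names are illustrative only. */
#define NUM_LOCAL_PARTS 3
#define PART_NOT_LOCAL  (-1)

/* Partition-wide part IDs of the parts on this process; the array index
 * doubles as the on-process part handle. */
static const int local_part_ids[NUM_LOCAL_PARTS] = { 4, 7, 9 };

/* Given an on-process part handle, return its partition-wide part ID. */
int part_id_of_handle(int handle)
{
    if (handle < 0 || handle >= NUM_LOCAL_PARTS) return PART_NOT_LOCAL;
    return local_part_ids[handle];
}

/* Given a part ID, return the on-process part handle, or PART_NOT_LOCAL
 * if the part lives on another process. */
int handle_of_part_id(int part_id)
{
    for (int h = 0; h < NUM_LOCAL_PARTS; h++)
        if (local_part_ids[h] == part_id)
            return h;
    return PART_NOT_LOCAL;
}
```

Note the asymmetry the sketch makes concrete: handle-to-ID is always answerable locally, while ID-to-handle can legitimately fail, which is why the text above requires an error code for off-process parts.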
- Provide part boundary info:
  + Given a partition, a part handle, and a neighboring part ID, return the number of boundary entities on the part boundary shared with the neighboring part.
  + Given a partition, a part handle, and a neighboring part ID, return an array of entity handles for all entities along the part boundary shared with the neighboring part.
  + Boundary iterators: Given a partition, a part handle, and a neighboring part ID, return an iterator over all entities along the part boundary shared with the neighboring part.
  Note: Allow optional specification of desired entity types and topologies; allow neighboring part ID = -1 to count all qualifying boundary entities of the part.
- Provide entity information about a part; e.g., given a partition and a part handle, return the number of entities in the part, lists of entities in the part, etc. This functionality is largely accomplished by substituting the part handle for the entity set handle and the partition handle for the mesh handle in existing iMesh functions. See Carl's Oct 15 write-up referenced above.
- Provide part information about an entity: Given a partition and an entity, return the part ID of the part that owns the entity. Must return an error code if the entity is not in the partition (e.g., if the partition assigns surfaces to parts, it doesn't make sense to ask which part owns a given region).
  Note: Need single-entity and array-of-entities versions of this function.
- Add/remove on-process entities to/from an on-process part: Given a partition, an entity handle, and a part handle, add the entity to the part. This operation can be accomplished by the functions that add entities to sets.
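The last bullet above says adding an on-process entity to an on-process part can reuse the existing set-insertion machinery. A minimal sketch of that wrapper pattern follows; EntitySet, addEntToSet, and addEntToPart are hypothetical names standing in for whatever the serial interface actually provides, not the agreed interface.

```c
#include <assert.h>

/* Sketch of "add entity to part" as a thin wrapper over set insertion,
 * as suggested above. All names here are illustrative only. */
#define MAX_ENTS 16

typedef struct {
    int ents[MAX_ENTS];
    int num_ents;
} EntitySet;

/* Generic set insertion (stands in for the existing serial set function). */
int addEntToSet(EntitySet *set, int ent_handle)
{
    if (set->num_ents == MAX_ENTS) return -1;  /* set full: error code */
    set->ents[set->num_ents++] = ent_handle;
    return 0;
}

/* A part behaves like an entity set in this sketch, so adding an
 * on-process entity to an on-process part is just set insertion on the
 * part's contents; no new machinery is required. */
typedef struct {
    int       part_id;
    EntitySet contents;
} Part;

int addEntToPart(Part *part, int ent_handle)
{
    return addEntToSet(&part->contents, ent_handle);
}
```

This mirrors the document's broader reinterpretation strategy: wherever a part can stand in for an entity set, the serial functions carry over unchanged.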
- Add entities to on-process and/or off-process parts: This functionality could be implemented through a send/receive message pair. Input is a partition, a source part handle, an entity handle, a target part ID, and a command code (e.g., MIGRATE/COPY); e.g.,
  + sendEntArrToParts(partition, source_part_handle, ent_handle_array, target_part_ID_array, MIGRATE) moves entities to new parts; sendEntArrToParts(partition, source_part_handle, ent_handle_array, target_part_ID_array, COPY) copies entities into new parts.
  + receiveEntArr: a receive that blocks until all expected data are received; it can also update ghost/boundary/internal data as needed.
  Note: Some set-up communication may be needed for the receive to determine which messages it is waiting for.
  Note: Using a pair of calls allows at least some latency hiding.

Questions:
- Is there a need for adding a part to a partition or removing it from a partition beyond creating and destroying parts?
- The send/receive mechanism above exports entities into new parts. Do we also need a mechanism for requesting entities that a part wants to import? Is this import capability what Tim and Onkar meant by "getOwnerOfEnt, getCopiesOfEnt, getCopyOfEnt, getNumOfCopiesOfEnt"?

-------------------
Mesh:

Functionality:
- Given a mesh instance, return handles for all partitions that include the mesh.
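Since a mesh can be subdivided by several partitions, the query above is a reverse lookup over mesh-partition memberships. A toy sketch follows; the link-table layout and all names are assumptions for illustration, and real implementations would presumably store the relation on the mesh instance itself rather than scanning a global table.

```c
#include <assert.h>

/* Toy sketch of the mesh-to-partitions query above. All names and the
 * table layout are illustrative only. */
#define MAX_LINKS 16

typedef struct { int mesh; int partition; } Link;

static Link links[MAX_LINKS];
static int  num_links = 0;

/* Record that a partition subdivides a mesh ("meshes know which
 * partitions they are in"). */
void link_mesh_partition(int mesh, int partition)
{
    links[num_links].mesh = mesh;
    links[num_links].partition = partition;
    num_links++;
}

/* Write the handles of all partitions that include the mesh into out[]
 * and return how many there are. */
int partitions_of_mesh(int mesh, int out[MAX_LINKS])
{
    int n = 0;
    for (int i = 0; i < num_links; i++)
        if (links[i].mesh == mesh)
            out[n++] = links[i].partition;
    return n;
}
```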
- Return information about the mesh that is "global" with respect to a given partition. Given a partition and a mesh, return:
  + Total (over the partition) number of entities in the mesh;
  + Total (over the partition) number of entity sets in the mesh;
  + Total (over the partition) number of entities with given type, tag, and/or tag name in the mesh;
  + Total (over the partition) number of entity sets with given type, tag, and/or tag name in the mesh;
  + The number of this mesh's entities in each part of the partition;
  + The number of this mesh's entity sets in each part of the partition;
  + The number of this mesh's entities with given type, tag, and/or tag name in each part of the partition;
  + The number of this mesh's entity sets with given type, tag, and/or tag name in each part of the partition;
  + All tag names (over the partition) for the mesh;
  + All tag names in each part of the partition for the mesh;
  + All entities (over the partition) in this mesh having a given type, tag, and/or tag name;
  + All entity sets (over the partition) in this mesh having a given type, tag, and/or tag name.
  Note: These operations all require communication.
  Note: These functions are analogous to the global partition operations above.

Questions:
- The global mesh functions above are analogous to the global partition functions. The global mesh functions return data for a particular mesh within a partition; the global partition functions return data for all meshes within a partition. Do we need/want both? If there is only one mesh, the two sets of functions are equivalent.
- Currently, iMesh doesn't have the functionality to get entities or entity sets by type and tag in serial. Should it? If so, then we'll need it in parallel, too.
- Why are the functions that return arrays of information with respect to each part needed? For runs on large numbers of processors, these arrays can be VERY big, and the memory use is not scalable.
- Should we be concerned about functions that return all entities with given characteristics within a partition? The memory use of these functions can be LARGE.

-------------------
Entity:

Functionality:
- Provide entity categorization within a part (boundary, copy, owned, etc.):
  + Given a partition, a part handle, and an entity handle, return a flag indicating whether the entity is owned by the part or is a copy.
  + Given a partition, a part handle, and an entity handle, return a flag indicating whether the entity is strictly internal, on a boundary, or a ghost.
  Note: Need single-entity and array-of-entities versions of this function.

Questions:
- From Carl: "getTag*Operate: Again, we haven't got this in serial. Does the existence of such operations imply that we expect to implement fields as tags? (Because that wasn't what I was assuming about field implementations at all, personally...) Note that I'm not opposed to this sort of global reduction operation, I just wonder whether it'll see use outside of field-like situations. If not, then it should be in parallel fields, not parallel mesh, and usage for fields-implemented-as-tags should be handled there."

% CVS File Information
% $RCSfile: CombinedModelByKaren.txt,v $
% $Author: kddevin $
% $Date: 2007/11/09 23:00:38 $
% $Revision: 1.10 $
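The two per-entity flags in the Entity section above (owned-vs-copy and internal/boundary/ghost) could be modeled as a pair of enums plus a consistency check. The enum names, the EntityStatus struct, and the rule that a ghost is never owned by the part holding it are all assumptions of this sketch, not agreed interface decisions.

```c
#include <assert.h>

/* Sketch of the two entity-categorization queries in the Entity section.
 * All names and the consistency rule are illustrative assumptions. */
typedef enum { OWNED, COPY } Ownership;
typedef enum { INTERNAL, BOUNDARY, GHOST } Locality;

typedef struct {
    int       handle;  /* the entity handle */
    Ownership own;     /* owned by this part, or a copy? */
    Locality  loc;     /* strictly internal, on a part boundary, or a ghost? */
} EntityStatus;

/* Consistency check under the assumption (made for this sketch) that a
 * ghost entity is always a copy of an entity owned by another part.
 * Returns 1 if the categorization is consistent, 0 otherwise. */
int status_is_consistent(const EntityStatus *s)
{
    if (s->loc == GHOST && s->own == OWNED) return 0;
    return 1;
}
```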