iMesh, iMeshP Use Cases

The use cases below describe the PDE to be solved and any application-imposed constraints on the parallel solution approach. They are meant to illustrate the use of the iMesh and iMeshP interfaces to solve these problems. For a given use case, describe how the constructs proposed for iMesh and iMeshP (e.g. Process, iMesh/iMeshP Instances, iMesh/iMeshP API functions, Partition, Part) will be used to solve the problem. While not absolutely required, this will generally require a description of:

- the basic approach to distributing the domain, and the computations for the domain, across Processes
- what kinds of communication are done in general, and how they interact with iMeshP constructs
- the basic steps of the computational or time-step kernel, and how they interact with iMeshP/iMesh.

=====================
Use case 1: Solve a discretized PDE with FEM using spatial domain decomposition
---------------------
Problem Statement

Solve dF/dx = f(x) on a spatial domain with prescribed boundary conditions.

===============================
Use case 2: Radiation transport
-------------------------------
Problem Statement

In this use case, the radiation flux phi(x,o,E,t) is computed over a finite element grid, with flux values stored at vertices. The independent variables are x (3 spatial dimensions), o (angle, two dimensions), E (energy), and t (time). In a very simplified form, assuming a single energy, the implicit formulation of this equation can be written as

   d/d(x,t) phi(x,o,t1) = S(x,t) + phis(x,t0)

where phis(x,t) = sum_o{w_o phi(x,o,t)} is the scalar flux, a weighted sum of the angular fluxes phi(x,o,t), and S(x,t) is a radiation source at x and t.

The discretized problem is partitioned into S spatial domains {si} and O angular domains {oj}. A given time step solution consists of two parts. First, for each angular subdomain oj, the problem is solved over the spatial subdomains si using a domain decomposition PDE solution method; the result is phi(v(si),oj,t0), the radiation flux in angular direction oj at each vertex v(si). Second, the angular flux is converted to a scalar flux phis(v(si),t0) by computing, at each vertex, a weighted sum over the angular subdomains of the angular fluxes phi(v(si),oj,t0). The scalar flux is used in computing the angle-dependent flux phi(v(si),oj,t1) on the next iteration.

Note that this problem is grossly over-simplified compared to how it is typically solved. In typical solutions, the problem is solved with nested iterations: an inner iteration over angle o, accounting for scattering into and out of a given angle, and an outer iteration over energy E, accounting for energy- and flux-dependent sources/sinks (e.g. scattering and fission).

===============================
Use case 3: Structural dynamics with parallel contact detection
[Based on S. Plimpton et al., "Parallel Transient Dynamics Simulations: Algorithms for Contact Detection and Smoothed Particle Hydrodynamics", J. Parallel Distrib. Comput. 50, 104-122 (1998)]
-------------------------------
Problem Statement

Solve a structural dynamics problem

   d^2/dt^2 x(x,t) = f(x,t)

over a volumetric domain V with multiple connected sets V_k, handling cases where connected sets come into contact and exert force on each other when their boundaries collide.

Solve each time step in two phases. In the first phase, solve for new positions x(x,t) using a standard (spatial) domain decomposition FEM, with each process responsible for computing the behavior of a spatial subdomain.
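As a rough illustration of this first phase only, the C sketch below advances one explicit time step of a domain-decomposed solve. It is a minimal sketch under several assumptions: each degree of freedom is treated as a scalar, the element-level FEM assembly is replaced by a hypothetical placeholder (compute_internal_forces), and the force contributions at subdomain-interface nodes are summed with a single MPI_Allreduce rather than a neighbor-to-neighbor exchange. A real code would identify those interface nodes through iMeshP Part-boundary information and exchange only with neighboring Parts; those iMeshP calls are deliberately omitted here.

  #include <mpi.h>
  #include <stdlib.h>

  /* Placeholder for the element-level FEM force assembly on this rank's
   * spatial subdomain.  A real code would loop over the elements of its
   * Part (obtained through iMesh/iMeshP queries) and scatter element forces
   * to nodes; here the arrays are simply zeroed so the sketch compiles and
   * runs. */
  static void compute_internal_forces(int n_own, int n_shr, const double *x,
                                      double *f_own, double *f_shr)
  {
    (void)x;
    for (int i = 0; i < n_own; ++i) f_own[i] = 0.0;
    for (int i = 0; i < n_shr; ++i) f_shr[i] = 0.0;
  }

  /* Advance positions x and velocities v by one explicit step of size dt.
   * The first n_own entries belong to nodes owned only by this subdomain;
   * the last n_shr entries are interface nodes shared (in the same order)
   * by all subdomains, so their force contributions can be summed with one
   * collective call. */
  static void advance_one_step(MPI_Comm comm, double dt, int n_own, int n_shr,
                               double *x, double *v, const double *mass)
  {
    double *f_own = calloc(n_own, sizeof(double));
    double *f_shr = calloc(n_shr, sizeof(double)); /* local contributions    */
    double *f_sum = calloc(n_shr, sizeof(double)); /* summed over subdomains */

    /* 1. assemble forces on the owned subdomain */
    compute_internal_forces(n_own, n_shr, x, f_own, f_shr);

    /* 2. sum interface-node contributions across subdomains (simplified to
     *    a global reduction; neighbor-to-neighbor exchange over Part
     *    boundaries would be used in practice) */
    MPI_Allreduce(f_shr, f_sum, n_shr, MPI_DOUBLE, MPI_SUM, comm);

    /* 3. explicit update of velocities and positions */
    for (int i = 0; i < n_own; ++i) {
      v[i] += dt * f_own[i] / mass[i];
      x[i] += dt * v[i];
    }
    for (int i = 0; i < n_shr; ++i) {
      v[n_own + i] += dt * f_sum[i] / mass[n_own + i];
      x[n_own + i] += dt * v[n_own + i];
    }

    free(f_own); free(f_shr); free(f_sum);
  }

  int main(int argc, char **argv)
  {
    MPI_Init(&argc, &argv);

    enum { N_OWN = 4, N_SHR = 2, N = N_OWN + N_SHR };
    double x[N] = {0.0}, v[N] = {0.0}, mass[N];
    for (int i = 0; i < N; ++i) mass[i] = 1.0;

    /* one time step of the first (FEM) phase */
    advance_one_step(MPI_COMM_WORLD, 1.0e-3, N_OWN, N_SHR, x, v, mass);

    MPI_Finalize();
    return 0;
  }

In iMeshP terms, each rank's spatial subdomain corresponds to a Part of the Partition, and the shared interface nodes are the vertices on the Part boundary, which is the kind of information the iMeshP constructs are meant to provide.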
In the second phase, for all nodes on faces bounding a single volume element (i.e. faces that are not connected to two volume elements), and given the length of the next time step t, compute the location and time of collision of each such node with any other boundary face. Solve this second phase by assigning boundary vertices to processes based on a recursive coordinate bisection (RCB) algorithm (a minimal serial sketch of RCB appears at the end of this section), and by copying connected boundary faces to any process owning one of their vertices. Note that in the second phase, certain restrictions must be placed on the minimum size of an RCB cell, such that a boundary face cannot geometrically intersect an RCB cell without having one of its vertices in that cell. No restrictions should be placed on which process is assigned a volume element and any of its connected boundary faces; that is, a volume element and its connected boundary faces are not required to be solved by the same process.

===============================
Use case 4: Parallel repartitioning
-------------------------------
Problem Statement

Load a mesh into a set of N Processes, using iMeshP_loadAll. Call a parallel partitioner, e.g. Zoltan, to compute a new partition for M Processes, where M > N. Use those results to migrate the mesh in situ (i.e. without going through a disk write/read) to a new Partition with M Processes.

===============================
Use case 5: Parallel mesh generation (using serial meshing library)
-------------------------------
Problem Statement

Start with a geometric model represented in parallel, such that for every entity the owner Process and the entity's handle on that Process are known, and on the owner Process all sharing Processes and the entity's handles on those Processes are known. The model is accessed through iGeom and, for the parallel data, through some interface iGeomP able to provide the required parallel information.

The general task is to generate a mesh for this model using a domain decomposition-like approach, as follows. Starting with vertices (d=0) and working upward in topological dimension d:

a) generate mesh for each owned model entity of dimension d, keeping the mesh on bounding entities of dimension d' < d fixed.
b) communicate the mesh entities resolving a given model entity to all Processes sharing that model entity, such that copies know the owning Process/handle and the owner knows all handles/Processes.

Mesh generation algorithms should use only iMesh interface functions, to maximize the pool of algorithm implementations that can be used.

===============================
Use case 6: Multiple Parts per Process
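Referenced from use case 3 above: the following is a minimal, serial sketch of the recursive coordinate bisection (RCB) idea used there to assign boundary vertices to processes, and available in parallel form from partitioners such as Zoltan (see use case 4). The point set is recursively split at the median coordinate along the longest axis of its bounding box until the requested number of cells is reached. The names here (Point, rcb, cmp_point) are purely illustrative, and the sketch omits both the distributed-memory machinery a real parallel RCB needs and the minimum-cell-size restriction noted in use case 3.

  #include <stdio.h>
  #include <stdlib.h>

  /* A point to be partitioned (e.g. a boundary-vertex coordinate) and the
   * RCB cell it ends up assigned to. */
  typedef struct { double c[3]; int part; } Point;

  /* Axis used by the qsort comparator below (a file-scope variable is used
   * because portable qsort takes no user context argument). */
  static int cmp_axis;
  static int cmp_point(const void *a, const void *b)
  {
    double da = ((const Point *)a)->c[cmp_axis];
    double db = ((const Point *)b)->c[cmp_axis];
    return (da > db) - (da < db);
  }

  /* Recursively assign points[0..n) to RCB cells [part0, part0 + nparts). */
  static void rcb(Point *points, int n, int part0, int nparts)
  {
    if (nparts <= 1 || n == 0) {
      for (int i = 0; i < n; ++i) points[i].part = part0;
      return;
    }

    /* find the longest axis of the bounding box of the point set */
    double lo[3] = { 1e300, 1e300, 1e300 }, hi[3] = { -1e300, -1e300, -1e300 };
    for (int i = 0; i < n; ++i)
      for (int d = 0; d < 3; ++d) {
        if (points[i].c[d] < lo[d]) lo[d] = points[i].c[d];
        if (points[i].c[d] > hi[d]) hi[d] = points[i].c[d];
      }
    int axis = 0;
    for (int d = 1; d < 3; ++d)
      if (hi[d] - lo[d] > hi[axis] - lo[axis]) axis = d;

    /* bisect at the median along that axis, sizing the halves in proportion
     * to the number of cells requested on each side */
    cmp_axis = axis;
    qsort(points, n, sizeof(Point), cmp_point);
    int nleft_parts = nparts / 2;
    int nleft = (int)((long long)n * nleft_parts / nparts);

    rcb(points, nleft, part0, nleft_parts);
    rcb(points + nleft, n - nleft, part0 + nleft_parts, nparts - nleft_parts);
  }

  int main(void)
  {
    /* tiny demonstration: 8 points split into 4 RCB cells */
    Point pts[8];
    for (int i = 0; i < 8; ++i) {
      pts[i].c[0] = 0.5 * i;
      pts[i].c[1] = 1.0 * (i % 3);
      pts[i].c[2] = 0.0;
      pts[i].part = -1;
    }
    rcb(pts, 8, 0, 4);
    for (int i = 0; i < 8; ++i)
      printf("point %d -> cell %d\n", i, pts[i].part);
    return 0;
  }

In use case 3, the resulting cells determine which process handles each boundary vertex, with connected boundary faces then copied to every process that owns one of their vertices.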