[itaps-parallel] iMeshP_syncPartitionAll, iMeshP_getNumGlobalParts, and communication
Vitus Leung
vjleung at sandia.gov
Wed Oct 15 11:51:37 CDT 2008
For parallel repartitioning, we almost have the first scenario. Right
now, only a single partition handle is used. For the second scenario,
could iMeshP_exchEntArrToPartsAll() also be enhanced to take an input
partition handle and an output partition handle?
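
Purely as a sketch of what I mean: the second handle is the proposed
addition, and the remaining argument names are only my guess at the
current draft.

    /* Hypothetical two-handle variant; illustration only.          */
    void iMeshP_exchEntArrToPartsAll(
        iMesh_Instance instance,
        const iMeshP_PartitionHandle input_partition,  /* existing handle */
        const iMeshP_PartitionHandle output_partition, /* proposed addition */
        const iBase_EntityHandle *entities,
        int entities_size,
        const iMeshP_Part *target_part_ids, /* destination part per entity */
        iMeshP_RequestHandle *request,      /* completed by a later wait */
        int *err);
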
Vitus
On Wed, 2008-10-15 at 10:26 -0600, Devine, Karen D wrote:
> OK; I think I understand. I hadn't considered the case where multiple
> partitions would be written into a file; I had assumed just one might be in
> the file.
>
> So, yes, we could modify iMeshP_load to accept an array of partition handles
> that would be filled by the load. Then the execution would look like this:
> User creates a mesh instance.
> User creates x partition handles, with associated communicators.
> User calls iMeshP_load, passing in the mesh instance, the array of
> partition handles, and some strings that help iMeshP_load know what to do
> (Ken and Tim are working on some interoperable strings for basic
> functionality). iMeshP_load fills all the partition handles, calls
> iMeshP_syncPartitionAll for each one, and returns them back to the
> application.
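> 
> In rough C, that flow would look something like the sketch below. The
> multi-handle iMeshP_load is the proposed change, not an existing call,
> and the other names and argument lists are only placeholders for
> whatever we settle on:
> 
>     #include "iMesh.h"
>     #include "iMeshP.h"
>     #include <mpi.h>
> 
>     #define NUM_PARTITIONS 2              /* the "x" above */
> 
>     void load_with_partitions(const char *filename)
>     {
>       iMesh_Instance mesh;
>       iMeshP_PartitionHandle partitions[NUM_PARTITIONS];
>       int err;
> 
>       /* 1. Create the mesh instance. */
>       iMesh_newMesh("", &mesh, &err, 0);
> 
>       /* 2. Create x partition handles, each with a communicator. */
>       for (int i = 0; i < NUM_PARTITIONS; i++)
>         iMeshP_createPartitionAll(mesh, MPI_COMM_WORLD,
>                                   &partitions[i], &err);
> 
>       /* 3. Proposed multi-handle load: fills every handle, calls  */
>       /*    iMeshP_syncPartitionAll on each, and hands them back.  */
>       iMeshP_load(mesh, partitions, NUM_PARTITIONS, filename, "", &err);
>     }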
>
> We had agreed at the March bootcamp that some simple values (like number of
> parts in a partition) could be precomputed and stored with the partition.
> More complicated values (like the total number of tets in a partition) would
> require collective communication. Since iMeshP_getNumGlobalParts takes a
> partition handle as input, you would return the precomputed value for the
> input partition handle.
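> 
> For example (a sketch; the exact argument list may not match the
> draft), the call below would just read a stored value:
> 
>     int num_global_parts, err;
> 
>     /* Returns the value precomputed and stored with this handle  */
>     /* at iMeshP_syncPartitionAll time; no collective              */
>     /* communication is needed.                                    */
>     iMeshP_getNumGlobalParts(mesh, partition, &num_global_parts, &err);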
>
> For parallel repartitioning, I can see two scenarios:
> - User provides an input partition handle; iZoltan modifies its contents
> based on the result of a partitioning algorithm. This case would simply
> repartition an already distributed mesh.
> - User provides an input partition handle and an output partition handle;
> iZoltan fills the output partition handle based on the results of a
> partitioning algorithm. This case could be used to create "secondary"
> partitions.
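> 
> With a made-up iZoltan entry point, just to show the two call patterns
> (iZoltan's real interface will differ, and "RCB" is only a stand-in
> for some method string):
> 
>     iMeshP_PartitionHandle secondary;
> 
>     /* Scenario 1: repartition in place; the contents of the       */
>     /* input handle are rewritten with the new decomposition.      */
>     iZoltan_repartition(mesh, partition, partition, "RCB", &err);
> 
>     /* Scenario 2: leave the input partition alone and fill a      */
>     /* separate output handle, e.g. a "secondary" partition.       */
>     iMeshP_createPartitionAll(mesh, MPI_COMM_WORLD, &secondary, &err);
>     iZoltan_repartition(mesh, partition, secondary, "RCB", &err);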
>
> Karen
>
>
> On 10/15/08 10:12 AM, "Jason Kraftcheck" <kraftche at cae.wisc.edu> wrote:
>
> > Devine, Karen D wrote:
> >> My understanding of how one uses partition handles with iMeshP_load is as
> >> follows:
> >>
> >> User creates a mesh instance.
> >> User creates a partition handle, associating a communicator with it.
> >> User calls iMeshP_load with the mesh instance and partition handle. In
> >> addition to doing everything that iMesh_load did to fill the mesh instance,
> >> iMeshP_load constructs parts (either by reading part assignment from files
> >> or calling Zoltan), adds the parts to the partition, adds entities to the
> >> parts, etc., and calls iMeshP_syncPartitionAll. Thus, iMeshP_load returns a
> >> usable mesh instance and partition handle to the application. After that,
> >> calls to iMeshP_getNumParts do not invoke communication, as the number of
> >> parts is pre-computed in iMeshP_syncPartitionAll.
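> >>
> >> In outline (signatures abbreviated and from memory, so the exact
> >> argument lists may differ):
> >>
> >>     iMesh_Instance mesh;
> >>     iMeshP_PartitionHandle partition;
> >>     int err, num_parts;
> >>
> >>     iMesh_newMesh("", &mesh, &err, 0);
> >>     iMeshP_createPartitionAll(mesh, MPI_COMM_WORLD, &partition, &err);
> >>
> >>     /* Reads the file, builds the parts, fills the partition, and  */
> >>     /* finishes by calling iMeshP_syncPartitionAll.                */
> >>     iMeshP_load(mesh, partition, "mesh_file", "", &err);
> >>
> >>     /* No communication: the part count was cached by the sync.    */
> >>     iMeshP_getNumParts(mesh, partition, &num_parts, &err);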
> >>
> >> Are you saying you would like iMeshP_load to accept and return multiple
> >> partition handles? I don't mind making that change, but we haven't yet
> >> worked out the details of having multiple partitions of a mesh (e.g., how to
> >> specify which partition is the "active" partition).
> >>
> >
> > No. I don't think it should need to return multiple partitions. The
> > iMeshP_getNumPartitions and iMeshP_getPartitions should be sufficient for
> > that. I'm more concerned with how to implement iMeshP_getNumGlobalParts if
> > there are more partitions than the "active" one. I had already assumed
> > there'd be multiple partitions as iMeshP_get(Num)Partitions is in the API.
> >
> >
> >> Sorry; I admit I do not understand your use case of doing partitioning in
> >> parallel. Whether we do partitioning serially or in parallel, we
> >> generally create one partition with many parts. Please explain your use
> >> case more. Thanks!
> >>
> >
> > You know more about such things than I. I had assumed that the resulting
> > partitioning need not be related to the partition used to distribute the
> > mesh for the parallel partitioner.
> >
> > - jason