itaps-parallel Micro-migration update
txie at scorec.rpi.edu
Thu Mar 6 12:52:37 CST 2008
Carl,
I have some comments on your last email. Please let me know if I have made
any mistakes or if you have any questions. Thanks.
> A. Request in-migration of an entity (this is a pull migration). This
> entity must be on the part bdry and is identified by local handle, and
> the implementation handles the rest. If include_upward_adj is true, then
> stuff on the remote part also gets migrated (-all- higher-dimensional
> entities). This operation will require multiple rounds of communication,
> and at times certain entities may be locked (unavailable for local
> modification) while info about their remote copies is still in question.
>
> void prefix_migrateEntity(iMesh_Instance instance,
> const prefix_PartitionHandle partition_handle,
> const entity_handle local_entity_handle,
> bool include_upward_adj, int *err);
I have some questions: why do we need the include_upward_adj argument? Why
would we ever want to migrate just an individual lower-order mesh entity?
When you say -all- higher-dimensional entities, do you mean all the
(partition) objects adjacent to that entity on the part boundary, plus all
the lower-order mesh entities bounding those objects?
PS: since C (before C99) has no bool keyword, we may use 'int' instead of
'bool'. And we may use iBase_entity_handle to replace entity_handle.
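With those two changes, the prototype might read as follows (just a
sketch of the suggested substitutions):

void prefix_migrateEntity(iMesh_Instance instance,
                          const prefix_PartitionHandle partition_handle,
                          const iBase_entity_handle local_entity_handle,
                          /* nonzero = also pull upward adjacencies */
                          int include_upward_adj,
                          int *err);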
>
> B. Update vertex coordinates. One could argue that we could overload
> the setVtxCoords function to do this, and maybe we should. But that
> obfuscates when communication could occur. The communication here is
> push-and-forget.
>
> void prefix_updateVtxCoords(iMesh_Instance instance,
> const prefix_PartitionHandle partition_handle,
> const entity_handle local_vertex_handle,
> int *err);
>
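To illustrate the push-and-forget semantics, I imagine a caller would
first move the vertex locally and then push the new coordinates out, with
no reply expected. A minimal sketch (the vertex handle and coordinates
are made up, and I am guessing at the setVtxCoords argument order):

double x = 1.0, y = 2.0, z = 0.0;  /* hypothetical new location */
int err;
/* update the local copy first */
iMesh_setVtxCoords(instance, vertex, x, y, z, &err);
/* then push the change to the remote copies; no reply is expected */
prefix_updateVtxCoords(instance, partition_handle, vertex, &err);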
> C. Poll for messages. The internals of this function are going to have
> to cover a lot of ground. The array in the return is there as a
> placeholder to tell the application that something interesting / useful
> has been done to a handle. This might indicate successful in-migration,
> a recent change in vertex location, or successful completion of handle
> matching.
>
> void prefix_pollForRequests(iMesh_Instance instance,
> const prefix_PartitionHandle partition_handle,
> entity_handle **handles_available,
> int *handles_allocated,
> int *handles_size,
> int *err);
Question: I am wondering how the entity handles are obtained while polling
for messages. This function seems like MPI_Probe:
int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status);
So I suggest the following:
int prefix_pollForRequests(iMesh_Instance instance,
        /*in*/    const prefix_PartitionHandle partition_handle,
        /*in*/    prefix_PartHandle *source_part_handles,
        /*in*/    int source_part_handles_size,
        /*in*/    int *message_tags,
        /*in*/    int message_tags_size,
        /*inout*/ int **flags,
        /*inout*/ int *flags_size,
        /*out*/   int *err);
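With this signature, I imagine the caller polling roughly like this
(part handles, tags, and the meaning of the flags are all illustrative,
and I assume the implementation allocates the flags array, as iMesh does
for inout arrays):

int *flags = NULL;
int flags_size = 0;
int i, err;
prefix_pollForRequests(instance, partition_handle,
                       source_part_handles, source_part_handles_size,
                       message_tags, message_tags_size,
                       &flags, &flags_size, &err);
for (i = 0; i < flags_size; i++) {
    if (flags[i]) {
        /* e.g. an in-migration finished, a vertex moved, or handle
           matching completed for source_part_handles[i] */
    }
}
free(flags);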
> D. Done with micro-migration. This is a blocking call, to get
> everything up-to-date and back in synch. Essentially, it waits for all
> message traffic to clear, as well as (possibly) rebuilding a bunch of
> ghost info that was allowed to go obsolete.
>
> void prefix_synchParts(iMesh_Instance instance,
> const prefix_PartitionHandle partition_handle,
> int *err);
>
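For what it is worth, here is how I read calls A-D fitting together in
one micro-migration episode (a sketch built from the prototypes quoted
above; the variable declarations are illustrative):

entity_handle *handles = NULL;
int handles_allocated = 0, handles_size = 0, err;
/* A: request in-migration of a boundary entity, upward adjacencies too */
prefix_migrateEntity(instance, partition_handle, boundary_entity, 1, &err);
/* ... other local work; the entity may be locked in the meantime ... */
/* C: poll until the migration shows up as completed */
prefix_pollForRequests(instance, partition_handle,
                       &handles, &handles_allocated, &handles_size, &err);
/* D: blocking call to clear remaining traffic and rebuild ghost info */
prefix_synchParts(instance, partition_handle, &err);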
The last question: if we agree that micro-migration needs many rounds of
communication, then what about macro-migration (the term may not be
proper): sending and receiving an entity array? It needs even more rounds
of communication.
For example, when the destination part receives an entity array from
another part, it creates new mesh entities, and then it needs to "send
back" some of the newly created entities, finally letting all copies on
the part boundaries know about their other remote copies. To finish all
the migration work, maybe two functions for sending and receiving entity
arrays are not enough.
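To make the extra rounds concrete, here is roughly what the exchange
between a source and a destination part looks like in plain MPI (purely
illustrative; the packed buffers, counts, and tags are all hypothetical):

#include <mpi.h>

/* round 1: source sends the packed entity array to the destination;
   round 2: destination creates the new entities and sends their handles
            back to the source;
   round 3: both sides update remote-copy links, possibly notifying other
            parts that share the affected boundary entities. */
void macro_migration_rounds(int src, int dst, int rank, MPI_Comm comm)
{
    if (rank == src) {
        double packed_entities[64] = {0}; /* hypothetical packed data */
        long   new_handles[16];
        MPI_Send(packed_entities, 64, MPI_DOUBLE, dst, 1, comm);
        MPI_Recv(new_handles, 16, MPI_LONG, dst, 2, comm,
                 MPI_STATUS_IGNORE);
        /* round 3: record new_handles as remote copies ... */
    } else if (rank == dst) {
        double packed_entities[64];
        long   new_handles[16] = {0};
        MPI_Recv(packed_entities, 64, MPI_DOUBLE, src, 1, comm,
                 MPI_STATUS_IGNORE);
        /* ... unpack, create mesh entities, fill new_handles ... */
        MPI_Send(new_handles, 16, MPI_LONG, src, 2, comm);
    }
}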
Regards,
Ting