itaps-parallel iMeshP_syncPartitionAll, iMeshP_getNumGlobalParts, and communication

txie at scorec.rpi.edu txie at scorec.rpi.edu
Tue Oct 14 19:22:13 CDT 2008


Jason,

Please see my comments below.  Thanks.

> I've been working on implementing the iMeshP interface for MOAB, and have
> encountered a tricky spot to implement:
>
> The intent here appears to be (please correct me if I'm misunderstanding
> this) that between creating a partition and parts and calling something
> like iMeshP_getNumGlobalParts, the application will call
> iMeshP_syncPartitionAll.  While this model works fine for partitions
> created by the application, it doesn't work very well for something like
> reading a file containing N different partitionings of varying coarseness.


Could you give us a more detailed explanation of why it does not work well? I
still cannot understand why iMeshP_getNumGlobalParts needs communication.

Thanks,

Ting

>
> The options I can think of for implementing iMeshP_getNumGlobalParts such
> that it can handle partitions read from files are:
>
> 1) Do global communication for all partitions during file load to cache
>     global number of parts for each partition read from file.
>
> 2) Require the application to associate an MPI_Comm with any partition (we
>     don't have an API for this) and call iMeshP_syncPartitionAll on it
>     before expecting to be able to get any global data.
>
> 3) Require the application to associate an MPI_Comm with any partition (we
>     don't have an API for this) and do the communication in
>     iMeshP_getNumGlobalParts.
>
>
> The first option is unpleasant because it involves doing a bunch of
> communication to cache data that may never be used.  The second is rather
> convoluted.  The third option really seems like the best one to me.  Why
> did
> we decide that we should not be able to do communication in
> iMeshP_getNumGlobalParts?  Do we really expect any application to a) not
> call this collectively and b) call it more than once?
>
> thanks,
>
> - jason
>
>





More information about the itaps-parallel mailing list