[petsc-dev] programming model for PETSc
Barry Smith
bsmith at mcs.anl.gov
Thu Nov 24 21:30:06 CST 2011
On Nov 24, 2011, at 6:41 PM, Jed Brown wrote:
> On Thu, Nov 24, 2011 at 18:30, Dmitry Karpeev <karpeev at mcs.anl.gov> wrote:
> In any event, this business of storing explicit ranks/offsets for the ghost indices is sort of antithetical to the current
> PetscLayout-based approach of indices having parallel semantics from the outset. A lot of conceptual difficulties
> in Sieve and MOAB (and the need for extra objects such as Overlap and ParallelComm (sic!)) stem from the lack of
> a global address space.
>
> Global addresses and (rank, offset) are equivalent modulo MPI_Scan(), which costs essentially the same as MPI_Reduce(). Not free, but not huge when done in setup. I think global indices are usually easier for users to work with (when local semantics suffice, local indices combined with a local-to-global map are even better), but they eventually need to be converted to (rank, offset) for communication. (MPI does not know about a "global index", and I don't think it should.)
Why don't you add PetscGlobalIndexToRankLocalIndex(MPI_Comm, PetscInt n, PetscInt *indices, PetscInt *ranklocalindex) and PetscRankLocalIndexToGlobalIndex() (or better names) to PETSc as utilities, if it is so simple and needed everywhere?
Barry