[petsc-dev] programming model for PETSc
Matthew Knepley
knepley at gmail.com
Thu Nov 24 18:56:05 CST 2011
On Thu, Nov 24, 2011 at 6:41 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> On Thu, Nov 24, 2011 at 18:30, Dmitry Karpeev <karpeev at mcs.anl.gov> wrote:
>
>> In any event, this business of storing explicit ranks/offsets for the
>> ghost indices is sort of antithetical to the current
>> PetscLayout-based approach of indices having parallel semantics from the
>> outset. A lot of conceptual difficulties
>> in Sieve and MOAB (and the need for the extra objects such as Overlap and
>> ParallelComm (sic!)) stem from a lack of
>> a global address space.
>>
>
> Global addresses and (rank, offset) are equivalent modulo MPI_Scan() which
> costs essentially the same as MPI_Reduce(). Not free, but not huge when
> done in setup. I think global indices are usually easier for users to play
> with (if they need anything that has global semantics, local indices
> combined with a local-to-global map is even better when sufficient), but
> they eventually need to be converted to (rank, offset) for communication
> (MPI does not know about a "global index" and I don't think it should.)
>
I think the idea of a local point of view is fundamental, not just for
parallelism. We use LocalToGlobal mappings in FS so that you do not have
to know about other fields, just as you should not have to know
about other processes.
Matt
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener