[petsc-dev] PetscSection
Matthew Knepley
knepley at gmail.com
Thu Nov 8 22:58:41 CST 2012
On Thu, Nov 8, 2012 at 11:52 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> On Thu, Nov 8, 2012 at 10:31 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>
>> This has been redone to remove Jed's objection (see new manual section).
>> It now uses DM temp arrays,
>>
>> for (c = cStart; c < cEnd; ++c) {
>>   PetscInt     numVals;
>>   PetscScalar *vals;
>>
>>   DMComplexVecGetClosure(dm, section, vec, c, &numVals, &vals);
>>   /* Compute the element residual in place into vals */
>>   DMComplexVecSetClosure(dm, section, resvec, c, vals, ADD_VALUES);
>>   DMComplexVecRestoreClosure(dm, section, vec, c, &numVals, &vals);
>> }
>
>
> My other problem here is that VecGetArray() is potentially expensive (need
> to check coherence with other vectors, might not be contiguous for some Vec
> implementations).
I don't care about expensive VecGetArray(). We do it absolutely everywhere
in PETSc. We recommend that users do it. This is not a real objection.
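For reference, the recommended pattern is just the usual get/restore pair;
a minimal sketch, zeroing the resvec from the loop above:

  PetscScalar *array;
  PetscInt    i, n;

  VecGetLocalSize(resvec, &n);
  VecGetArray(resvec, &array);     /* check out the raw storage */
  for (i = 0; i < n; ++i) array[i] = 0.0;
  VecRestoreArray(resvec, &array); /* hand it back so the Vec stays consistent */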
>>
>> > 2. Add DMComplex-specific access routines so the user does not need
>> > to see the PetscSection. Presumably this would be something like
>> > DMComplexGetPointOffset(dm,c,&offset);       // offset into owned part of global vector?
>> > DMComplexGetPointOffsetLocal(dm,c,&loffset); // offset into local vector
>>
>> This is cumbersome because you must make another DM for every PetscSection
>> you want to use.
>
>
> What other sections do users need? A trace space?
Blaise uses a couple for every problem. We have several in PyLith
(ground surface, fault).
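For any of those extra sections the access pattern is the same; a minimal
sketch with the existing PetscSection API (the faultSec and point names
are made up):

  PetscInt dof, off;

  /* faultSec is a second PetscSection laid out over the same mesh points */
  PetscSectionGetDof(faultSec, point, &dof);
  PetscSectionGetOffset(faultSec, point, &off);
  /* entries off..off+dof-1 of the fault vector's array belong to point */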
>>
>> However, I am not completely against this now because DMComplexClone()
>> is easy and accomplishes this.
>>
> [...]
>>
>>
>> I hate cursors. I had the same experience with them no matter what they
>> are called (iterators, etc.). You need so much information that you end up
>> with the whole object in this "external" thing. I think they never pay off.
>
>
> How do you intend to support many threads calling DMComplexVecGetClosure(),
> perhaps each with multiple buffers, without expensive instructions?
GetClosure() is not a problem. All threads are reading from the same
topology and writing to separate output buffers. SetClosure() needs to be
locked.
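Roughly, the pattern would be (just a sketch, assuming the closure work
arrays are per-thread; OpenMP is only for illustration):

  #pragma omp parallel for
  for (c = cStart; c < cEnd; ++c) {
    PetscInt     numVals;
    PetscScalar *vals;

    /* read-only traversal of the shared topology, result in a private buffer */
    DMComplexVecGetClosure(dm, section, vec, c, &numVals, &vals);
    /* compute the element residual in place into vals */
    #pragma omp critical
    DMComplexVecSetClosure(dm, section, resvec, c, vals, ADD_VALUES);
    DMComplexVecRestoreClosure(dm, section, vec, c, &numVals, &vals);
  }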
>>
>>
>> These are for multi-field splitting.
>
>
> Clearly, but why do they need to be in PetscSection? We already need DMs
> associated with fields.
Someone needs to know the splitting. If you had a DM, this info is still
stuck down somewhere, so that DM is holding other DMs.
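As a sketch of what the section already records for the splitting (the
field count, dof numbers, and the pStart/pEnd chart are made up here):

  PetscSection sec;
  PetscInt     p;

  PetscSectionCreate(PETSC_COMM_WORLD, &sec);
  PetscSectionSetNumFields(sec, 2);        /* e.g. velocity and pressure */
  PetscSectionSetChart(sec, pStart, pEnd);
  for (p = pStart; p < pEnd; ++p) {
    PetscSectionSetDof(sec, p, 3);         /* total dof on this point */
    PetscSectionSetFieldDof(sec, p, 0, 2); /* dof belonging to field 0 */
    PetscSectionSetFieldDof(sec, p, 1, 1); /* dof belonging to field 1 */
  }
  PetscSectionSetUp(sec);
  /* the field split can then be read straight out of sec */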
Matt
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener