[petsc-dev] PetscFEM routines missing user data arguments
Geoffrey Irving
irving at naml.us
Fri Nov 15 11:42:32 CST 2013
On Thu, Nov 14, 2013 at 6:21 PM, Matthew Knepley <knepley at gmail.com> wrote:
> On Thu, Nov 14, 2013 at 8:14 PM, Geoffrey Irving <irving at naml.us> wrote:
>>
>> On Thu, Nov 14, 2013 at 6:03 PM, Matthew Knepley <knepley at gmail.com>
>> wrote:
>> > On Thu, Nov 14, 2013 at 7:43 PM, Geoffrey Irving <irving at naml.us> wrote:
>> >>
>> >> It doesn't look like there's currently a way to pass user data down to
>> >> the various fields in PetscFEM. Can I add an extra argument to each
>> >> function, or is that part of the API going to change in a way that
>> >> would obviate the extra argument?
>> >
>> >
>> > That is by design. Here is the rationale. I want to be able to use CUDA
>> > or OpenCL
>> > to evaluate these low-level FEM integration routines, so you can't pass
>> > in arbitrary context.
>> >
>> > I really don't think we should need arbitrary information in the
>> > lowest-level integration routines. Right now you can pass in
>> > information using auxiliary fields. How would you use other stuff?
>>
>> Well, you're probably not going to like application 1, which is
>> passing around PyObject*'s so I can write little FEM explorations in
>> numpy. Is the auxiliary field support the right thing to use for
>> constant fields, or for integer-valued constant fields?
>
>
> I should explain further. Here is the hierarchy:
>
> SNESComputeFunction (global)
> --> SNESComputeFunction_DMLocal (l2g)
> --> DMPlexComputeResidualFEM (local)
> --> PetscFEIntegrateResidual (chunk of cells)
> --> kernel (batch of cells)
>
> I can be talked into a context all the way down to the kernel, which only integrates a few cells
> (like 32), but I would think that what you would do is write another PetscFE that calls Python
> (I have one for the CPU and one for OpenCL), and it has a context.
Thanks for the explanation, that helps. And apologies for my
denseness: somehow I read the functions I was imagining adding a void*
to as having another layer of looping, when in fact they are called
once per quadrature point. Obviously I need a slower, more vectorized
version to send data to Python as numpy arrays. Since another version
is required anyway, I will probably abandon my plan to write Python
unit tests (I only considered it because I hallucinated that it would
be trivial), so I'm good to go.
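
For concreteness, here is roughly the batched shape I would want on the
Python side; the names and argument layout are hypothetical, just to
illustrate passing whole quadrature batches instead of single points:

  #include <petscsys.h>

  /* Hypothetical batched form of a pointwise residual function: instead
     of being called once per quadrature point, it receives arrays over
     Nq points, so the whole batch could be wrapped as numpy arrays and
     handed to a Python callback stored in ctx (e.g. a PyObject*). */
  typedef void (*BatchedF0Fn)(PetscInt Nq,               /* quadrature points in the batch */
                              PetscInt dim,              /* spatial dimension */
                              PetscInt Nc,               /* field components */
                              const PetscScalar u[],     /* [Nq*Nc] field values */
                              const PetscScalar gradU[], /* [Nq*Nc*dim] field gradients */
                              const PetscReal x[],       /* [Nq*dim] quadrature coordinates */
                              PetscScalar f0[],          /* [Nq*Nc] output residual density */
                              void *ctx);                /* user context, e.g. a PyObject* */
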
Getting back to actual physics examples, what would be the right way
to implement a uniform shell pressure term in this context? In other
words, the force density at a point is p*N, where p is a pressure that
may change between solves but is constant in space, and N is the shell
surface normal. Should I just use a global variable?
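
Concretely, the global variable option would look something like the
sketch below; the pointwise signature is simplified rather than the
real PetscFEM one, and the surface normal is assumed to be handed in
at each quadrature point:

  #include <petscsys.h>

  /* Spatially constant shell pressure, set once before assembly.  This
     is the "global variable" option; the alternative would be a
     constant auxiliary field carrying p. */
  static PetscReal shell_pressure = 0.0;

  /* Simplified pointwise residual term at one quadrature point:
     f0 = p*N, where N is the shell surface normal at that point
     (assumed to be supplied by the caller). */
  static void f0_shell_pressure(PetscInt dim, const PetscReal normal[],
                                PetscScalar f0[])
  {
    PetscInt d;
    for (d = 0; d < dim; ++d) f0[d] = shell_pressure * normal[d];
  }
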
It seems like a slight design blemish that the OpenCL version owns the
OpenCL variants of its compute routines but accepts the CPU versions as
arguments; feel free to ignore that as nitpicking.
Geoffrey