[petsc-dev] PetscFEM routines missing user data arguments
Matthew Knepley
knepley at gmail.com
Fri Nov 15 11:59:41 CST 2013
On Fri, Nov 15, 2013 at 11:42 AM, Geoffrey Irving <irving at naml.us> wrote:
> On Thu, Nov 14, 2013 at 6:21 PM, Matthew Knepley <knepley at gmail.com> wrote:
> > On Thu, Nov 14, 2013 at 8:14 PM, Geoffrey Irving <irving at naml.us> wrote:
> >>
> >> On Thu, Nov 14, 2013 at 6:03 PM, Matthew Knepley <knepley at gmail.com> wrote:
> >> > On Thu, Nov 14, 2013 at 7:43 PM, Geoffrey Irving <irving at naml.us> wrote:
> >> >>
> >> >> It doesn't look like there's currently a way to pass user data down to
> >> >> the various fields in PetscFEM. Can I add an extra argument to each
> >> >> function, or is that part of the API going to change in a way that
> >> >> would obviate the extra argument?
> >> >
> >> >
> >> > That is by design. Here is the rationale. I want to be able to use
> >> > CUDA or OpenCL to evaluate these low-level FEM integration routines,
> >> > so you can't pass in arbitrary context.
> >> >
> >> > I really don't think we should need arbitrary information at the
> >> > lowest level of integration. Right now you can pass in information
> >> > using auxiliary fields. How would you use other stuff?
> >>
> >> Well, you're probably not going to like application 1, which is
> >> passing around PyObject*'s so I can write little FEM explorations in
> >> numpy. Is the field support the right thing to do for constant
> >> fields, or integer-valued constant fields?
> >
> >
> > I should explain further. Here is the hierarchy:
> >
> > SNESComputeFunction (global)
> > --> SNESComputeFunction_DMLocal (l2g)
> > --> DMPlexComputeResidualFEM (local)
> > --> PetscFEIntegrateResidual (chunk of cells)
> > --> kernel (batch of cells)
> >
> > I can be talked into a context all the way down to the kernel, which only
> > integrates a few cells (like 32), but I would think that what you would do
> > is write another PetscFE that calls Python (I have one for the CPU and one
> > for OpenCL), and it has a context.
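As a rough, hypothetical sketch (not PETSc's actual kernel interface), this is
the shape such a batched integration kernel takes: it sees only flat arrays of
field coefficients, auxiliary coefficients, cell geometry, and quadrature data
for a small batch of cells, so there is nowhere to thread a host-side void*
context. All names and arguments below are illustrative only.

    /* Hypothetical illustration, not the real PetscFE kernel interface:
       a batched integration kernel of the kind that could be generated for
       CPU, CUDA, or OpenCL.  Everything it touches is a flat array, so
       application data has to arrive as (auxiliary) field coefficients
       rather than through a user context pointer. */
    static void integrate_batch(int Ncells, int Nbasis, int Nq,
                                const double coeff[],    /* field coefficients, Ncells*Nbasis */
                                const double auxCoeff[], /* auxiliary field coefficients */
                                const double geom[],     /* per-cell Jacobian data */
                                const double quadW[],    /* quadrature weights, Nq */
                                double elemVec[])        /* element vectors out, Ncells*Nbasis */
    {
      for (int c = 0; c < Ncells; ++c) {
        for (int q = 0; q < Nq; ++q) {
          /* evaluate the weak form at quadrature point q of cell c using only
             the arrays above; nothing here can follow a host pointer, which is
             what rules out an arbitrary user context on the GPU */
        }
      }
    }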
>
> Thanks for the explanation, that helps. And apologies for my denseness:
> somehow I read the functions I was imagining adding a void* to as having
> another layer of loop, when actually they are called once per quadrature
> point. Obviously I need a slower, more vectorized version
>
I don't understand the vector comment.
> to send to Python as numpy arrays. Since another version is required,
> I will probably abandon my plan to write Python unit tests (I only
> considered it because I hallucinated that it would be trivial), so I'm
> good to go.
>
> Getting back to actual physics examples, what would be the right way
> to implement a uniform shell pressure term in this context? In other
> words, the force density at a point is pN, where p is a variable but
> globally constant pressure and N is the shell surface normal. Should
> I just use a global variable?
>
I would probably use a field since then I can make it non-constant if I want,
and memory is not usually an issue.
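For the pressure term, a minimal sketch of how that might look: in present-day
PETSc the auxiliary values reach the pointwise residual functions through the
a[] argument, so p can travel as a single-component auxiliary field. The
calling sequence below is the current boundary pointwise form, which differs
from the 2013-era interface discussed in this thread; f0_pressure is a made-up
name, and the sketch assumes the pressure acts on a surface where the facet
normal n[] is available.

    /* Sketch only: present-day PetscDS boundary pointwise calling sequence
       (the 2013 interface differed); f0_pressure is a hypothetical name.
       The pressure p arrives as auxiliary field 0, so no user context is
       needed, and p can later be made non-constant without touching this. */
    static void f0_pressure(PetscInt dim, PetscInt Nf, PetscInt NfAux,
                            const PetscInt uOff[], const PetscInt uOff_x[],
                            const PetscScalar u[], const PetscScalar u_t[],
                            const PetscScalar u_x[],
                            const PetscInt aOff[], const PetscInt aOff_x[],
                            const PetscScalar a[], const PetscScalar a_t[],
                            const PetscScalar a_x[],
                            PetscReal t, const PetscReal x[], const PetscReal n[],
                            PetscInt numConstants, const PetscScalar constants[],
                            PetscScalar f0[])
    {
      const PetscScalar p = a[aOff[0]];   /* pressure from the auxiliary field */

      for (PetscInt d = 0; d < dim; ++d)
        f0[d] += p * n[d];                /* force density p N; the sign depends
                                             on how the residual is posed */
    }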
> It seems like a slight design blemish that the OpenCL version owns the
> OpenCL versions of its compute routines but accepts the CPU versions
> as arguments, but feel free to ignore that as nitpickiness.
That is absolute crap, I agree. This is a stopgap until we learn how to have
a unified representation.
Matt
>
> Geoffrey
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener