[petsc-dev] PetscFEM routines missing user data arguments

Matthew Knepley knepley at gmail.com
Fri Nov 15 12:22:25 CST 2013


On Fri, Nov 15, 2013 at 12:14 PM, Geoffrey Irving <irving at naml.us> wrote:

> On Fri, Nov 15, 2013 at 9:59 AM, Matthew Knepley <knepley at gmail.com>
> wrote:
> > On Fri, Nov 15, 2013 at 11:42 AM, Geoffrey Irving <irving at naml.us>
> > wrote:
> >>
> >> On Thu, Nov 14, 2013 at 6:21 PM, Matthew Knepley <knepley at gmail.com>
> >> wrote:
> >> > On Thu, Nov 14, 2013 at 8:14 PM, Geoffrey Irving <irving at naml.us>
> >> > wrote:
> >> >>
> >> >> On Thu, Nov 14, 2013 at 6:03 PM, Matthew Knepley <knepley at gmail.com>
> >> >> wrote:
> >> >> > On Thu, Nov 14, 2013 at 7:43 PM, Geoffrey Irving <irving at naml.us>
> >> >> > wrote:
> >> >> >>
> >> >> >> It doesn't look like there's currently a way to pass user data
> >> >> >> down to the various fields in PetscFEM.  Can I add an extra
> >> >> >> argument to each function, or is that part of the API going to
> >> >> >> change in a way that would obviate the extra argument?
> >> >> >
> >> >> >
> >> >> > That is by design. Here is the rationale. I want to be able to
> >> >> > use CUDA or OpenCL to evaluate these low-level FEM integration
> >> >> > routines, so you can't pass in arbitrary context.
> >> >> >
> >> >> > I really don't think we should need arbitrary information at the
> >> >> > lowest level of integration. Right now you can pass in
> >> >> > information using auxiliary fields. How would you use other
> >> >> > stuff?
> >> >>
> >> >> Well, you're probably not going to like application 1, which is
> >> >> passing around PyObject*'s so I can write little FEM explorations in
> >> >> numpy.  Is the auxiliary field support the right thing to use for
> >> >> constant fields, or integer-valued constant fields?
> >> >
> >> >
> >> > I should explain further. Here is the hierarchy:
> >> >
> >> >   SNESComputeFunction (global)
> >> >   --> SNESComputeFunction_DMLocal (l2g)
> >> >     --> DMPlexComputeResidualFEM (local)
> >> >       --> PetscFEIntegrateResidual (chunk of cells)
> >> >         --> kernel (batch of cells)
> >> >
> >> > I can be talked into a context all the way down to the kernel, which
> >> > only integrates a few cells (like 32), but I would think that what
> >> > you would do is write another PetscFE that calls Python (I have one
> >> > for the CPU and one for OpenCL), and it has a context.
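> >> >
> >> > Schematically, each level just loops over the next-smaller unit of
> >> > work (this is only the loop structure, not the actual code, and all
> >> > names here are illustrative):
> >> >
> >> >     PetscInt chunk, batch, cell, q;
> >> >     for (chunk = 0; chunk < Nchunks; ++chunk)       /* DMPlexComputeResidualFEM */
> >> >       for (batch = 0; batch < Nbatches; ++batch)    /* PetscFEIntegrateResidual */
> >> >         for (cell = 0; cell < batchSize; ++cell)    /* kernel (batchSize ~ 32)  */
> >> >           for (q = 0; q < Nq; ++q)                  /* f0/f1 at each quad point */
> >> >             ;                                       /* pointwise physics here   */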
> >>
> >> Thanks for the explanation, that helps.  And apologies for my
> >> denseness: somehow I misread the functions I was imagining adding a
> >> void* to as having another layer of looping, when actually they are
> >> called once per quadrature point.  Obviously I need a slower, more
> >> vectorized version
> >
> > Don't understand the vector comment.
>
> Replace
>
>     void f0_u(const PetscScalar u[], PetscScalar f0[], ...)
>     {
>       f0[0] = 4.0;
>     }
>
> with
>
>     void f0_u(const PetscInt n, const PetscScalar u[], PetscScalar f0[], ...)
>     {
>       PetscInt i;
>       for (i = 0; i < n; i++)
>         f0[i] = 4.0;
>     }
>
> and then call it either once with data for all quadrature points or a
> few times for batches.  I don't really think it's worth doing this,
> when I could instead spend time setting up OpenCL kernels.  Again, I
> somehow hallucinated that it was already like this, which is why I
> thought of writing a Python testbed.
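>
> Concretely, a caller would then do either (B, Nq, and the trailing
> arguments here are illustrative, matching the sketch above)
>
>     f0_u(Nq, u, f0, ...);                    /* all quadrature points at once */
>
> or
>
>     for (b = 0; b < Nq; b += B)              /* a few calls over batches */
>       f0_u(PetscMin(B, Nq - b), u + b, f0 + b, ...);
>
> assuming for simplicity one scalar component per quadrature point.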
>
> >> to send to Python as numpy arrays.  Since another version is required,
> >> I will probably abandon my plan to write Python unit tests (I only
> >> considered it because I hallucinated that it would be trivial), so I'm
> >> good to go.
> >>
> >> Getting back to actual physics examples, what would be the right way
> >> to implement a uniform shell pressure term in this context?  In other
> >> words, the force density at a point is pN, where p is a variable but
> >> globally constant pressure and N is the shell surface normal.  Should
> >> I just use a global variable?
> >
> >
> > I would probably use a field, since then I can make it non-constant if
> > I want, and memory is not usually an issue.
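> >
> > Schematically, the pointwise function would then read the pressure out
> > of the auxiliary field. Everything below is illustrative: assume a 3D
> > shell, and that the auxiliary values a[] and the surface normal n[]
> > arrive through the elided arguments:
> >
> >     void f0_pressure(const PetscScalar u[], const PetscScalar a[],
> >                      const PetscReal n[], PetscScalar f0[], ...)
> >     {
> >       PetscInt d;
> >       for (d = 0; d < 3; ++d)
> >         f0[d] = a[0] * n[d];   /* force density p N, with p = a[0] */
> >     }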
>
> Will do.
>
> >> It seems like a slight design blemish that the OpenCL version owns the
> >> OpenCL versions of its compute routines but accepts the CPU versions
> >> as arguments, but feel free to ignore that as nitpickiness.
> >
> > That is absolute crap, I agree. This is a stopgap until we learn how
> > to have a unified representation.
>
> Sounds good.  The most incremental solution would just be to move
> control over those functions down to PetscFE.  Indeed, they pretty
> much have to go at whichever level is supposed to know about OpenCL
> kernels and the like.


The problem here is that the DM is supposed to know about the physics, not
the FE.

   Matt


>
> Geoffrey
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener