[petsc-dev] DMPlex + P_1 FE interpolation

Matthew Knepley knepley at gmail.com
Mon Feb 17 12:37:29 CST 2020


On Mon, Feb 17, 2020 at 11:43 AM Pierre Jolivet <pierre.jolivet at enseeiht.fr>
wrote:

> While I wait for a fix, I’m trying to set up DMPlex in our code in small
> increments.
> I’m sorry for this very naive question, but I have a hard time figuring out
> the DMPlex machinery for my specific needs.
> What I need right now, after calling DMCreateFromFile and DMPlexDistribute
> with an overlap of 1, is to get local DMPlexes (defined on PETSC_COMM_SELF,
> or at least with a local numbering of the elements) together with, for each
> element, the (original) owner rank. Then, my plan is to convert these
> DMPlexes into the format needed by our code, plus a P_0 function defined
> locally (with overlap 1) holding the value of the partitioning. That’d be a
> start.
>

1) A DMPlex only ever has a local numbering, unless you explicitly create a
global numbering

2) I do not store the original rank of anything; it seems like just an
intermediary to what you actually want. What is it that you are trying to do?
One thing you could do is save the original migrationSF from
DMPlexDistribute() and look up the sending rank in that.
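For example, a minimal sketch (untested, error checking trimmed, assuming dm
is the original DM):

  PetscSF            migrationSF;
  DM                 dmDist;
  PetscInt           nroots, nleaves, l;
  const PetscInt    *ilocal;
  const PetscSFNode *iremote;

  ierr = DMPlexDistribute(dm, 1, &migrationSF, &dmDist);CHKERRQ(ierr);
  /* Leaves live on the new distribution; the remote side points back to the original owner */
  ierr = PetscSFGetGraph(migrationSF, &nroots, &nleaves, &ilocal, &iremote);CHKERRQ(ierr);
  for (l = 0; l < nleaves; ++l) {
    const PetscInt point = ilocal ? ilocal[l] : l; /* local point number after distribution */
    const PetscInt from  = iremote[l].rank;        /* rank that sent (originally owned) this point */
    /* record (point, from) however you like, e.g. in a P_0 field */
  }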


> Do I need to call DMPlexCreateTwoSidedProcessSF (some of the arguments are
> missing from the man page,
> https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/DMPlexCreateTwoSidedProcessSF.html) with
> the SF returned by DMPlexDistribute? Or DMCreateSubDomainDM_Plex (which is
> not accessible from outside of PETSc, I think)? Or am I looking in the
> wrong place and overcomplicating all this?
>

I think you may be. I am still trying to understand what you want to do.

  Thanks,

    Matt


> Thank you in advance for your help,
> Pierre
>
> On 10 Feb 2020, at 4:30 PM, Matthew Knepley <knepley at gmail.com> wrote:
>
> On Mon, Feb 10, 2020 at 6:49 AM Pierre Jolivet <pierre.jolivet at enseeiht.fr>
> wrote:
>
>>
>>
>> On 10 Feb 2020, at 1:08 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>
>> On Sun, Feb 9, 2020 at 11:11 PM Pierre Jolivet <
>> pierre.jolivet at enseeiht.fr> wrote:
>>
>>>
>>>
>>> On 10 Feb 2020, at 6:20 AM, Matthew Knepley <knepley at gmail.com> wrote:
>>>
>>> On Sun, Feb 9, 2020 at 3:23 PM Pierre Jolivet <
>>> pierre.jolivet at enseeiht.fr> wrote:
>>>
>>>> Hello,
>>>> I’ve a hard time answering the following DMPlex questions by just
>>>> looking at some of the examples and manual.
>>>> Considering two DMPlex dm and dma, as in
>>>> petsc/src/dm/impls/plex/examples/tests/ex19.c, I’d like to interpolate a
>>>> simple P_1 FE function from dm to dma.
>>>> The DMCreateInterpolation call gives me:
>>>> [0]PETSC ERROR: Invalid argument
>>>> [0]PETSC ERROR: Number of fine indices 0 != 4 dual basis vecs
>>>> […]
>>>>
>>>
>>> It looks like your fine grid has no discretization, since 0 is
>>> numFIndices from
>>>
>>>   ierr = DMPlexGetClosureIndices(dmf, fsection, globalFSection, cell,
>>> &numFIndices, &findices, NULL);CHKERRQ(ierr);
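>>> For reference, a P1 space would normally be attached to the fine DM with
>>> something along these lines (a sketch, not the ex19 code):
>>>
>>>   PetscFE  fe;
>>>   PetscInt dim;
>>>
>>>   ierr = DMGetDimension(dmf, &dim);CHKERRQ(ierr);
>>>   /* 1 component, simplicial cells, default quadrature */
>>>   ierr = PetscFECreateDefault(PetscObjectComm((PetscObject) dmf), dim, 1, PETSC_TRUE, NULL, -1, &fe);CHKERRQ(ierr);
>>>   ierr = DMSetField(dmf, 0, NULL, (PetscObject) fe);CHKERRQ(ierr);
>>>   ierr = DMCreateDS(dmf);CHKERRQ(ierr);
>>>   ierr = PetscFEDestroy(&fe);CHKERRQ(ierr);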
>>>
>>>
>>>> [0]PETSC ERROR: #1 DMPlexComputeInterpolatorGeneral() line 2508 in
>>>> petsc/src/dm/impls/plex/plexfem.c
>>>> [0]PETSC ERROR: #2 DMCreateInterpolation_Plex() line 7688 in
>>>> petsc/src/dm/impls/plex/plex.c
>>>> [0]PETSC ERROR: #3 DMCreateInterpolation() line 1139 in
>>>> petsc/src/dm/interface/dm.c
>>>> But the DMs look OK, don’t they, cf. below?
>>>> So I have three simple questions:
>>>> 1) are all tests at the bottom of ex19.c broken because of PRAgMaTIc or
>>>> because of DMPlex currently not supporting some operations? (I’m not using
>>>> PRAgMaTIc to do mesh adaptation, so I was hoping to not run into an error)
>>>>
>>>
>>> I don't think it's broken.
>>>
>>>
>>> Oh, OK. Could you help me figure out what the problem is then, e.g., with
>>> a slight (command-line) variation of test #6, please?
>>>
>>
>> Sure. I am at SIAM this week, but as soon as I can I will get you the fix.
>>
>>
>> Excellent.
>>
>>
>>
>>> $ cd src/dm/impls/plex/examples/tests
>>> $ git diff ex19.c
>>> $ make ex19
>>> $ mpirun ./ex19 -dim 3 -nbrVerEdge 10 -dm_plex_separate_marker 0 -met 0
>>> -hmin 0.1 -hmax 0.3 -init_dm_view -adapt_dm_view -do_L2 -petscspace_degree
>>> 1 -petscfe_default_quadrature_order 1 -dm_plex_hash_location
>>> [0]PETSC ERROR: Nonconforming object sizes
>>> [0]PETSC ERROR: The section point closure size 0 != dual space dimension
>>> 4
>>> […]
>>> [0]PETSC ERROR: #1 DMProjectLocal_Generic_Plex() line 633 in
>>> src/dm/impls/plex/plexproject.c
>>> [0]PETSC ERROR: #2 DMProjectFunctionLocal_Plex() line 771 in
>>> src/dm/impls/plex/plexproject.c
>>> [0]PETSC ERROR: #3 DMProjectFunctionLocal() line 7809 in
>>> src/dm/interface/dm.c
>>> [0]PETSC ERROR: #4 DMProjectFunction() line 7766 in src/dm/interface/dm.c
>>>
>>> If I comment the DMProjectFunction() call, I end up with the same error
>>> as in my first message in DMCreateInterpolation().
>>>
>>>> 2) is DMCreateInterpolation + MatInterpolate the correct way of
>>>> transferring one Vec from a DMPlex onto another?
>>>>
>>>
>>> That is the intent.
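>>> In outline (a sketch, assuming dmc and dmf both already have a
>>> discretization attached):
>>>
>>>   Mat Interp;
>>>   Vec uc, uf;
>>>
>>>   ierr = DMCreateGlobalVector(dmc, &uc);CHKERRQ(ierr);
>>>   ierr = DMCreateGlobalVector(dmf, &uf);CHKERRQ(ierr);
>>>   ierr = DMCreateInterpolation(dmc, dmf, &Interp, NULL);CHKERRQ(ierr);
>>>   /* fill uc, then uf <- Interp * uc */
>>>   ierr = MatInterpolate(Interp, uc, uf);CHKERRQ(ierr);
>>>   ierr = MatDestroy(&Interp);CHKERRQ(ierr);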
>>>
>>>
>>>> 3) if yes, judging by the names of the arguments of
>>>> DMPlexComputeInterpolatorGeneral, dmc and dmf, could you comment on the
>>>> performance of this function for unrelated meshes, e.g., if both DMs are
>>>> “fine” rather than one coarse and the other fine (albeit non-nested), for
>>>> simple P_k spaces?
>>>>
>>>
>>> In general, it is going to be horrible. Here is what it does: locate the
>>> fine quadrature points in the coarse grid and interpolate to them. This
>>> quadrature can have huge errors if it falls across multiple cells. This is
>>> why the nested version works perfectly, and also why Patrick Farrell and
>>> James Maddison have the Supermesh library, which makes a refinement of the
>>> mesh until the quadrature is accurate everywhere. That way they guarantee
>>> that at least the zeroth moment is preserved.
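>>> The point location this relies on is exposed as DMLocatePoints(); a rough
>>> sketch (coordinate Vec setup and handling of unlocated points omitted):
>>>
>>>   PetscSF cellSF = NULL;
>>>
>>>   /* coords holds dim * Np interleaved point coordinates */
>>>   ierr = DMLocatePoints(dmc, coords, DM_POINTLOCATION_NONE, &cellSF);CHKERRQ(ierr);
>>>   /* cellSF records, for each point, the cell containing it (if found) */
>>>   ierr = PetscSFDestroy(&cellSF);CHKERRQ(ierr);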
>>>
>>>
>>> Two subquestions if I may:
>>> 1) are there any plans to have this integrated through an external
>>> package?
>>>
>>
>> In the absence of a plan, there is a hope. I would really like it to
>> happen.
>>
>>
>>> 2) if I understand you correctly, you answered about the numerical
>>> performance of the function. I can live with high interpolation errors if
>>> both meshes are “far” from each other. I was mostly interested in the
>>> parallel performance of the function.
>>>
>>
>> Everything is purely local except for point location. Since it has never
>> really been tested in this mode, I am sure the
>> scaling can be bad. I believe the default is to extrapolate if the point
>> is not covered, which makes sense for mostly
>> matching meshes. There is parallel point location, but it is intended for
>> a few points where we are sampling the solution,
>> rather than lots of points everywhere which you would get for
>> non-matching meshes with different distributions. Could
>> you say what kind of situation you are trying to optimize for?
>>
>>
>> As a disclaimer, I’m not trying to optimize anything yet. Just assessing
>> the potential benefits of using DMPlex.
>> We have many solvers using mesh adaptation, so I’ll give you just one
>> specific example.
>> When doing shape optimization, e.g.,
>> http://www.cmap.polytechnique.fr/~florian.feppon/videos/04_08.webm, we
>> redistribute an adapted mesh and reinterpolate a level-set.
>>
>
> Very cool.
>
>
>> The only missing piece to have something fully distributed, now that the
>> mesh adaptation may be done in parallel, is the interpolator; we basically
>> do (1) from Dave’s email.
>> It would probably not be extremely hard to implement this better, but if I
>> can leverage DMPlex to take care of mesh (re)distribution + solution
>> interpolation, I’d rather invest time in hooking this into our code than in
>> reinventing the wheel.
>>
>
> I think you can. I should have some time to get your ex19 thing going.
>
>   Thanks,
>
>     Matt
>
>
>> Thanks,
>> Pierre
>>
>>   Thanks,
>>
>>     Matt
>>
>>
>>> Thanks,
>>> Pierre
>>>
>>>   Thanks,
>>>
>>>     Matt
>>>
>>>
>>>> Thanks in advance for your help,
>>>> Pierre
>>>>
>>>> $ mpirun -n 1 ./ex19 -msh in.msh -init_dm_view ::ascii_info
>>>> -adapt_dm_view ::ascii_info -mat_view ::ascii_info -do_L2
>>>> -petscspace_degree 1
>>>> DM Object: DMinit 1 MPI processes
>>>>   type: plex
>>>> DMinit in 3 dimensions:
>>>>   0-cells: 1331
>>>>   1-cells: 7930
>>>>   2-cells: 12600
>>>>   3-cells: 6000
>>>> Labels:
>>>>   depth: 4 strata with value/size (0 (1331), 1 (7930), 2 (12600), 3
>>>> (6000))
>>>>   Face Sets: 6 strata with value/size (4 (200), 1 (200), 5 (200), 2
>>>> (200), 3 (200), 6 (200))
>>>>   Cell Sets: 1 strata with value/size (0 (6000))
>>>> DM Object: DMadapt (adapt_) 1 MPI processes
>>>>   type: plex
>>>> DMadapt in 3 dimensions:
>>>>   0-cells: 2905
>>>>   1-cells: 18888
>>>>   2-cells: 31368
>>>>   3-cells: 15384
>>>> Labels:
>>>>   depth: 4 strata with value/size (0 (2905), 1 (18888), 2 (31368), 3
>>>> (15384))
>>>>   Face Sets: 6 strata with value/size (1 (200), 4 (200), 6 (200), 2
>>>> (200), 5 (200), 3 (200))
>>>>   Cell Sets: 1 strata with value/size (0 (15384))
>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

