[petsc-users] Unique number in each element of a DMPlex mesh
Berend van Wachem
berend.vanwachem at ovgu.de
Tue Jan 23 06:14:44 CST 2024
Dear Matt,
Please find attached a test for writing a DMPlex with hanging nodes,
which is based on a refined DMForest. I have linked the code against the
current main branch of PETSc.
When the DMPlex gets written to disc, the code crashes with
[0]PETSC ERROR: Unknown discretization type for field 0
although I specifically set the discretization for the DMPlex.
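For reference, the discretization setup and the write are along the
following lines (a simplified sketch rather than the attached code
itself; the field layout, function name, and file name are illustrative,
and it assumes a PETSc build with HDF5):

#include <petscdmplex.h>
#include <petscviewerhdf5.h>

/* Simplified sketch: attach a single cell-centred field to the DMPlex
   that was converted from the DMForest, then write mesh and data to
   HDF5. "plex" is the converted DMPlex, "sol" a global Vec on it. */
static PetscErrorCode WritePlexWithField(DM plex, Vec sol)
{
  PetscFV     fv;
  PetscViewer viewer;
  PetscInt    dim;

  PetscFunctionBeginUser;
  PetscCall(DMGetDimension(plex, &dim));
  PetscCall(PetscFVCreate(PetscObjectComm((PetscObject)plex), &fv));
  PetscCall(PetscFVSetNumComponents(fv, 1));
  PetscCall(PetscFVSetSpatialDimension(fv, dim));
  PetscCall(DMSetField(plex, 0, NULL, (PetscObject)fv));
  PetscCall(PetscFVDestroy(&fv));
  PetscCall(DMCreateDS(plex));  /* build the discrete system */

  PetscCall(PetscViewerHDF5Open(PetscObjectComm((PetscObject)plex),
                                "plex.h5", FILE_MODE_WRITE, &viewer));
  PetscCall(DMView(plex, viewer));  /* the reported error occurs during the write */
  PetscCall(VecView(sol, viewer));
  PetscCall(PetscViewerDestroy(&viewer));
  PetscFunctionReturn(PETSC_SUCCESS);
}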
The DMPlex based on a DMForest has "double" faces where there is a jump
in cell size: the larger cell has one large face towards the refined
cells, and the four adjacent smaller cells each have a face of their own.
I have written a function that removes the large face in such instances
and rebuilds the DM, which seems to work. However, I can only do this on
a single process, and I therefore lose the correspondence between the DM
and the locations of the data points of the vector.
I can open an issue on the PETSc GitLab, if you prefer.
Thanks and best regards,
Berend.
On 1/22/24 20:30, Matthew Knepley wrote:
> On Mon, Jan 22, 2024 at 2:26 PM Berend van Wachem
> <berend.vanwachem at ovgu.de> wrote:
>
> Dear Matt,
>
> The problem is that I haven't figured out how to write a polyhedral
> DMPlex in parallel. So, currently, I can write the Vec data in
> parallel, but I can write the cones for the cells/faces/edges/nodes of
> the mesh to a file only from a single process (after gathering the
> DMPlex onto one process).
>
>
> Ah shoot. Can you send me a polyhedral mesh (or code to generate one) so
> I can fix the parallel write problem? Or maybe it is already an issue
> and I forgot?
>
> For the restart, I then read the cone information from the file on one
> process, recreate the DMPlex, and redistribute it. In this scenario,
> the Vec data I read in (in parallel) will not match the correct cells
> of the DMPlex. Hence, I need to put it in the right place afterwards.
>
>
> Yes, then searching makes sense. You could call DMLocatePoints(), but
> maybe you are doing that.
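(For what it is worth, the DMLocatePoints() route I understand you to
mean would look roughly like the sketch below, with the stored
cell-centre coordinates packed into a coordinate Vec; the packing, the
function name, and the handling of the resulting SF are my own
assumptions.)

#include <petscdmplex.h>

/* Sketch: map saved cell-centre coordinates (dim values per point,
   stored contiguously in "coords") to local cells of the redistributed
   DMPlex. Layout of "coords" and the error handling are assumptions. */
static PetscErrorCode LocateSavedCentres(DM dm, PetscInt npoints, PetscScalar coords[])
{
  Vec                points;
  PetscSF            cellSF = NULL;
  const PetscSFNode *cells;
  const PetscInt    *foundPoints;
  PetscInt           dim, nfound;

  PetscFunctionBeginUser;
  PetscCall(DMGetCoordinateDim(dm, &dim));
  /* pack the query coordinates into a Vec with block size = dim */
  PetscCall(VecCreateSeqWithArray(PETSC_COMM_SELF, dim, npoints * dim, coords, &points));
  PetscCall(DMLocatePoints(dm, points, DM_POINTLOCATION_NONE, &cellSF));
  /* the SF maps each query point to the local cell containing it
     (index -1 if the point was not found on this rank) */
  PetscCall(PetscSFGetGraph(cellSF, NULL, &nfound, &foundPoints, &cells));
  for (PetscInt p = 0; p < nfound; ++p) {
    const PetscInt qp   = foundPoints ? foundPoints[p] : p; /* query index */
    const PetscInt cell = cells[p].index;                   /* local cell  */
    if (cell >= 0) {
      /* ... copy the saved value for query point qp into cell's entry ... */
      (void)qp;
    }
  }
  PetscCall(PetscSFDestroy(&cellSF));
  PetscCall(VecDestroy(&points));
  PetscFunctionReturn(PETSC_SUCCESS);
}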
>
> Thanks,
>
> Matt
>
> Best, Berend.
>
> On 1/22/24 20:03, Matthew Knepley wrote:
> > On Mon, Jan 22, 2024 at 1:57 PM Berend van Wachem
> > <berend.vanwachem at ovgu.de> wrote:
> >
> > Dear Matt,
> >
> > Thanks for your quick response.
> > I have a DMPlex with a polyhedral mesh, and have defined a number of
> > vectors with data at the cell center. I have generated data for a
> > number of timesteps, and I write the data for each point to a file
> > together with the (x,y,z) co-ordinate of the cell center.
> >
> > When I want to do a restart from the DMPlex, I recreate the DMPlex
> > with the polyhedral mesh, redistribute it, and for each cell center
> > find the corresponding (x,y,z) co-ordinate and insert the data that
> > corresponds to it. This is quite expensive, as it means I need to
> > compare doubles very often.
> >
> > But reading your response, this may not be a bad way of doing it?
> >
> >
> > It always seems to be a game of "what do you want to assume?". I tend
> > to assume that I wrote the DM and Vec in the same order, so when I
> > load them they match. This is how Firedrake I/O works, so that you
> > can load up on a different number of processes
> > (https://arxiv.org/abs/2401.05868).
> >
> > So, are you writing a Vec, and then redistributing and writing
> > another Vec? In the scheme above, you would have to write both DMs.
> > Are you trying to avoid this?
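(For context: I take the "write both DMs" scheme to refer to the DMPlex
HDF5 checkpointing interface described in the paper above; a rough
sketch of the write side, with illustrative names, follows. Passing the
same DM as the "section DM" assumes it already carries the data layout.)

#include <petscdmplex.h>
#include <petscviewerhdf5.h>

/* Rough sketch of "write the DM and the Vec together": save topology,
   coordinates, labels, the data layout (Section) and the global Vec in
   one HDF5 file, so a later run on a different number of ranks can load
   and redistribute them consistently. Names are illustrative. */
static PetscErrorCode CheckpointWrite(DM dm, Vec sol, const char fname[])
{
  PetscViewer viewer;

  PetscFunctionBeginUser;
  PetscCall(PetscViewerHDF5Open(PetscObjectComm((PetscObject)dm), fname,
                                FILE_MODE_WRITE, &viewer));
  PetscCall(PetscViewerPushFormat(viewer, PETSC_VIEWER_HDF5_PETSC));
  PetscCall(DMPlexTopologyView(dm, viewer));
  PetscCall(DMPlexCoordinatesView(dm, viewer));
  PetscCall(DMPlexLabelsView(dm, viewer));
  /* here the same dm is assumed to carry the Section of the cell data */
  PetscCall(DMPlexSectionView(dm, viewer, dm));
  PetscCall(PetscObjectSetName((PetscObject)sol, "solution")); /* dataset name; illustrative */
  PetscCall(DMPlexGlobalVectorView(dm, viewer, dm, sol));
  PetscCall(PetscViewerPopFormat(viewer));
  PetscCall(PetscViewerDestroy(&viewer));
  PetscFunctionReturn(PETSC_SUCCESS);
}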
> >
> > Thanks,
> >
> > Matt
> >
> > Thanks,
> >
> > Berend.
> >
> > On 1/22/24 18:58, Matthew Knepley wrote:
> > > On Mon, Jan 22, 2024 at 10:49 AM Berend van Wachem
> > > <berend.vanwachem at ovgu.de> wrote:
> > >
> > > Dear Petsc-Team,
> > >
> > > Is there a good way to define a unique integer number in each
> > > element (e.g. a cell) of a DMPlex mesh, which is in the same
> > > location, regardless of the number of processors or the
> > > distribution of the mesh over the processors?
> > >
> > > So, for instance, if I have a DMPlex box mesh, the top-right-front
> > > corner element (e.g. cell) will always have the same unique number,
> > > regardless of the number of processors the mesh is distributed over?
> > >
> > > I want to be able to link the results I have achieved with a mesh
> > > from DMPlex on a certain number of cores to the same mesh from a
> > > DMPlex on a different number of cores.
> > >
> > > Of course, I could make a tree based on the distance of each element
> > > to a certain point (based on the X,Y,Z co-ordinates of the element),
> > > and go through this tree in the same way and define an integer based
> > > on this, but that seems rather cumbersome.
> > >
> > >
> > > I think this is harder than it sounds. The distance will not work
> > > because it can be very degenerate. You could lexicographically sort
> > > the coordinates, but this is hard in parallel. It is fine if you
> > > are willing to gather everything on one process. You could put down
> > > a p4est, use the Morton order to number them since this is stable
> > > for a given refinement. And then within each box lexicographically
> > > sort the centroids. This is definitely cumbersome, but I cannot
> > > think of anything else. This also might have parallel problems
> > > since you need to know how much overlap you need to fill each box.
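(For reference, I imagine the "lexicographically sort the centroids"
step along the lines of the sketch below; the 3D assumption, the helper
names, and the comparison tolerance are mine.)

#include <petscdmplex.h>
#include <stdlib.h>

/* Sketch of sorting cell centroids lexicographically (assumes a 3D
   mesh; the tolerance and tie-breaking are assumptions, not PETSc API). */
typedef struct { PetscReal x[3]; PetscInt cell; } Centroid;

static int CompareCentroids(const void *a, const void *b)
{
  const Centroid *ca = (const Centroid *)a, *cb = (const Centroid *)b;
  const PetscReal tol = 1e-12; /* assumed coordinate tolerance */
  for (int d = 0; d < 3; ++d) {
    if (ca->x[d] < cb->x[d] - tol) return -1;
    if (ca->x[d] > cb->x[d] + tol) return 1;
  }
  return 0;
}

static PetscErrorCode SortCellCentroids(DM dm, Centroid **sorted, PetscInt *ncells)
{
  PetscInt cStart, cEnd;

  PetscFunctionBeginUser;
  PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd)); /* cells */
  PetscCall(PetscMalloc1(cEnd - cStart, sorted));
  for (PetscInt c = cStart; c < cEnd; ++c) {
    PetscReal vol;
    (*sorted)[c - cStart].cell = c;
    PetscCall(DMPlexComputeCellGeometryFVM(dm, c, &vol, (*sorted)[c - cStart].x, NULL));
  }
  *ncells = cEnd - cStart;
  qsort(*sorted, (size_t)(cEnd - cStart), sizeof(Centroid), CompareCentroids);
  PetscFunctionReturn(PETSC_SUCCESS);
}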
> > >
> > > Thanks,
> > >
> > > Matt
> > >
> > > Thanks and best regards, Berend.
> > >
> > > --
> > > What most experimenters take for granted before they begin their
> > > experiments is infinitely more interesting than any results to
> > > which their experiments lead.
> > > -- Norbert Wiener
> > >
> > > https://www.cse.buffalo.edu/~knepley/
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to
> > which their experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: dmsavetest.c
Type: text/x-csrc
Size: 6685 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20240123/8b5e9ac3/attachment.bin>