[petsc-users] Unique number in each element of a DMPlex mesh

Matthew Knepley knepley at gmail.com
Thu Feb 8 10:04:01 CST 2024


On Thu, Feb 8, 2024 at 9:54 AM Berend van Wachem <berend.vanwachem at ovgu.de>
wrote:

> Dear Matt,
>
> I have now written code to transform a DMPlex coming from a DMForest
> into a DMPlex of type DM_POLY, by removing the duplicate faces and
> edges. I can also successfully write this transformed DMPlex of type
> DM_POLY to a file.
>
> However, the code to perform the transformation only works on a single
> process. Therefore, I distribute the original DMForest to process 0 and
> then perform the transformation to obtain a DMPlex of type DM_POLY.
>
> Is there a way I can redistribute the transformed DMPlex back to the
> original distribution, based on how the cells of the original DMPlex
> were divided over the processes? Can I somehow "store" the cell
> distribution of a DMPlex, and later distribute back to this original
> distribution?
>

Yes, in fact that exact code exists and is tested. The "shell" type for
PetscPartitioner can take in the points intended for each process. As long
as point numbers do not change, this works fine.
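For example, here is a rough, untested sketch with placeholder names:
dmPoly for your serial DM_POLY mesh, and nRanks / nCellsForRank[] /
cellsForRank[] for the per-rank cell counts and cell points you recorded
from the original distribution:

    PetscPartitioner part;
    PetscSF          migrationSF;
    DM               dmRedist;

    /* Hand the shell partitioner the exact points destined for each rank */
    PetscCall(DMPlexGetPartitioner(dmPoly, &part));
    PetscCall(PetscPartitionerSetType(part, PETSCPARTITIONERSHELL));
    PetscCall(PetscPartitionerShellSetPartition(part, nRanks, nCellsForRank, cellsForRank));
    /* Redistribute with no overlap; the returned SF describes the migration */
    PetscCall(DMPlexDistribute(dmPoly, 0, &migrationSF, &dmRedist));

The SF returned by DMPlexDistribute() can then also be used to move cell
data along with the mesh.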

Second, I bet I could make your conversion routine parallel.

Third, I think Toby has fixed that other bug with coordinates (fingers
crossed). We are running the tests now.

  Thanks,

     Matt


> Thanks, best regards,
> Berend.
>
> P.S. If anyone is interested in the code to transform a DMForest into a
> DMPlex of type DM_POLY, I am more than happy to share it.
>
> On 1/22/24 20:30, Matthew Knepley wrote:
> > On Mon, Jan 22, 2024 at 2:26 PM Berend van Wachem
> > <berend.vanwachem at ovgu.de> wrote:
> >
> >     Dear Matt,
> >
> >     The problem is that I haven't figured out how to write a polyhedral
> >     DMPlex in parallel. So, currently, I can write the Vec data in
> >     parallel, but I can only write the cones for the
> >     cells/faces/edges/nodes of the mesh to a file from a single process
> >     (after gathering the DMPlex onto that process).
> >
> >
> > Ah shoot. Can you send me a polyhedral mesh (or code to generate one) so
> > I can fix the parallel write problem? Or maybe it is already an issue
> > and I forgot?
> >
> >     For the restart, I can then read the cone information from the file
> >     on one process, recreate the DMPlex, and then redistribute it. In
> >     this scenario, the Vec data I read in (in parallel) will not match
> >     the correct cells of the DMPlex, so I need to put it in the right
> >     place afterwards.
> >
> >
> > Yes, then searching makes sense. You could call DMLocatePoints(), but
> > maybe you are doing that.
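For what it's worth, here is a rough, untested sketch of that search; it
assumes "centers" is a Vec holding the saved (x,y,z) cell centers, blocked
by the spatial dimension, and "dm" is the redistributed mesh:

    PetscSF            cellSF = NULL;
    const PetscSFNode *found;
    PetscInt           nFound;

    /* Locate each saved cell center in the redistributed mesh */
    PetscCall(DMLocatePoints(dm, centers, DM_POINTLOCATION_NONE, &cellSF));
    PetscCall(PetscSFGetGraph(cellSF, NULL, &nFound, NULL, &found));
    /* found[i].index is the local cell containing center i; a negative
       value means the point was not found on this rank */
    PetscCall(PetscSFDestroy(&cellSF));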
> >
> >    Thanks,
> >
> >       Matt
> >
> >     Best, Berend.
> >
> >     On 1/22/24 20:03, Matthew Knepley wrote:
> >      > On Mon, Jan 22, 2024 at 1:57 PM Berend van Wachem
> >      > <berend.vanwachem at ovgu.de> wrote:
> >      >
> >      >     Dear Matt,
> >      >
> >      >     Thanks for your quick response.
> >      >     I have a DMPlex with a polyhedral mesh, and have defined a
> >      >     number of vectors with data at the cell center. I have
> >      >     generated data for a number of timesteps, and I write the
> >      >     data for each point to a file together with the (x,y,z)
> >      >     co-ordinate of the cell center.
> >      >
> >      >     When I want to do a restart from the DMPlex, I recreate the
> >      >     DMPlex with the polyhedral mesh, redistribute it, and for
> >      >     each cell center find the corresponding (x,y,z) co-ordinate
> >      >     and insert the data that corresponds to it. This is quite
> >      >     expensive, as it means I need to compare doubles very often.
> >      >
> >      >     But reading your response, this may not be a bad way of doing
> >      >     it?
> >      >
> >      >
> >      > It always seems to be a game of "what do you want to assume?". I
> >      > tend to assume that I wrote the DM and Vec in the same order, so
> >      > when I load them they match. This is how Firedrake I/O works, so
> >      > that you can load up on a different number of processes
> >      > (https://arxiv.org/abs/2401.05868).
> >      >
> >      > So, are you writing a Vec, and then redistributing and writing
> >      > another Vec? In the scheme above, you would have to write both
> >      > DMs. Are you trying to avoid this?
> >      >
> >      >    Thanks,
> >      >
> >      >       Matt
> >      >
> >      >     Thanks,
> >      >
> >      >     Berend.
> >      >
> >      >     On 1/22/24 18:58, Matthew Knepley wrote:
> >      >      > On Mon, Jan 22, 2024 at 10:49 AM Berend van Wachem
> >      >      > <berend.vanwachem at ovgu.de> wrote:
> >      >      >
> >      >      >     Dear Petsc-Team,
> >      >      >
> >      >      >     Is there a good way to define a unique integer number
> >      >      >     for each element (e.g. a cell) of a DMPlex mesh, such
> >      >      >     that an element at the same location always gets the
> >      >      >     same number, regardless of the number of processors or
> >      >      >     the distribution of the mesh over the processors?
> >      >      >
> >      >      >     So, for instance, if I have a DMPlex box mesh, the
> >      >      >     top-right-front corner element (e.g. cell) will always
> >      >      >     have the same unique number, regardless of the number
> >      >      >     of processors the mesh is distributed over?
> >      >      >
> >      >      >     I want to be able to link the results I have achieved
> >      >      >     with a mesh from DMPlex on a certain number of cores to
> >      >      >     the same mesh from a DMPlex on a different number of
> >      >      >     cores.
> >      >      >
> >      >      >     Of course, I could build a tree based on the distance
> >      >      >     of each element to a certain point (using the X,Y,Z
> >      >      >     co-ordinates of the element), traverse this tree in the
> >      >      >     same way each time, and define an integer based on
> >      >      >     that, but that seems rather cumbersome.
> >      >      >
> >      >      >
> >      >      > I think this is harder than it sounds. The distance will
> >      >      > not work because it can be very degenerate. You could
> >      >      > lexicographically sort the coordinates, but this is hard in
> >      >      > parallel; it is fine if you are willing to gather everything
> >      >      > on one process. You could put down a p4est and use the
> >      >      > Morton order to number the boxes, since this is stable for a
> >      >      > given refinement, and then within each box lexicographically
> >      >      > sort the centroids. This is definitely cumbersome, but I
> >      >      > cannot think of anything else. It also might have parallel
> >      >      > problems, since you need to know how much overlap you need
> >      >      > to fill each box.
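For what it's worth, a rough, untested sketch of the "gather to one process
and lexicographically sort the centroids" idea; dmSerial is a placeholder
for the mesh gathered on a single rank, and in practice the comparison
would probably need a tolerance:

    #include <stdlib.h>
    #include <petscdmplex.h>

    typedef struct { PetscReal x[3]; PetscInt cell; } CellCentroid;

    /* Exact lexicographic comparison; a real implementation would likely
       compare with a tolerance to guard against roundoff */
    static int CompareCentroids(const void *a, const void *b)
    {
      const CellCentroid *u = (const CellCentroid *)a, *v = (const CellCentroid *)b;
      for (int d = 0; d < 3; ++d) {
        if (u->x[d] < v->x[d]) return -1;
        if (u->x[d] > v->x[d]) return 1;
      }
      return 0;
    }

    /* Number the cells of a serial (gathered) DMPlex by the lexicographic
       order of their centroids, so the numbering depends only on geometry */
    static PetscErrorCode NumberCellsByCentroid(DM dmSerial, PetscInt *numbering[])
    {
      PetscInt      cStart, cEnd, nCells;
      CellCentroid *cent;

      PetscFunctionBeginUser;
      PetscCall(DMPlexGetHeightStratum(dmSerial, 0, &cStart, &cEnd));
      nCells = cEnd - cStart;
      PetscCall(PetscMalloc1(nCells, &cent));
      for (PetscInt c = cStart; c < cEnd; ++c) {
        PetscReal vol;
        cent[c - cStart].x[0] = cent[c - cStart].x[1] = cent[c - cStart].x[2] = 0.0;
        PetscCall(DMPlexComputeCellGeometryFVM(dmSerial, c, &vol, cent[c - cStart].x, NULL));
        cent[c - cStart].cell = c;
      }
      qsort(cent, (size_t)nCells, sizeof(CellCentroid), CompareCentroids);
      PetscCall(PetscMalloc1(nCells, numbering));
      for (PetscInt i = 0; i < nCells; ++i) (*numbering)[cent[i].cell - cStart] = i;
      PetscCall(PetscFree(cent));
      PetscFunctionReturn(PETSC_SUCCESS);
    }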
> >      >      >
> >      >      >    Thanks,
> >      >      >
> >      >      >        Matt
> >      >      >
> >      >      >     Thanks and best regards, Berend.
> >      >      >


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/