[petsc-users] Unique number in each element of a DMPlex mesh
Berend van Wachem
berend.vanwachem at ovgu.de
Fri Feb 9 07:55:40 CST 2024
Dear Matt,
On 2/8/24 17:04, Matthew Knepley wrote:
> On Thu, Feb 8, 2024 at 9:54 AM Berend van Wachem
> <berend.vanwachem at ovgu.de> wrote:
>
> Dear Matt,
>
> I have now written code to transform a DMPlex coming from a DMForest
> into a DMPlex of type DM_POLY, by removing the duplicate faces and edges.
> I can also successfully write this transformed DMPlex of type DM_POLY
> to a file.
>
> However, the code to perform the transformation only works on a single
> process. Therefore, I distribute the original DMForest to process 0 and
> perform the transformation there to obtain a DMPlex of type DM_POLY.
>
> Is there a way I can redistribute the transformed DMPlex back to the
> original distribution, based on the cell distribution of the original
> DMPlex? Can I somehow "store" the distribution of a DMPlex, based on
> how the cells are divided over the processes, and later redistribute
> back to this original distribution?
>
>
> Yes, in fact that exact code exists and is tested. The "shell" type for
> PetscPartitioner can take in the points intended for each process. As
> long as point numbers do not change, this works fine.
Could you please point me to where I can find this code, or send it to
me? I would be very grateful for this!
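In the meantime, here is a rough, untested sketch of how I understand it
would work, assuming I record, on rank 0, the number of cells intended
for each rank (nPointsPerRank) and their unchanged point numbers
(pointsForRanks); both names are just placeholders for whatever I save
from the original distribution:

  /* Redistribute dmSerial (everything gathered on rank 0) back to a
     recorded cell layout using a shell partitioner. The arrays are my
     assumptions: they hold what was saved from the original layout. */
  PetscPartitioner part;
  DM               dmRedist = NULL;

  PetscCall(DMPlexGetPartitioner(dmSerial, &part));
  PetscCall(PetscPartitionerSetType(part, PETSCPARTITIONERSHELL));
  PetscCall(PetscPartitionerShellSetPartition(part, commSize, nPointsPerRank, pointsForRanks));
  PetscCall(DMPlexDistribute(dmSerial, 0, NULL, &dmRedist));

Is that roughly the intended use?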
> Second, I bet I could make your conversion routine parallel.
Please find attached the code I wrote. It creates a new DMPlex without
the duplicate faces and edges that are present in the DMPlex coming from
a DMForest. For this, it uses a simple hash table, for which I also attach
the include file (uthash.h). The code currently only works on one process.
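The core of the deduplication is a lookup of the kind sketched below
(heavily simplified, with a fixed-size illustrative key; the attached
file contains the actual implementation):

  #include <stdlib.h>
  #include <string.h>
  #include <petscsys.h>
  #include "uthash.h"

  typedef struct {
    PetscInt       key[4];     /* sorted vertex numbers identifying a face (illustrative size) */
    PetscInt       facePoint;  /* point number of the face in the new DMPlex */
    UT_hash_handle hh;
  } FaceEntry;

  static FaceEntry *faceTable = NULL;

  /* Return the face for these sorted vertices, adding it if it is not yet in the table */
  static PetscInt GetOrAddFace(const PetscInt verts[4], PetscInt newFacePoint)
  {
    FaceEntry *e;
    HASH_FIND(hh, faceTable, verts, 4 * sizeof(PetscInt), e);
    if (!e) {
      e = (FaceEntry *)malloc(sizeof(*e));
      memcpy(e->key, verts, 4 * sizeof(PetscInt));
      e->facePoint = newFacePoint;
      HASH_ADD(hh, faceTable, key, 4 * sizeof(PetscInt), e);
    }
    return e->facePoint;
  }

A face that has already been seen simply returns the existing point
number, so duplicates never make it into the new cones.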
I must apologize for my probably naive way of doing this, but with my
limited knowledge of DMPlex it is the easiest and most robust way I
could find.
If you could give me some pointers on how to make the routine work in
parallel, that would be very useful.
Otherwise, I will just redistribute the DMPlex back to the same cell
layout as before (see the previous point); that will suffice for most of
the problems I work on.
> Third, I think Toby has fixed that other bug with coordinates (fingers
> crossed). We are running the tests now.
Thank you very much!
On a side note, I had problems writing a DMPlex of type DM_POLY to HDF5
files on more than one process, and I traced it to a wait state in the
MPI I/O on one of the processes. This caused either opening or writing
to an HDF5 file to hang about 75% of the time. I was using OpenMPI.
I have now switched to MPICH and the problem has gone away; it now
works like a charm. I'm glad I found it, but I am also a little puzzled.
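For context, what I mean by writing to HDF5 is essentially the standard
viewer sequence, roughly as below (a generic sketch, not necessarily my
exact code; the variable and file names are only placeholders):

  PetscViewer viewer;

  /* Collective open and write of the mesh and a cell-centred Vec to HDF5 */
  PetscCall(PetscViewerHDF5Open(PetscObjectComm((PetscObject)dm), "restart.h5", FILE_MODE_WRITE, &viewer));
  PetscCall(DMView(dm, viewer));          /* topology and coordinates */
  PetscCall(VecView(cellData, viewer));   /* cell-centred field data */
  PetscCall(PetscViewerDestroy(&viewer));

With OpenMPI one of the ranks would occasionally never return from the
open or the write; with MPICH the same sequence runs reliably.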
Thanks and best regards,
Berend.
> Thanks,
>
> Matt
>
> Thanks, best regards,
> Berend.
>
> p.s. If someone is interested in the code to transform a DMForest to a
> DMPlex of type DM_POLY, I am more than happy to share.
>
> On 1/22/24 20:30, Matthew Knepley wrote:
> > On Mon, Jan 22, 2024 at 2:26 PM Berend van Wachem
> > <berend.vanwachem at ovgu.de> wrote:
> >
> > Dear Matt,
> >
> > The problem is that I haven't figured out how to write a polyhedral
> > DMPlex in parallel. So, currently, I can write the Vec data in
> > parallel, but I can only write the cones for the cells/faces/edges/nodes
> > of the mesh to a file from a single process (after gathering the
> > DMPlex onto that process).
> >
> >
> > Ah shoot. Can you send me a polyhedral mesh (or code to generate one)
> > so I can fix the parallel write problem? Or maybe it is already an
> > issue and I forgot?
> >
> > For the restart, I can then read the cone information from the file
> > on one process, recreate the DMPlex, and then redistribute it. In this
> > scenario, the Vec data I read in (in parallel) will not match the
> > correct cells of the DMPlex. Hence, I need to put it in the right
> > place afterwards.
> >
> >
> > Yes, then searching makes sense. You could call DMLocatePoints(), but
> > maybe you are doing that.
> >
> > Thanks,
> >
> > Matt
> >
> > Best, Berend.
> >
> > On 1/22/24 20:03, Matthew Knepley wrote:
> > > On Mon, Jan 22, 2024 at 1:57 PM Berend van Wachem
> > > <berend.vanwachem at ovgu.de> wrote:
> > >
> > > Dear Matt,
> > >
> > > Thanks for your quick response.
> > > I have a DMPlex with a polyhedral mesh, and have defined a number of
> > > vectors with data at the cell center. I have generated data for a
> > > number of timesteps, and I write the data for each point to a file
> > > together with the (x,y,z) co-ordinate of the cell center.
> > >
> > > When I want to do a restart from the DMPlex, I recreate the DMPlex
> > > with the polyhedral mesh, redistribute it, and for each cell center
> > > find the corresponding (x,y,z) co-ordinate and insert the data that
> > > corresponds to it. This is quite expensive, as it means I need to
> > > compare doubles very often.
> > >
> > > But reading your response, this may not be a bad way of doing it?
> > >
> > >
> > > It always seems to be a game of "what do you want to assume?". I
> > > tend to assume that I wrote the DM and Vec in the same order, so
> > > when I load them they match. This is how Firedrake I/O works, so
> > > that you can load up on a different number of processes
> > > (https://arxiv.org/abs/2401.05868).
> > >
> > > So, are you writing a Vec, and then redistributing and writing
> > > another Vec? In the scheme above, you would have to write both DMs.
> > > Are you trying to avoid this?
> > >
> > > Thanks,
> > >
> > > Matt
> > >
> > > Thanks,
> > >
> > > Berend.
> > >
> > > On 1/22/24 18:58, Matthew Knepley wrote:
> > > > On Mon, Jan 22, 2024 at 10:49 AM Berend van Wachem
> > > > <berend.vanwachem at ovgu.de> wrote:
> > > >
> > > > Dear Petsc-Team,
> > > >
> > > > Is there a good way to define a unique integer number in each
> > > > element (e.g. a cell) of a DMPlex mesh, which is in the same
> > > > location, regardless of the number of processors or the
> > > > distribution of the mesh over the processors?
> > > >
> > > > So, for instance, if I have a DMPlex box mesh, the top-right-front
> > > > corner element (e.g. cell) will always have the same unique
> > > > number, regardless of the number of processors the mesh is
> > > > distributed over?
> > > >
> > > > I want to be able to link the results I have achieved with a mesh
> > > > from DMPlex on a certain number of cores to the same mesh from a
> > > > DMPlex on a different number of cores.
> > > >
> > > > Of course, I could make a tree based on the distance of each
> > > > element to a certain point (based on the X,Y,Z co-ordinates of the
> > > > element), and go through this tree in the same way and define an
> > > > integer based on this, but that seems rather cumbersome.
> > > >
> > > >
> > > > I think this is harder than it sounds. The distance will not work
> > > > because it can be very degenerate. You could lexicographically
> > > > sort the coordinates, but this is hard in parallel. It is fine if
> > > > you are willing to gather everything on one process. You could put
> > > > down a p4est and use the Morton order to number them, since this
> > > > is stable for a given refinement, and then within each box
> > > > lexicographically sort the centroids. This is definitely
> > > > cumbersome, but I cannot think of anything else. This also might
> > > > have parallel problems, since you need to know how much overlap
> > > > you need to fill each box.
> > > >
> > > > Thanks,
> > > >
> > > > Matt
> > > >
> > > > Thanks and best regards, Berend.
> > > >
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which
> their experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: uthash.h
Type: text/x-chdr
Size: 104934 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20240209/23c1a64e/attachment-0002.bin>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: dmforesttransform.c
Type: text/x-csrc
Size: 37318 bytes
Desc: not available
URL: <http://lists.mcs.anl.gov/pipermail/petsc-users/attachments/20240209/23c1a64e/attachment-0003.bin>