[petsc-users] What does DMPlexDistribute actually do?

Matthew Knepley knepley at gmail.com
Sun Mar 25 19:16:10 CDT 2018


On Sun, Mar 25, 2018 at 7:44 PM, Ellen M. Price <ellen.price at cfa.harvard.edu> wrote:

> Here is the full minimal (non)-working example and the problematic
> output. I'm using PETSc 3.8.0.
>

It's not DMPlexDistribute that is broken, it's the cylinder mesh constructor
(DMPlexCreateHexCylinderMesh). It was written only for serial use, so it builds
the same full mesh on every process. Here is an example with a box mesh that works.

master *$:/PETSc3/petsc/petsc-dev/src/snes/examples/tutorials$ /PETSc3/petsc/petsc-dev/arch-master-debug/bin/mpiexec -n 2 ./tester -refine 5 -petscpartitioner_type simple

DM Object: 2 MPI processes
  type: plex
DM_0x84000004_0 in 2 dimensions:
  0-cells: 36 0
  1-cells: 60 0
  2-cells: 25 0
Labels:
  depth: 3 strata with value/size (0 (36), 1 (60), 2 (25))
  Face Sets: 4 strata with value/size (4 (5), 2 (5), 1 (5), 3 (5))
  marker: 1 strata with value/size (1 (40))
DM Object: Parallel Mesh 2 MPI processes
  type: plex
Parallel Mesh in 2 dimensions:
  0-cells: 22 21
  1-cells: 34 32
  2-cells: 13 12
Labels:
  Face Sets: 3 strata with value/size (1 (5), 2 (2), 4 (3))
  marker: 1 strata with value/size (1 (21))
  depth: 3 strata with value/size (0 (22), 1 (34), 2 (13))

I will fix the cylinder mesh for you; the change should be in the development branch this week.
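
In the meantime, if you want to verify directly how many cells each rank ends
up with after DMPlexDistribute, a check along these lines should work (just a
sketch; the helper name and output format are mine, but DMPlexGetHeightStratum,
PetscObjectGetComm, and the synchronized printing routines are standard PETSc
calls):

#include <petscdmplex.h>

/* Sketch: print how many cells (height-0 points) each rank owns.
   Call this on the DM you keep after DMPlexDistribute; with a real
   distribution the per-rank counts should add up to the total,
   not each equal the full mesh. */
static PetscErrorCode ReportLocalCellCounts(DM dm)
{
  MPI_Comm       comm;
  PetscMPIInt    rank;
  PetscInt       cStart, cEnd;
  PetscErrorCode ierr;

  PetscFunctionBeginUser;
  ierr = PetscObjectGetComm((PetscObject) dm, &comm); CHKERRQ(ierr);
  ierr = MPI_Comm_rank(comm, &rank); CHKERRQ(ierr);
  ierr = DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd); CHKERRQ(ierr);
  ierr = PetscSynchronizedPrintf(comm, "[%d] local cells: %D\n", rank, cEnd - cStart); CHKERRQ(ierr);
  ierr = PetscSynchronizedFlush(comm, PETSC_STDOUT); CHKERRQ(ierr);
  PetscFunctionReturn(0);
}

For your MWE, calling this right after the distribute-and-swap block (at refine
2 on two ranks) should report two counts that sum to 320 once the constructor
is fixed.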

  Thanks for reporting that,

     Matt


> #include <petscdm.h>
> #include <petscdmplex.h>
>
> int main(int argc, char *argv[])
> {
>     PetscInt refine = 0;
>     PetscErrorCode ierr;
>     DM dmplex, dmplex_dist;
>
>
>     /* initialize */
>     PetscInitialize(&argc, &argv, NULL, NULL);
>
>
>     /* get the refinement level and create the mesh */
>     PetscOptionsGetInt(NULL, NULL, "-refine", &refine, NULL);
>     ierr = DMPlexCreateHexCylinderMesh(PETSC_COMM_WORLD, refine,
>         DM_BOUNDARY_NONE, &dmplex); CHKERRQ(ierr);
>     DMView(dmplex, PETSC_VIEWER_STDOUT_WORLD);
>     ierr = DMPlexDistribute(dmplex, 0,
>         NULL, &dmplex_dist); CHKERRQ(ierr);
>     if (dmplex_dist)
>     {
>         DMDestroy(&dmplex);
>         dmplex = dmplex_dist;
>     }
>     DMView(dmplex, PETSC_VIEWER_STDOUT_WORLD);
>     DMDestroy(&dmplex);
>
>     /* finalize */
>
>     PetscFinalize();
>
>     return 0;
> }
>
>
> $ mpirun -n 2 bin/mwe -refine 2
>
> DM Object: 2 MPI processes
>   type: plex
> DM_0x1bde850_0 in 3 dimensions:
>   0-cells: 445 445
>   1-cells: 1196 1196
>   2-cells: 1072 1072
>   3-cells: 320 320
> Labels:
>   depth: 4 strata with value/size (0 (445), 1 (1196), 2 (1072), 3 (320))
> DM Object: Parallel Mesh 2 MPI processes
>   type: plex
> Parallel Mesh in 3 dimensions:
>   0-cells: 445 445
>   1-cells: 1196 1196
>   2-cells: 1072 1072
>   3-cells: 320 320
> Labels:
>   depth: 4 strata with value/size (0 (445), 1 (1196), 2 (1072), 3 (320))
>
> $ mpirun -n 4 bin/mwe -refine 2
>
> DM Object: 4 MPI processes
>   type: plex
> DM_0x20c7790_0 in 3 dimensions:
>   0-cells: 445 445 445 445
>   1-cells: 1196 1196 1196 1196
>   2-cells: 1072 1072 1072 1072
>   3-cells: 320 320 320 320
> Labels:
>   depth: 4 strata with value/size (0 (445), 1 (1196), 2 (1072), 3 (320))
> DM Object: Parallel Mesh 4 MPI processes
>   type: plex
> Parallel Mesh in 3 dimensions:
>   0-cells: 445 445 445 445
>   1-cells: 1196 1196 1196 1196
>   2-cells: 1072 1072 1072 1072
>   3-cells: 320 320 320 320
> Labels:
>   depth: 4 strata with value/size (0 (445), 1 (1196), 2 (1072), 3 (320))
>
>
> Ellen Price
>
>
> On 03/25/2018 07:29 PM, Matthew Knepley wrote:
> > On Sun, Mar 25, 2018 at 6:32 PM, Ellen M. Price
> > <ellen.price at cfa.harvard.edu> wrote:
> >
> >     I am trying to understand some unusual behavior (at least, given my
> >     current understanding) in DMPlexDistribute. I have created a hex mesh
> >     and distributed it using the following snippet:
> >
> >     /* "-refine" is an option set at runtime */
> >     PetscOptionsGetInt(NULL, NULL, "-refine", &refine, NULL);
> >     ierr = DMPlexCreateHexCylinderMesh(PETSC_COMM_WORLD, refine,
> >         DM_BOUNDARY_NONE, &dmplex); CHKERRQ(ierr);
> >     DMView(dmplex, PETSC_VIEWER_STDOUT_WORLD);
> >
> >     /* this is from the examples */
> >     ierr = DMPlexDistribute(dmplex, 0, NULL, &dmplex_dist); CHKERRQ(ierr);
> >
> >     if (dmplex_dist)
> >     {
> >         DMDestroy(&dmplex);
> >         dmplex = dmplex_dist;
> >     }
> >
> >     DMView(dmplex, PETSC_VIEWER_STDOUT_WORLD);
> >
> >     So I do a view operation before and after the distribute call to see
> >     how the DM is structured. I do not understand what happens next:
> >
> >     $ mpirun -n 4 ./myprogram -refine 2
> >
> >     DM Object: 4 MPI processes
> >       type: plex
> >     DM_0x1f24d50_0 in 3 dimensions:
> >       0-cells: 445 445 445 445
> >       1-cells: 1196 1196 1196 1196
> >       2-cells: 1072 1072 1072 1072
> >       3-cells: 320 320 320 320
> >     Labels:
> >       depth: 4 strata with value/size (0 (445), 1 (1196), 2 (1072), 3 (320))
> >     DM Object: Parallel Mesh 4 MPI processes
> >       type: plex
> >     Parallel Mesh in 3 dimensions:
> >       0-cells: 445 445 445 445
> >       1-cells: 1196 1196 1196 1196
> >       2-cells: 1072 1072 1072 1072
> >       3-cells: 320 320 320 320
> >     Labels:
> >       depth: 4 strata with value/size (0 (445), 1 (1196), 2 (1072), 3 (320))
> >
> >     No matter what I choose for the number of processors, every processor
> >     has a copy of all 320 cells (at this refinement level). Similar
> >     behavior at other refinement levels. The only thing that is changed is
> >     the name of the DM, to "Parallel Mesh". This is not what I would have
> >     expected given the description of DMPlexDistribute in the manual; I
> >     thought the cells would be split up between all available processors.
> >
> >     I am also viewing this mesh in VTK and have noticed that the file
> >     size of the output scales with the number of processors, as if it is
> >     really writing each "copy" of the mesh and data stored in it to one
> >     big file. Again, not what I expected.
> >
> >     Can someone clear up what is going on?
> >
> >
> > It definitely does not work that way. Can you send the whole code you
> > are running? It's hard to tell where the problem might be from snippets.
> > You could also try running an example. For instance,
> >
> >   cd $PETSC_DIR/src/snes/examples/tutorials
> >   make ex13
> >
> > master *$:/PETSc3/petsc/petsc-dev/src/snes/examples/tutorials$ ./ex13
> > -cells 8,8 -dm_view
> >
> > DM Object: Mesh 1 MPI processes
> >   type: plex
> > Mesh in 2 dimensions:
> >   0-cells: 81
> >   1-cells: 208
> >   2-cells: 128
> > Labels:
> >   Face Sets: 1 strata with value/size (1 (32))
> >   marker: 1 strata with value/size (1 (64))
> >   depth: 3 strata with value/size (0 (81), 1 (208), 2 (128))
> >
> > master *$:/PETSc3/petsc/petsc-dev/src/snes/examples/tutorials$
> > $PETSC_DIR/arch-master-debug/bin/mpiexec -n 4 ./ex13 -cells 8,8 -dm_view
> >
> > DM Object: Mesh 4 MPI processes
> >   type: plex
> > Mesh in 2 dimensions:
> >   0-cells: 26 26 27 25
> >   1-cells: 57 57 58 56
> >   2-cells: 32 32 32 32
> > Labels:
> >   Face Sets: 1 strata with value/size (1 (8))
> >   marker: 1 strata with value/size (1 (17))
> >   depth: 3 strata with value/size (0 (26), 1 (57), 2 (32))
> >
> > master *$:/PETSc3/petsc/petsc-dev/src/snes/examples/tutorials$
> > $PETSC_DIR/arch-master-debug/bin/mpiexec -n 7 ./ex13 -cells 8,8 -dm_view
> >
> > DM Object: Mesh 7 MPI processes
> >   type: plex
> > Mesh in 2 dimensions:
> >   0-cells: 17 16 16 17 17 17 19
> >   1-cells: 34 33 33 35 34 35 35
> >   2-cells: 18 18 18 19 18 19 18
> > Labels:
> >   Face Sets: 1 strata with value/size (1 (5))
> >   marker: 1 strata with value/size (1 (11))
> >   depth: 3 strata with value/size (0 (17), 1 (34), 2 (18))
> >
> >
> >   Thanks,
> >
> >     Matt
> >
> >
> >
> >     Ellen Price
> >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which
> > their experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
>



-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/