[petsc-users] DMPlex distribution with custom adjacency
Lawrence Mitchell
lawrence.mitchell at imperial.ac.uk
Mon Jun 5 12:18:52 CDT 2017
> On 2 Jun 2017, at 05:09, Matthew Knepley <knepley at gmail.com> wrote:
>
>
> Coming back to this, I think I understand the problem a little better.
>
> Consider this mesh:
>
> +----+
> |\ 3 |
> | \ |
> |2 \ |
> | \|
> +----+
> |\ 1 |
> | \ |
> |0 \ |
> | \|
> +----+
>
> Let's say I run on 3 processes and the initial (non-overlapped) cell
> partition is:
>
> rank 0: cell 0
> rank 1: cell 1 & 2
> rank 2: cell 3
>
> Now I'd like to grow the overlap such that any cell I can see through
> a facet (and its closure) lives in the overlap.
>
> So great, I just need a new adjacency relation that gathers
> closure(support(point))
>
> But, that isn't right, because now on rank 0, I will get a mesh that
> looks like:
>
> I do not understand why you think it's not right. Toby and I are trying to push a formalism for
> this understanding in https://arxiv.org/abs/1508.02470. So you say that if sigma is a dual
> basis function associated with point p, then the support of its matching psi (the one with
> sigma(psi) = 1 in the biorthogonal basis) is exactly star(p).
>
> So, if you have no sigma sitting on your vertex, who cares if you put that extra edge and node
> in. It will not affect the communication pattern for dofs. If you do, then shouldn't you be including
> that edge?
Hmm, I think we are talking at cross-purposes. Let me try and explain again where I am coming from:
To do a FEM integral on some cell c I need:
i) to evaluate coefficients at quadrature points (on the cell)
ii) to evaluate basis functions at quadrature points (on the cell)
for (i), I need all the dofs in closure(c).
for (ii), I just need the definition of the finite element.
To do a FEM integral on a facet f I need:
i) to evaluate coefficients at quadrature points (on the facet)
ii) to evaluate basis functions at quadrature points (on the facet)
for (i), I need all the dofs in closure(support(f)).
for (ii), I just need the definition of the finite element.
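To make that last point concrete, here is a minimal sketch (not part of the reproducer below; it assumes an interpolated DMPlex dm and a facet f, and only identifies the points carrying the dofs I need) of gathering closure(support(f)) with the standard Plex queries:

  /* Collect every point in closure(support(f)) for a single facet f.
     Illustrative fragment only: assumes dm is interpolated so facets
     have cells in their support. */
  const PetscInt *support = NULL;
  PetscInt        supportSize, s;

  ierr = DMPlexGetSupportSize(dm, f, &supportSize); CHKERRQ(ierr);
  ierr = DMPlexGetSupport(dm, f, &support); CHKERRQ(ierr);
  for (s = 0; s < supportSize; s++) {
    PetscInt *closure = NULL;
    PetscInt  closureSize, i;

    ierr = DMPlexGetTransitiveClosure(dm, support[s], PETSC_TRUE, &closureSize, &closure); CHKERRQ(ierr);
    for (i = 0; i < closureSize; i++) {
      const PetscInt point = closure[2*i];  /* entries come as (point, orientation) pairs */
      /* ... record the dofs attached to point ... */
    }
    ierr = DMPlexRestoreTransitiveClosure(dm, support[s], PETSC_TRUE, &closureSize, &closure); CHKERRQ(ierr);
  }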
So now, my model for how I want to do global assembly of a facet integral is:
loop over all facets:
    gather from global coefficient to local data
    evaluate coefficient at quad points
    perform integral
    local to global
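In Plex terms that loop would look roughly like the sketch below (again not from the reproducer; "section" and "coeffLoc" are placeholders for the coefficient's PetscSection and a local Vec filled by DMGlobalToLocalBegin/End):

  PetscInt f, fStart, fEnd;

  ierr = DMPlexGetHeightStratum(dm, 1, &fStart, &fEnd); CHKERRQ(ierr); /* facets */
  for (f = fStart; f < fEnd; f++) {
    const PetscInt *support = NULL;
    PetscInt        supportSize, s;

    ierr = DMPlexGetSupportSize(dm, f, &supportSize); CHKERRQ(ierr);
    ierr = DMPlexGetSupport(dm, f, &support); CHKERRQ(ierr);
    for (s = 0; s < supportSize; s++) {
      PetscScalar *vals = NULL;
      PetscInt     nvals;

      /* gather the coefficient dofs on closure(support(f)[s]) */
      ierr = DMPlexVecGetClosure(dm, section, coeffLoc, support[s], &nvals, &vals); CHKERRQ(ierr);
      /* ... evaluate the coefficient at the facet quadrature points and integrate ... */
      ierr = DMPlexVecRestoreClosure(dm, section, coeffLoc, support[s], &nvals, &vals); CHKERRQ(ierr);
    }
    /* ... local-to-global for the assembled contribution ... */
  }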
In parallel, I just make a partition of the facets (so that each facet is integrated exactly once).
OK, so what data do I need in parallel?
Exactly the dofs that correspond to closure(support(facet)) for all owned facets in my partition.
So I was hoping to be able to grow a distributed cell partition by exactly that means: add in those remote cells which are in the support of owned facets (I'm happy if this is symmetrically grown, although I think I can do it with one-sided growth).
So that's my rationale for wanting this "strange" adjacency. I can get enough adjacency by using the current FEM adjacency and filtering which entities I iterate over, but it seems a bit wasteful.
For the currently implemented adjacencies, however, the code definitely assumes in various places that the closure of the cells in a partition covers all of the local points. Doing, say, DMPlexSetAdjacencyUseCone/Closure(PETSC_FALSE) results in meshes where that is not true.
See the output and reproducer below:
Lawrence
$ mpiexec -n 3 ./bork
[0] face 7 has support: 0
[0] face 8 has support: 0
[0] face 9 has support: 0 1
[0] face 10 has support: 1
[0] face 11 has support: 1
[0] face 12 has support:
[1] face 10 has support: 0 2
[1] face 11 has support: 0 1
[1] face 12 has support: 0
[1] face 13 has support: 1
[1] face 14 has support: 2
[1] face 15 has support: 2
[1] face 16 has support: 1 3
[1] face 17 has support: 3
[1] face 18 has support: 3
[2] face 7 has support: 0 1
[2] face 8 has support: 0
[2] face 9 has support: 0
[2] face 10 has support: 1
[2] face 11 has support:
[2] face 12 has support: 1
[2]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[2]PETSC ERROR: Petsc has generated inconsistent data
[2]PETSC ERROR: Number of depth 2 faces 6 does not match permuted nubmer 5
[2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[2]PETSC ERROR: Petsc Development GIT revision: v3.7.6-3857-gda66ab19e3 GIT Date: 2017-05-10 09:02:09 -0500
[2]PETSC ERROR: ./bork on a arch-darwin-c-opt named yam-laptop.local by lmitche1 Mon Jun 5 18:15:31 2017
[2]PETSC ERROR: Configure options --download-chaco=1 --download-ctetgen=1 --download-eigen --download-exodusii=1 --download-hdf5=1 --download-hypre=1 --download-metis=1 --download-ml=1 --download-mumps=1 --download-netcdf=1 --download-parmetis=1 --download-ptscotch=1 --download-scalapack=1 --download-triangle=1 --with-c2html=0 --with-debugging=0 --with-shared-libraries=1 PETSC_ARCH=arch-darwin-c-opt
[2]PETSC ERROR: #1 DMPlexCreateOrderingClosure_Static() line 41 in /Users/lmitche1/Documents/work/src/deps/petsc/src/dm/impls/plex/plexreorder.c
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Petsc has generated inconsistent data
[0]PETSC ERROR: Number of depth 2 faces 6 does not match permuted nubmer 5
[0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Development GIT revision: v3.7.6-3857-gda66ab19e3 GIT Date: 2017-05-10 09:02:09 -0500
[0]PETSC ERROR: ./bork on a arch-darwin-c-opt named yam-laptop.local by lmitche1 Mon Jun 5 18:15:31 2017
[0]PETSC ERROR: Configure options --download-chaco=1 --download-ctetgen=1 --download-eigen --download-exodusii=1 --download-hdf5=1 --download-hypre=1 --download-metis=1 --download-ml=1 --download-mumps=1 --download-netcdf=1 --download-parmetis=1 --download-ptscotch=1 --download-scalapack=1 --download-triangle=1 --with-c2html=0 --with-debugging=0 --with-shared-libraries=1 PETSC_ARCH=arch-darwin-c-opt
[0]PETSC ERROR: #1 DMPlexCreateOrderingClosure_Static() line 41 in /Users/lmitche1/Documents/work/src/deps/petsc/src/dm/impls/plex/plexreorder.c
[0]PETSC ERROR: #2 DMPlexGetOrdering() line 133 in /Users/lmitche1/Documents/work/src/deps/petsc/src/dm/impls/plex/plexreorder.c
[0]PETSC ERROR: #3 main() line 87 in /Users/lmitche1/Documents/work/src/petsc-doodles/bork.c
[0]PETSC ERROR: No PETSc Option Table entries
[0]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov----------
[2]PETSC ERROR: #2 DMPlexGetOrdering() line 133 in /Users/lmitche1/Documents/work/src/deps/petsc/src/dm/impls/plex/plexreorder.c
[2]PETSC ERROR: #3 main() line 87 in /Users/lmitche1/Documents/work/src/petsc-doodles/bork.c
[2]PETSC ERROR: No PETSc Option Table entries
[2]PETSC ERROR: ----------------End of Error Message -------send entire error message to petsc-maint at mcs.anl.gov----------
application called MPI_Abort(MPI_COMM_WORLD, 77) - process 0
#include <petsc.h>
#include <petscsf.h>
int main(int argc, char **argv)
{
PetscErrorCode ierr;
DM dm = NULL, dmParallel = NULL;
PetscSF sf = NULL;
PetscPartitioner partitioner = NULL;
IS perm = NULL;
/* Vertex coordinates and cell connectivity for the 4-triangle mesh in the figure above */
const PetscReal coords[12] = {0, 0,
0, 0.5,
0, 1,
1, 0,
1, 0.5,
1, 1};
const PetscInt cells[12] = {0, 1, 3,
1, 4, 3,
1, 2, 4,
2, 5, 4};
/* Shell partition: cell 0 -> rank 0, cells 1 & 2 -> rank 1, cell 3 -> rank 2 */
const PetscInt sizes[3] = {1, 2, 1};
const PetscInt points[4] = {0, 1, 2, 3};
PetscInt fStart, fEnd;
PetscMPIInt rank, size;
MPI_Comm comm;
ierr = PetscInitialize(&argc, &argv, NULL, NULL); if (ierr) return ierr;
comm = PETSC_COMM_WORLD;
ierr = MPI_Comm_rank(comm, &rank); CHKERRQ(ierr);
ierr = MPI_Comm_size(comm, &size); CHKERRQ(ierr);
if (size != 3) SETERRQ(comm, PETSC_ERR_ARG_WRONG, "Requires 3 processes");
/* Rank 0 holds the whole serial mesh; the other ranks start empty */
if (!rank) {
ierr = DMPlexCreateFromCellList(comm, 2, 4, 6, 3, PETSC_TRUE,
cells, 2, coords, &dm); CHKERRQ(ierr);
} else {
ierr = DMPlexCreateFromCellList(comm, 2, 0, 0, 3, PETSC_TRUE,
NULL, 2, NULL, &dm); CHKERRQ(ierr);
}
ierr = PetscPartitionerCreate(comm, &partitioner); CHKERRQ(ierr);
ierr = PetscPartitionerSetType(partitioner, PETSCPARTITIONERSHELL); CHKERRQ(ierr);
if (!rank) {
ierr = PetscPartitionerShellSetPartition(partitioner, size, sizes, points); CHKERRQ(ierr);
} else {
ierr = PetscPartitionerShellSetPartition(partitioner, 3, NULL, NULL); CHKERRQ(ierr);
}
ierr = DMPlexSetPartitioner(dm, partitioner); CHKERRQ(ierr);
/* Distribute with no overlap, using the shell partition above */
ierr = DMPlexDistribute(dm, 0, &sf, &dmParallel); CHKERRQ(ierr);
ierr = DMDestroy(&dm); CHKERRQ(ierr);
ierr = PetscSFDestroy(&sf); CHKERRQ(ierr);
ierr = PetscPartitionerDestroy(&partitioner); CHKERRQ(ierr);
ierr = DMViewFromOptions(dmParallel, NULL, "-parallel_dm_view"); CHKERRQ(ierr);
/* Change the adjacency relation used for overlap growth, then grow the overlap by one level */
ierr = DMPlexSetAdjacencyUseCone(dmParallel, PETSC_FALSE); CHKERRQ(ierr);
ierr = DMPlexSetAdjacencyUseClosure(dmParallel, PETSC_FALSE); CHKERRQ(ierr);
ierr = DMPlexDistributeOverlap(dmParallel, 1, &sf, &dm); CHKERRQ(ierr);
ierr = DMDestroy(&dmParallel); CHKERRQ(ierr);
ierr = PetscSFDestroy(&sf); CHKERRQ(ierr);
ierr = DMViewFromOptions(dm, NULL, "-overlap_dm_view"); CHKERRQ(ierr);
/* Print the support of every facet in the overlapped mesh */
ierr = DMPlexGetHeightStratum(dm, 1, &fStart, &fEnd); CHKERRQ(ierr);
for (PetscInt f = fStart; f < fEnd; f++) {
const PetscInt *support = NULL;
PetscInt supportSize;
ierr = DMPlexGetSupportSize(dm, f, &supportSize); CHKERRQ(ierr);
ierr = DMPlexGetSupport(dm, f, &support); CHKERRQ(ierr);
ierr = PetscSynchronizedPrintf(comm, "[%d] face %d has support:", rank, f); CHKERRQ(ierr);
for (PetscInt p = 0; p < supportSize; p++) {
ierr = PetscSynchronizedPrintf(comm, " %d", support[p]); CHKERRQ(ierr);
}
ierr = PetscSynchronizedPrintf(comm, "\n"); CHKERRQ(ierr);
}
ierr = PetscSynchronizedFlush(comm, PETSC_STDOUT); CHKERRQ(ierr);
/* This ordering call hits the "inconsistent data" error shown above */
ierr = DMPlexGetOrdering(dm, MATORDERINGRCM, NULL, &perm); CHKERRQ(ierr);
if (perm) {
ierr = ISViewFromOptions(perm, NULL, "-ordering_is_view"); CHKERRQ(ierr);
ierr = ISDestroy(&perm); CHKERRQ(ierr);
}
ierr = DMDestroy(&dm); CHKERRQ(ierr);
ierr = PetscFinalize();
return 0;
}