[petsc-users] creation of parallel dmplex from a partitioned mesh
Matthew Knepley
knepley at gmail.com
Mon Mar 1 19:11:53 CST 2021
On Mon, Mar 1, 2021 at 4:41 PM Cameron Smith <smithc11 at rpi.edu> wrote:
> Thank you. That makes sense.
>
> Using DMPlexPointLocalRef from petsc master worked for me; time to debug.
>
Let me know if it turns out that Plex is making an assumption that is
non-intuitive, or if you need something that would make conversion easier.
Thanks,
Matt
>
> https://www.mcs.anl.gov/petsc/petsc-master/docs/manualpages/DMPLEX/DMPlexPointLocalRef.html
>
> -Cameron
>
> On 2/26/21 8:32 AM, Matthew Knepley wrote:
> > On Thu, Feb 25, 2021 at 4:57 PM Cameron Smith <smithc11 at rpi.edu> wrote:
> >
> > Hello,
> >
> > Bringing this thread back from the dead...
> >
> > We made progress with creation of a distributed dmplex that matches our
> > source mesh partition and are in need of help writing values into a
> > vector created from the dmplex object.
> >
> > As discussed previously, we have created a DMPlex instance using:
> >
> > DMPlexCreateFromCellListPetsc(...)
> > DMGetPointSF(...)
> > PetscSFSetGraph(...)
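> >
> > Roughly, that sequence looks like the sketch below; the counts and arrays
> > here (cells, coords, nroots, nleaves, leaves, remotes) are placeholders for
> > our mesh data rather than exact code:
> >
> > DM      dm;
> > PetscSF sf;
> > // cells: this rank's numCells*numCorners cell-to-vertex list (rank-local
> > // vertex numbers); coords: numVerts*dim coordinates for those vertices
> > DMPlexCreateFromCellListPetsc(comm, dim, numCells, numVerts, numCorners,
> >                               PETSC_FALSE, cells, dim, coords, &dm);
> > // replace the point SF so owned/ghost points match our element partition:
> > // nroots is the number of local plex points, and leaves/remotes describe
> > // which local points are copies of points owned on other ranks
> > DMGetPointSF(dm, &sf);
> > PetscSFSetGraph(sf, nroots, nleaves, leaves, PETSC_COPY_VALUES,
> >                 remotes, PETSC_COPY_VALUES);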
> >
> > which gives us a distribution of mesh vertices and elements in the DM
> > object that matches the element-based partition of our unstructured
> > mesh.
> >
> > We then mark mesh vertices on the geometric model boundary using
> > DMSetLabelValue(...) and a map from our mesh vertices to dmplex points
> > (created during dmplex definition of vtx coordinates).
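> >
> > The marking step is roughly the loop below; the label name and the
> > vertex-to-point map are placeholders standing in for our own data:
> >
> > for (PetscInt v = 0; v < numMeshVerts; ++v) {
> >   if (isOnModelBoundary[v])
> >     DMSetLabelValue(dm, "boundary", vtxToPlexPoint[v], 1);
> > }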
> >
> > Following this, we create a section for vertices:
> >
> > > DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd);
> > > PetscSectionCreate(PetscObjectComm((PetscObject) dm), &s);
> > > PetscSectionSetNumFields(s, 1);
> > > PetscSectionSetFieldComponents(s, 0, 1);
> > > PetscSectionSetChart(s, vStart, vEnd);
> > > for(PetscInt v = vStart; v < vEnd; ++v) {
> > >   PetscSectionSetDof(s, v, 1);
> > >   PetscSectionSetFieldDof(s, v, 0, 1);
> > > }
> > > PetscSectionSetUp(s);
> > > DMSetLocalSection(dm, s);
> > > PetscSectionDestroy(&s);
> > > DMGetGlobalSection(dm,&s); //update the global section
> >
> > We then try to write values into a local Vec for the on-process vertices
> > (roots and leaves in SF terms) and hit an ordering problem. Specifically,
> > we make the following sequence of calls:
> >
> > DMGetLocalVector(dm,&bloc);
> > VecGetArrayWrite(bloc, &bwrite);
> > //for loop to write values to bwrite
> > VecRestoreArrayWrite(bloc, &bwrite);
> > DMLocalToGlobal(dm,bloc,INSERT_VALUES,b);
> > DMRestoreLocalVector(dm,&bloc);
> >
> >
> > There is an easy way to get diagnostics here. For the local vector
> >
> > DMGetLocalSection(dm, &s);
> > PetscSectionGetOffset(s, v, &off);
> >
> > will give you the offset into the array you got from VecGetArrayWrite()
> > for that vertex. You can get this wrapped up using DMPlexPointLocalRef(),
> > which is what I tend to use for this type of stuff.
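> >
> > For example, an untested sketch of the fill loop using those offsets, where
> > value_for_vertex() stands in for whatever you want to store:
> >
> > DMGetLocalSection(dm, &s);
> > VecGetArrayWrite(bloc, &bwrite);
> > for (PetscInt v = vStart; v < vEnd; ++v) {
> >   PetscInt off;
> >   PetscSectionGetOffset(s, v, &off);
> >   bwrite[off] = value_for_vertex(v); // one dof per vertex in your section
> > }
> > VecRestoreArrayWrite(bloc, &bwrite);
> >
> > or, inside the same loop, with DMPlexPointLocalRef():
> >
> > PetscScalar *val;
> > DMPlexPointLocalRef(dm, v, bwrite, &val);
> > *val = value_for_vertex(v);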
> >
> > For the global vector
> >
> > DMGetGlobalSection(dm, &gs);
> > PetscSectionGetOffset(gs, v, &off);
> >
> > will give you the offset into the portion of the global array that is
> > stored in this process. If you do not own the values for this vertex,
> > the number is negative, and it is actually -(i+1) if the index i is the
> > valid one on the owning process.
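> >
> > For example (sketch):
> >
> > DMGetGlobalSection(dm, &gs);
> > PetscSectionGetOffset(gs, v, &off);
> > if (off >= 0) {
> >   // this process owns vertex v's dof; off is its offset as described above
> > } else {
> >   PetscInt ownerIdx = -(off + 1); // the valid index on the owning process
> > }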
> >
> > Visualizing Vec 'b' and the original mesh in ParaView tells us that the
> > dmplex topology and geometry (the vertex coordinates) are correct, but
> > that the order in which we write values is wrong (not total garbage...
> > but clearly shifted).
> >
> >
> > We do not make any guarantees that global orders match local orders.
> > However, by default we number up global unknowns in rank order, leaving
> > out the dofs that are not owned.
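> >
> > So, for example, the contiguous block of global indices owned by this rank
> > in the global Vec b can be checked with (sketch):
> >
> > PetscInt rStart, rEnd;
> > VecGetOwnershipRange(b, &rStart, &rEnd); // this rank owns global rows [rStart, rEnd)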
> >
> > Does this make sense?
> >
> > Thanks,
> >
> > Matt
> >
> > Is there anything obviously wrong in our described approach? I suspect
> > the section creation is wrong and/or we don't understand the order of
> > entries in the array returned by VecGetArrayWrite.
> >
> > Please let us know if other info is needed. We are happy to share the
> > relevant source code.
> >
> > Thank-you,
> > Cameron
> >
> >
> > On 8/25/20 8:34 AM, Cameron Smith wrote:
> > > On 8/24/20 4:57 PM, Matthew Knepley wrote:
> > >> On Mon, Aug 24, 2020 at 4:27 PM Jed Brown <jed at jedbrown.org> wrote:
> > >>
> > >> Cameron Smith <smithc11 at rpi.edu> writes:
> > >>
> > >> > We made some progress with star forest creation but still have work
> > >> > to do.
> > >> >
> > >> > We revisited DMPlexCreateFromCellListParallelPetsc(...) and got it
> > >> > working by sequentially partitioning the vertex coordinates across
> > >> > processes to satisfy the 'vertexCoords' argument. Specifically, rank 0
> > >> > has the coordinates for vertices with global id 0:N/P-1, rank 1 has
> > >> > N/P:2*(N/P)-1, and so on (N is the total number of global vertices and
> > >> > P is the number of processes).
> > >> >
> > >> > The consequences of the sequential partition of vertex coordinates in
> > >> > subsequent solver operations are not clear. Does it make process i
> > >> > responsible for computations and communications associated with global
> > >> > vertices i*(N/P):(i+1)*(N/P)-1? We assumed it does and wanted to
> > >> > confirm.
> > >>
> > >> Yeah, in the sense that the corners would be owned by the rank you
> > >> place them on.
> > >>
> > >> But many methods, especially high-order, perform assembly via
> > >> non-overlapping partition of elements, in which case the
> > >> "computations" happen where the elements are (with any required
> > >> vertex data for the closure of those elements being sent to the rank
> > >> handling the element).
> > >>
> > >> Note that a typical pattern would be to create a parallel DMPlex
> > >> with a naive distribution, then repartition/distribute it.
> > >>
> > >>
> > >> As Jed says, CreateParallel() just makes the most naive partition of
> > >> vertices because we have no other information. Once the mesh is made,
> > >> you call DMPlexDistribute() again to reduce the edge cut.
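> > >>
> > >> Something like this (sketch, zero overlap):
> > >>
> > >> DM dmDist = NULL;
> > >> DMPlexDistribute(dm, 0, NULL, &dmDist);
> > >> if (dmDist) { // NULL if no redistribution was done (e.g., one rank)
> > >>   DMDestroy(&dm);
> > >>   dm = dmDist;
> > >> }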
> > >>
> > >> Thanks,
> > >>
> > >> Matt
> > >>
> > >
> > >
> > > Thank you.
> > >
> > > This is being used for a PIC code with low-order 2d elements whose mesh
> > > is partitioned to minimize communications during particle operations.
> > > This partition will not be ideal for the field solve using petsc, so
> > > we're exploring alternatives that will require minimal data movement
> > > between the two partitions. Towards that, we'll keep pursuing the SF
> > > creation.
> > >
> > > -Cameron
> > >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their
> > experiments is infinitely more interesting than any results to which
> > their experiments lead.
> > -- Norbert Wiener
> >
> > https://www.cse.buffalo.edu/~knepley/
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/