[petsc-users] creation of parallel dmplex from a partitioned mesh

Cameron Smith smithc11 at rpi.edu
Tue Aug 25 07:34:45 CDT 2020


On 8/24/20 4:57 PM, Matthew Knepley wrote:
> On Mon, Aug 24, 2020 at 4:27 PM Jed Brown <jed at jedbrown.org> wrote:
> 
>     Cameron Smith <smithc11 at rpi.edu> writes:
> 
>      > We made some progress with star forest creation but still have
>      > work to do.
>      >
>      > We revisited DMPlexCreateFromCellListParallelPetsc(...) and got it
>      > working by sequentially partitioning the vertex coordinates across
>      > processes to satisfy the 'vertexCoords' argument. Specifically,
>      > rank 0 has the coordinates for vertices with global id 0:N/P-1,
>      > rank 1 has N/P:2*(N/P)-1, and so on (N is the total number of
>      > global vertices and P is the number of processes).
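
For concreteness, a minimal sketch of that block layout (the remainder
handling on the last rank, the variable names, and the value of N are our
own additions, just for illustration):

   /* Sketch: the contiguous block of global vertex ids whose coordinates
      each rank provides.  Rank r holds ids [r*(N/P), (r+1)*(N/P)); as an
      added assumption, the last rank also takes the remainder when P does
      not divide N. */
   #include <petscsys.h>

   int main(int argc, char **argv)
   {
     PetscErrorCode ierr;
     PetscMPIInt    rank, size;
     PetscInt       N = 1000;           /* total number of global vertices (example value) */
     PetscInt       chunk, first, last; /* local block of global vertex ids */

     ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
     ierr = MPI_Comm_rank(PETSC_COMM_WORLD, &rank);CHKERRQ(ierr);
     ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);

     chunk = N / size;
     first = rank * chunk;
     last  = (rank == size - 1) ? N : first + chunk; /* exclusive upper bound */

     /* The coordinates for vertices [first, last) are what this rank would
        pack into the 'vertexCoords' array given to
        DMPlexCreateFromCellListParallelPetsc(), with numVertices = last - first. */
     ierr = PetscPrintf(PETSC_COMM_SELF, "[%d] owns vertices %D..%D\n", rank, first, last - 1);CHKERRQ(ierr);
     ierr = PetscFinalize();
     return ierr;
   }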
>      >
>      > The consequences of the sequential partitioning of vertex
>      > coordinates for subsequent solver operations are not clear.  Does it
>      > make process i responsible for computations and communications
>      > associated with global vertices i*(N/P):(i+1)*(N/P)-1?  We assumed
>      > it does and wanted to confirm.
> 
>     Yeah, in the sense that the corners would be owned by the rank you
>     place them on.
> 
>     But many methods, especially high-order ones, perform assembly via a
>     non-overlapping partition of elements, in which case the
>     "computations" happen where the elements are (with any required
>     vertex data for the closure of those elements being sent to the rank
>     handling the element).
> 
>     Note that a typical pattern would be to create a parallel DMPlex
>     with a naive distribution, then repartition/distribute it.
> 
> 
> As Jed says, CreateParallel() just makes the most naive partition of 
> vertices because we have no other information. Once
> the mesh is made, you call DMPlexDistribute() again to reduce the edge cut.
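
To be sure we follow, a minimal sketch of that create-then-redistribute
pattern as we understand it; the ParMETIS partitioner choice is only
illustrative (any PetscPartitioner could be set), and 'dm' stands for the
DMPlex returned by the parallel create call:

   #include <petscdmplex.h>

   /* Sketch: repartition a naively distributed DMPlex to reduce the edge cut. */
   static PetscErrorCode RedistributeMesh(DM *dm)
   {
     DM               dmDist      = NULL;
     PetscSF          migrationSF = NULL;
     PetscPartitioner part;
     PetscErrorCode   ierr;

     PetscFunctionBeginUser;
     /* Optionally pick a graph partitioner; ParMETIS here, if PETSc was built with it. */
     ierr = DMPlexGetPartitioner(*dm, &part);CHKERRQ(ierr);
     ierr = PetscPartitionerSetType(part, PETSCPARTITIONERPARMETIS);CHKERRQ(ierr);
     /* Redistribute with zero overlap; migrationSF records where points moved. */
     ierr = DMPlexDistribute(*dm, 0, &migrationSF, &dmDist);CHKERRQ(ierr);
     if (dmDist) { /* NULL if no redistribution happened, e.g. on one process */
       ierr = DMDestroy(dm);CHKERRQ(ierr);
       *dm = dmDist;
     }
     ierr = PetscSFDestroy(&migrationSF);CHKERRQ(ierr);
     PetscFunctionReturn(0);
   }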
> 
>    Thanks,
> 
>       Matt
> 


Thank you.

This is being used for a PIC code with low-order 2D elements whose mesh is
partitioned to minimize communication during particle operations.  This
partition will not be ideal for the field solve using PETSc, so we're
exploring alternatives that require minimal data movement between the two
partitions.  Toward that end, we'll keep pursuing the SF creation.
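
For reference, a rough sketch of the SF construction we have in mind,
assuming each process already knows, for every vertex it needs locally
(leaf), the owning rank and owner-local index (root) in the other
partition; the function and argument names below are our own:

   #include <petscsf.h>

   /* Sketch: build a star forest whose leaves are the vertices this process
      needs in one partition and whose roots are the owning copies in the
      other partition.  nroots, nleaves, and the (owner rank, owner-local
      index) pairs are assumed to come from the application's partition data. */
   static PetscErrorCode BuildVertexSF(MPI_Comm comm, PetscInt nroots, PetscInt nleaves,
                                       const PetscMPIInt ownerRank[], const PetscInt ownerIndex[],
                                       PetscSF *sf)
   {
     PetscSFNode    *remote;
     PetscInt        i;
     PetscErrorCode  ierr;

     PetscFunctionBeginUser;
     ierr = PetscSFCreate(comm, sf);CHKERRQ(ierr);
     ierr = PetscMalloc1(nleaves, &remote);CHKERRQ(ierr);
     for (i = 0; i < nleaves; ++i) {
       remote[i].rank  = ownerRank[i];  /* process that owns the root vertex */
       remote[i].index = ownerIndex[i]; /* root's local index on that process */
     }
     /* ilocal = NULL: leaves are numbered 0..nleaves-1 on this process. */
     ierr = PetscSFSetGraph(*sf, nroots, nleaves, NULL, PETSC_OWN_POINTER, remote, PETSC_OWN_POINTER);CHKERRQ(ierr);
     ierr = PetscSFSetUp(*sf);CHKERRQ(ierr);
     PetscFunctionReturn(0);
   }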

-Cameron


