[petsc-users] What is the best way to do domain decomposition with petsc?

Matthew Knepley knepley at gmail.com
Thu Jul 11 09:18:12 CDT 2019


On Thu, Jul 11, 2019 at 9:06 AM Dongyu Liu - CITG via petsc-users <
petsc-users at mcs.anl.gov> wrote:

> Hi,
>
>
> We want to incorporate PETSc into our in-house FEM package. We have found
> two ways to do the domain decomposition.
>
> The first one is to read a pre-partitioned mesh, where the partitioning
> is done with Gmsh. For this one, we need to do the index mapping
> (renumbering). We found that mesh nodes cannot be shared between
> processes (is that true?).
>
The parallel numbering of unknowns for PETSc (Vec and Mat) needs a
contiguous, non-overlapping set of indices for each process, so if "nodes"
and "unknowns" are the same here, then no, nodes cannot be shared. Plex
treats these two things independently.
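[Editor's note: a minimal sketch of the ownership layout described above, added for illustration. It assumes a recent PETSc (the PetscCall macros) and is compiled with mpicc linked against libpetsc; the global size of 100 is arbitrary.]

```c
/* Sketch: how PETSc assigns each process a contiguous, non-overlapping
 * block of the global indices of a Vec. Assumes PETSc is installed. */
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec      x;
  PetscInt rstart, rend, nlocal;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* Let PETSc pick the local sizes; each rank gets a contiguous block. */
  PetscCall(VecCreateMPI(PETSC_COMM_WORLD, PETSC_DECIDE, 100, &x));
  PetscCall(VecGetOwnershipRange(x, &rstart, &rend));
  PetscCall(VecGetLocalSize(x, &nlocal));
  /* On every rank rend - rstart == nlocal, and the [rstart, rend)
   * ranges of all ranks tile [0, 100) with no overlap: no index
   * (hence no unknown) is owned by two processes. */
  PetscCall(PetscPrintf(PETSC_COMM_SELF,
            "this rank owns [%" PetscInt_FMT ", %" PetscInt_FMT ")\n",
            rstart, rend));
  PetscCall(VecDestroy(&x));
  PetscCall(PetscFinalize());
  return 0;
}
```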

> Then we need to provide the ghosted points information to the PETSc which
> can be obtained from the gmsh file. To implement this method, we need to
> provide the local and global mapping like AO in PETSc.
>
You do need to provide an L2G mapping in order to give PETSc ghost
information. I do not understand how this is related to AO, which is a
global renumbering.
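[Editor's note: a minimal sketch, added for illustration, of providing that L2G mapping together with a ghosted Vec. The layout is made up (each rank owns 4 unknowns and ghosts the first unknown of the next rank); in practice these arrays would come from the partitioned Gmsh mesh. Assumes a recent PETSc.]

```c
/* Sketch: a local-to-global mapping plus a ghosted Vec. The index
 * layout here is a hypothetical placeholder. Assumes PETSc. */
#include <petscvec.h>

int main(int argc, char **argv)
{
  PetscMPIInt            rank, size;
  PetscInt               nowned = 4, i, ghosts[1], l2g[5];
  ISLocalToGlobalMapping map;
  Vec                    v;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &size));
  /* Hypothetical layout: each rank owns 4 contiguous unknowns and
   * ghosts the first unknown of the next rank. */
  for (i = 0; i < nowned; i++) l2g[i] = 4 * rank + i;
  ghosts[0] = l2g[nowned] = 4 * ((rank + 1) % size);
  /* The L2G mapping: local indices (owned entries first, then the
   * ghosts) mapped into the global numbering. */
  PetscCall(ISLocalToGlobalMappingCreate(PETSC_COMM_WORLD, 1, nowned + 1,
                                         l2g, PETSC_COPY_VALUES, &map));
  /* A ghosted Vec: storage for owned values plus local ghost copies,
   * refreshed with VecGhostUpdateBegin/End after assembly. */
  PetscCall(VecCreateGhost(PETSC_COMM_WORLD, nowned, PETSC_DECIDE,
                           1, ghosts, &v));
  PetscCall(VecSetLocalToGlobalMapping(v, map));
  PetscCall(ISLocalToGlobalMappingDestroy(&map));
  PetscCall(VecDestroy(&v));
  PetscCall(PetscFinalize());
  return 0;
}
```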

> We also need to use it with our own mesh data structure.
>
> The second way is to use DMPlex to read the mesh and do the partitioning.
> For this, an interface should be provided to link the DMPlex mesh data
> structure to our own mesh object, to avoid changing too many places in
> our codes.
>
Yes. I think Gmsh just calls a common partitioner (Metis?), so you should be
able to replicate it. Also, I think you can use a DMLabel to link the Plex
object to your own mesh, meaning you can associate some integer with any
part of the mesh you want.
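[Editor's note: a sketch of this second route, added for illustration. The filename "mesh.msh", the label name "myMeshId", and the example point/value are placeholders; the DMPlexCreateFromFile signature shown (with a plex-name argument) is that of recent PETSc releases.]

```c
/* Sketch: read a Gmsh mesh with DMPlex, distribute it, and attach a
 * DMLabel to correlate Plex points with an in-house mesh. */
#include <petscdmplex.h>

int main(int argc, char **argv)
{
  DM      dm, dmDist = NULL;
  DMLabel label;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* Read the Gmsh file; PETSC_TRUE asks Plex to interpolate, i.e.
   * also create faces and edges, not just cells and vertices. */
  PetscCall(DMPlexCreateFromFile(PETSC_COMM_WORLD, "mesh.msh",
                                 "inhouse", PETSC_TRUE, &dm));
  /* Partition and distribute with whichever partitioner PETSc was
   * configured with (e.g. ParMetis, PTScotch). */
  PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist));
  if (dmDist) {
    PetscCall(DMDestroy(&dm));
    dm = dmDist;
  }
  /* A DMLabel attaches an arbitrary integer (e.g. your own element
   * id) to any point of the Plex. */
  PetscCall(DMCreateLabel(dm, "myMeshId"));
  PetscCall(DMGetLabel(dm, "myMeshId", &label));
  PetscCall(DMLabelSetValue(label, /* Plex point */ 0, /* your id */ 123));
  PetscCall(DMDestroy(&dm));
  PetscCall(PetscFinalize());
  return 0;
}
```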

> I am just wondering which way is better, or do you have any other
> suggestion?
>
If you plan on doing a lot of mesh manipulation by hand and want to control
everything, the first option might be better.
On the other hand, if you use Plex, you could potentially take advantage of
parallel loading and output, redistribution
and load balancing, adaptive refinement, and interfacing with the
multilevel and block solvers in PETSc.

  Thanks,

     Matt

> Best,
>
> Dongyu
>


-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/

