[petsc-users] Domain decomposition using DMPLEX
Matthew Knepley
knepley at gmail.com
Mon Nov 25 21:54:44 CST 2019
On Mon, Nov 25, 2019 at 6:25 PM Swarnava Ghosh <swarnava89 at gmail.com> wrote:
> Dear PETSc users and developers,
>
> I am working with dmplex to distribute a 3D unstructured mesh made of
> tetrahedrons in a cuboidal domain. I had a few queries:
> 1) Is there any way of ensuring load balancing based on the number of
> vertices per MPI process?
>
You can now call DMPlexRebalanceSharedPoints() to try to get a better
balance of vertices.
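
If it helps, here is a minimal sketch of the call (untested; this assumes the
PETSc 3.12 interface, and I believe it needs a ParMETIS-enabled build), where
dm is your distributed mesh:

  PetscBool      success;
  PetscErrorCode ierr;

  /* entityDepth = 0 targets the depth-0 points, i.e. the vertices; the two
     flags take the current assignment as the initial guess and run the
     rebalancing in parallel */
  ierr = DMPlexRebalanceSharedPoints(dm, 0, PETSC_TRUE, PETSC_TRUE, &success);CHKERRQ(ierr);
  if (!success) {ierr = PetscPrintf(PETSC_COMM_WORLD, "Rebalancing did not change the distribution\n");CHKERRQ(ierr);}

Check the man page for your version, since the arguments may differ.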
> 2) As the global domain is cuboidal, is the resulting domain decomposition
> also cuboidal on every MPI process? If not, is there a way to ensure this?
> For example in DMDA, the default domain decomposition for a cuboidal domain
> is cuboidal.
>
It sounds like you do not want something that is actually unstructured.
Rather, it seems like you want to take a DMDA-type mesh and split it into
tets. You can get a cuboidal decomposition of a hex mesh easily: call
DMPlexCreateBoxMesh() with one cell for every process, distribute, and then
uniformly refine (see the sketch below). This will not quite work for tets,
since the mesh partitioner will tend to violate that constraint. You could:
  a) prescribe the distribution yourself using the Shell partitioner type, or
  b) write a refiner that turns hexes into tets.
We already have a refiner that turns tets into hexes, but we never wrote the
other direction because it was not clear that it was useful.
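
A rough sketch of that recipe (untested; the DMPlexCreateBoxMesh() arguments
have shifted between releases, so check the man page for your version, this
is roughly the 3.12 interface):

  #include <petscdmplex.h>

  int main(int argc, char **argv)
  {
    DM             dm, dmDist = NULL, dmRef = NULL;
    PetscInt       faces[3];
    PetscMPIInt    size;
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    ierr = MPI_Comm_size(PETSC_COMM_WORLD, &size);CHKERRQ(ierr);
    /* as many hex cells as ranks; pick a split that matches your process grid */
    faces[0] = size; faces[1] = 1; faces[2] = 1;
    ierr = DMPlexCreateBoxMesh(PETSC_COMM_WORLD, 3, PETSC_FALSE /* hexes */, faces,
                               NULL, NULL, NULL, PETSC_TRUE, &dm);CHKERRQ(ierr);
    /* with one cell per rank, the partitioner normally hands each process one cuboid */
    ierr = DMPlexDistribute(dm, 0, NULL, &dmDist);CHKERRQ(ierr);
    if (dmDist) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmDist;}
    /* regular refinement stays inside each process's cuboid; repeat for more levels */
    ierr = DMRefine(dm, PETSC_COMM_WORLD, &dmRef);CHKERRQ(ierr);
    if (dmRef) {ierr = DMDestroy(&dm);CHKERRQ(ierr); dm = dmRef;}
    ierr = DMDestroy(&dm);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return ierr;
  }

If you then want tets, option b) above is where you would plug in.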
Thanks,
Matt
> Sincerely,
> SG
>
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener
https://www.cse.buffalo.edu/~knepley/