[petsc-users] Nodes coordinates in distributed dmplex
Alejandro D Otero
aotero at fi.uba.ar
Fri Jul 10 06:48:05 CDT 2015
Hi Matt, thanks for your answer.
I got a vector from getCoordinates(). How are its components indexed? Is it
(p * dim + d), with p the node, dim the dimension of the problem, and d the
index of coordinate x_d? And which numbering is used for p: the local node
number, the point number in the DAG of the dm, or the original node number?
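For concreteness, this is the blocked layout I mean, as a toy sketch in plain
Python (not petsc4py; the index = p * dim + d layout is my assumption here):

```python
# Toy model of the blocked coordinate layout assumed above:
# component d of vertex p lives at flat index p*dim + d.
dim = 2
# Coordinates of 3 vertices: (0,0), (1,0), (0,1), stored flat.
coords = [0.0, 0.0, 1.0, 0.0, 0.0, 1.0]

def vertex_coords(p):
    """Return the dim components of vertex p from the flat array."""
    return coords[p * dim:(p + 1) * dim]

print(vertex_coords(2))  # [0.0, 1.0]
```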
I am trying a simple square mesh with 16 4-node square elements, partitioned
across 2 processes, for a total of 25 nodes.
The distributed dm seems alright to me. Each process gets a dm with 8
elements and 15 nodes, which means that the 5 shared nodes are local to each
process. But one of the processes gives negative values for the shared nodes.
How do they need to be mapped to get the right numbering?
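(If those negative values follow PETSc's usual convention for non-owned
points in a global numbering, a value v < 0 encodes the global index
-(v + 1). That convention is an assumption on my part; a minimal decoding
sketch under it:)

```python
def decode_point(v):
    """Decode a point index from a PETSc-style global numbering.
    Non-negative values are owned points; a negative value v is
    assumed to encode the global index -(v + 1) of a non-owned point."""
    return v if v >= 0 else -(v + 1)

# Example: a shared node reported as -13 would decode to global index 12.
print(decode_point(-13))  # 12
print(decode_point(7))    # 7
```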
It seems I'm using a wrong approach to this. Maybe I need to get the
coordinates in a somewhat different way. I'm working on a from-scratch
implementation of a FEM code based on petsc4py. I want to code the assembly
of the problem matrices from elemental matrices. I've already done this
sequentially, but I got stuck when trying to compute elemental matrices in
parallel, because I don't understand how to obtain the coordinates of the
nodes for each element.
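What I am after is essentially this gather step, sketched here with plain
Python and an explicit cell-to-vertex connectivity list (in petsc4py the
closure of each cell would supply that connectivity; the blocked layout
index = p * dim + d is again my assumption):

```python
# Toy sketch of gathering per-element vertex coordinates for assembly.
dim = 2
# 4 vertices of a unit square, stored flat as [x0, y0, x1, y1, ...].
coords = [0.0, 0.0, 1.0, 0.0, 1.0, 1.0, 0.0, 1.0]
cells = [[0, 1, 2, 3]]  # one 4-node quad element

def element_coords(cell):
    """Return [[x, y], ...] for every vertex of the given cell."""
    return [coords[p * dim:(p + 1) * dim] for p in cell]

print(element_coords(cells[0]))
# [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
```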
Again, thanks for your help,
On Thu, Jul 9, 2015 at 5:18 PM, Matthew Knepley <knepley at gmail.com> wrote:
> On Thu, Jul 9, 2015 at 7:42 AM, Alejandro D Otero <aotero at fi.uba.ar>
>> Hi, sorry if this is an obvious question, but I cannot figure out how to
>> recover finite element node coordinates once I have distributed a mesh
>> stored as a dmplex. I am using petsc4py as an interface to petsc routines.
>> I first created a dmplex using:
>> In a sequential run I got the coordinates with:
>> Coords = dm.getCoordinates()
>> which gave a sequential vector with the coordinates of the mesh nodes.
>> When I distribute the mesh with:
>> each mpi process has its own dm, but the indexing of the vector resulting
>> from getCoordinates() or getCoordinatesLocal() does not seem consistent with
>> the local numbering of the cells and nodes.
> When the mesh is distributed, the vertices are renumbered. Thus the
> coordinates you get out are
> for reordered local vertices, but they are consistent with the local
> topology (cells still contain the
> right vertices) and the overlap mapping (SF still connects the shared
> vertices).
> What do you need it to do?
>> Which is the correct way of doing this in PETSc philosophy?
>> Thanks in advance,
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener