[petsc-users] DMPlex with dof in cells AND at nodes
Thibault Bridel-Bertomeu
thibault.bridelbertomeu at gmail.com
Tue Jan 12 15:12:42 CST 2021
Good evening Matthew,
Thank you for your answer, and sorry for the delay in mine! I am trying to
figure things out, but when one gets into the distribution of a DMPlex, it
becomes quite complex!
Le lun. 11 janv. 2021 à 16:24, Matthew Knepley <knepley at gmail.com> a écrit :
> On Mon, Jan 11, 2021 at 6:41 AM Thibault Bridel-Bertomeu <
> thibault.bridelbertomeu at gmail.com> wrote:
>
>> Dear all,
>>
>> I haven't been on this mailing list since December, so first of all,
>> happy new year to everyone!
>>
>> I am facing another challenge with the DMPlex structure: although I am
>> doing cell-centered finite volume (using PetscFV), I also need to do some
>> manipulations and store some information that are vertex-based.
>> The problem appears when I run the code in parallel: when I visualize
>> the nodal data using a VTK viewer, it shows NaN at all the vertices in
>> close proximity to the MPI partition line.
>> Here is how I proceed to work on the vertex-based data :
>>
>> First comes the creation of the DM that I'll be using for the
>> cell-centered FVM
>>
>> call DMPlexCreateFromFile(comm, name, PETSC_TRUE, dm, ierr)
>> call DMSetBasicAdjacency(dm, PETSC_TRUE, PETSC_TRUE, ierr)
>> call DMPlexDistribute(dm, 1, PETSC_NULL_SF, dmDist, ierr)
>> ...
>> dm = dmDist
>> ...
>> call DMConstructGhostCells(dm, ..., ..., dmGhost, ierr)
>> ...
>> dm = dmGhost
>> ...
>>
>> I set both booleans to PETSC_TRUE in DMSetBasicAdjacency so that, even in MPI,
>> the neighbouring relationships are conserved for all strata. Is that true?
>> That is, does doing this actually make it possible to get the whole transitive
>> closure of a given vertex, even if part of it lives in another MPI block?
>>
>
> What you say should be true, but it is not. When I designed everything, I
> was very much stuck in the traditional mesh way of looking at things, and
> made some limiting decisions. You can still get what you want (almost), but
> you need to distribute the mesh with an overlap of 1. If you do, then you
> can get the transitive support of vertices.
>
I actually already distribute the mesh with an overlap of 1, but the result
is the same. Do I need to re-distribute the clone "dmNodal" somehow? Is
the cloning process maybe not preserving the MPI support of the different
strata?
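To be concrete, here is a minimal sketch of what I understand I should be able
to do on the distributed mesh to get the faces attached to a vertex (error
checks omitted; fstart/fend delimit the face stratum as in my code below, and I
assume the returned array interleaves points and orientations as in C):

PetscInt          :: ivertex, ip
PetscInt, pointer :: star(:)

! useCone = PETSC_FALSE gives the transitive support (star) of the point,
! i.e. the edges, faces and cells attached to the vertex ivertex
call DMPlexGetTransitiveClosure(dm, ivertex, PETSC_FALSE, star, ierr)
do ip = 1, size(star), 2   ! entries come as (point, orientation) pairs
  if ((star(ip) >= fstart) .and. (star(ip) < fend)) then
    ! star(ip) is a face attached to ivertex
  end if
end do
call DMPlexRestoreTransitiveClosure(dm, ivertex, PETSC_FALSE, star, ierr)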
>
> The problem is basically that many places in the code assume that the
> overlap always contains the closure of any point in it. It is difficult to
> switch this assumption to the star (transitive support), so I have
> not done that yet.
>
>
>> Then I need a vertex-based DM, so I figure it's pretty much what's done
>> in CreateMassMatrix in TS ex11.c.
>>
>> call DMClone(dm, dmNodal, ierr)
>> call DMGetCoordinateSection(dm, coordSection, ierr); CHKERRA(ierr)
>> call DMGetCoordinatesLocal(dm, coordinates, ierr); CHKERRA(ierr)
>> call DMSetCoordinateSection(dmNodal, PETSC_DETERMINE, coordSection, ierr);
>> call PetscSectionCreate(PETSC_COMM_WORLD, sectionNodal, ierr); CHKERRA(ierr)
>> call DMPlexGetDepthStratum(dm, 0, vstart, vend, ierr); CHKERRA(ierr)
>> call PetscSectionSetChart(sectionNodal, vstart, vend, ierr); CHKERRA(ierr)
>> do inode = vstart, vend-1
>> ! 3 dof per node, ndim = 3
>> call PetscSectionSetDof(sectionNodal, inode, ndim, ierr); CHKERRA(ierr)
>> end do
>> call PetscSectionSetUp(sectionNodal, ierr); CHKERRA(ierr)
>> call DMSetLocalSection(dmNodal, sectionNodal, ierr); CHKERRA(ierr)
>> call PetscSectionDestroy(sectionNodal, ierr); CHKERRA(ierr)
>>
>> From what I understood, this new dmNodal basically has all the
>> characteristics of the original dm, except its primary layer is vertex-based
>> with 3 DOF per vertex.
>>
>
> Take out the coordinate stuff above. This is now handled internally in a
> better way. Other than that, it looks good.
>
You mean the SetCoordinateSection call is already handled by the cloning
process?
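Just so I am sure I follow: if the coordinate calls go away, the construction
of dmNodal would reduce to something like this sketch (error checks omitted,
ndim = 3)?

call DMClone(dm, dmNodal, ierr)
call DMPlexGetDepthStratum(dm, 0, vstart, vend, ierr)
call PetscSectionCreate(PETSC_COMM_WORLD, sectionNodal, ierr)
call PetscSectionSetChart(sectionNodal, vstart, vend, ierr)
do inode = vstart, vend-1
  call PetscSectionSetDof(sectionNodal, inode, ndim, ierr)   ! ndim dof per vertex
end do
call PetscSectionSetUp(sectionNodal, ierr)
call DMSetLocalSection(dmNodal, sectionNodal, ierr)
call PetscSectionDestroy(sectionNodal, ierr)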
>
>
>> Then I get a local vector and work on it :
>>
>> call DMGetLocalVector(dmNodal, nodalvec, ierr)
>> call VecGetArrayF90(nodalvec, nodalvecarray, ierr)
>> call DMPlexGetDepthStratum(dm, 0, vstart, vend, ierr)
>> call DMPlexGetDepthStratum(dm, 2, fstart, fend, ierr)
>> ! For each node ...
>> do inode = vstart, vend-1
>> A = 0 ! matrix, ndim x ndim
>> b = 0 ! vector, ndim
>> call DMPlexGetTransitiveClosure(dmNodal, inode, PETSC_FALSE, vertexclosure, ierr);
>> ! For each face attached to that node
>> do iface = 1, size(vertexclosure)
>> if ((vertexclosure(iface) < fstart) .or. (vertexclosure(iface) >= fend)) cycle
>> A = A + ...
>> b = b + ...
>> end do
>> nodalvecarray(1+(inode-vstart)*ndim:3+(inode-vstart)*ndim) = matmul(A, b)
>> ...
>> end do
>>
>
> There are some subtleties here I think. You are looping over all vertices,
> but you only own some of them.
>
Here I am quite surprised by your remark: from the random prints I did,
DMPlexGetDepthStratum returns different vstart and vend on different MPI
processes, hence I assumed it was a local numbering. I also assumed
that, with the DMPlexDistribute, the "vertexclosure"
contained (among others) all the faces connected to each "inode", even if
that closure is spread among multiple MPI processes. So you mean that all
these assumptions are actually wrong to some degree?
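In case it clarifies what I mean, here is how I imagine I could check which
vertices this process actually owns (a sketch; I assume the global section of
dmNodal stores a negative offset for points owned by another rank):

PetscSection :: gsection
PetscInt     :: inode, goff, nowned

call DMGetGlobalSection(dmNodal, gsection, ierr)
nowned = 0
do inode = vstart, vend-1
  call PetscSectionGetOffset(gsection, inode, goff, ierr)
  ! a negative offset would mean the vertex sits in the overlap and is owned elsewhere
  if (goff >= 0) nowned = nowned + 1
end do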
> What do you do when putting these values in the global vector? I think
> there are at least three choices:
>
> 1) Everyone has everything, so you compute all the vertex stuff
> redundantly, and then use INSERT_VALUES for the scatter
>
> 2) Only vertices are redundant, so the transitive supports of vertices are
> partial but non-overlapping, and you use ADD_VALUES for the scatter
>
> 3) Only cells are redundant, so you do the complete vertex computation
> only for local vertices
>
I distributed with overlap = 1, so I expect to have a halo of redundant
cells in each MPI block (option no. 3). With the (..., PETSC_TRUE,
PETSC_TRUE, ...) adjacency setting used for the distribution, however, I also
expected a halo of vertices, a halo of edges and a halo of faces. Therefore,
completing the vertex computation only for the local vertices should have been
fine, since even the local vertices would know their complete set of attached
faces, but I am missing something because it does not turn out right...
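If I go with option 3, I suppose the vertex loop would become something like
this (a sketch, reusing the ownership test above and the same A, b assembly as
in my code):

do inode = vstart, vend-1
  call PetscSectionGetOffset(gsection, inode, goff, ierr)
  if (goff < 0) cycle   ! skip vertices owned by another process
  A = 0
  b = 0
  ! ... same loop over the faces attached to inode as before, the cell data
  !     coming from the redundant (overlap) cells where needed ...
  nodalvecarray(1+(inode-vstart)*ndim:3+(inode-vstart)*ndim) = matmul(A, b)
end do
! then scatter with INSERT_VALUES since each owned vertex is computed completely
call DMLocalToGlobal(dmNodal, nodalvec, INSERT_VALUES, globalnodalvec, ierr)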
>
>
>> Then I want to visualize it :
>>
>> call VecRestoreArrayF90(nodalvec, nodalvecarray, ierr)
>> call DMCreateGlobalVector(dmNodal, globalnodalvec, ierr)
>> call DMLocalToGlobal(dmNodal, nodalvec, INSERT_VALUES, globalnodalvec, ierr)
>> call PetscViewerCreate(PETSC_COMM_WORLD, vtkViewer, ierr); CHKERRA(ierr)
>> call PetscViewerSetType(vtkViewer, PETSCVIEWERVTK, ierr); CHKERRA(ierr)
>> call PetscViewerFileSetName(vtkViewer, "toto.vtk", ierr); CHKERRA(ierr)
>> call VecView(globalnodalvec, vtkViewer, ierr); CHKERRA(ierr)
>> call PetscViewerDestroy(vtkViewer, ierr); CHKERRA(ierr)
>>
>> In the toto.vtk file, I get vertex-based values as expected, and the
>> whole shebang works in sequential. Now when I run it in parallel, I get
>> vertex-based values inside the MPI blocks, but in the neighbourhood of the
>> join between the MPI blocks, all the vertex-based values are NaN (see
>> attachment).
>>
>
> If you have redundant cells, meaning you distributed with overlap = 1, did
> you remember to fill in the values for those ghost cells using
> DMGlobalToLocal() or something like that? If not, they will start as NaNs.
>
Indeed, in the vertex computation, at some point I use the values in the
cells. But I get the solution vector from TSGetSolution, then I do a
GlobalToLocal, and I even add an InsertBoundaryValues to fill the ghost
cells correctly.
Even with all this, when I print the nodalvecarray on each process, it
somehow contains some NaNs. This means, I think, that at some point a given
vertex tries to get the information of a face that is supposed to be in its
closure but actually is not (?).
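For completeness, this is roughly the sequence I use to fill the cell data
before the vertex loop (a sketch; ts, time and the FVM geometry vectors
facegeom/cellgeom are names from my code, error checks omitted):

call TSGetSolution(ts, X, ierr)
call DMGetLocalVector(dm, Xloc, ierr)
call DMGlobalToLocalBegin(dm, X, INSERT_VALUES, Xloc, ierr)
call DMGlobalToLocalEnd(dm, X, INSERT_VALUES, Xloc, ierr)
! insert the boundary values into the ghost cells of the local vector
call DMPlexInsertBoundaryValues(dm, PETSC_TRUE, Xloc, time, facegeom, cellgeom, PETSC_NULL_VEC, ierr)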
Thank you for your help and feedback!
Cheers,
Thibault
>
> Thanks,
>
> Matt
>
>
>> All in all, I think I do not fully understand the consequences of
>> DMPlexDistribute with useCone=true & useClosure=true ... but I cannot
>> figure out where the NaNs are coming from: is it because the vertices do
>> not actually know their whole closure across the MPI join, or is it some
>> Global / Local error I made?
>>
>> Thank you very much in advance for your support !!
>>
>> Thibault
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
>