<div dir="ltr">Hi everyone, <div><br></div><div>I have continued working on my problem, and I have made some progress, although it is not yet working in parallel as well as I would like. </div><div>I am having trouble with the face normals: it appears that, for a given mesh, the normals of the faces depend on the partitioning, especially for those faces that lie on the boundary between two MPI-process domains. If I read my printouts correctly, those faces have normals pointing out of the local MPI-process domain, no matter what their normal was in the sequential version.</div><div><br></div><div>My problem is that my algorithms rely heavily on those normals, and they must have consistent orientations. How can I recover, for a given face on a given MPI process, the normal it would have had in the sequential run?</div><div><br></div><div>Thank you very much in advance!</div><div><br clear="all"><div><div dir="ltr" class="gmail_signature" data-smartmail="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div><div><div><div><div>Thibault</div></div></div></div></div></div></div></div></div></div></div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jan 13, 2021 at 18:36, Thibault Bridel-Bertomeu <<a href="mailto:thibault.bridelbertomeu@gmail.com">thibault.bridelbertomeu@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">Hello Matt, <div><br></div><div>Thank you for the follow-up.</div><div>Here are the printouts for the two DMs, as you suggested (the smallest tetrahedral mesh possible in a cube).</div><div><br></div><div>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures">:: [DEBUG] Visualizing DM in console ::<span> </span></span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures">DM Object: 2 MPI processes</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>type: plex</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures">DM_0x7faa62d416f0_1 in 3 dimensions:</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>0-cells: 14 14</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>1-cells: 49 49</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>2-cells: 60 60</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>3-cells: 36 (12) [12] 36 (12) [12]</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures">Labels:</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>depth: 4 strata with value/size (0 (14), 1 (49), 2 (60), 3 (36))</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>celltype: 5 strata with value/size (0 (14), 1 (49), 3 (60), 6 (24), 12 (12))</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>Cell Sets: 1 strata with value/size (7 (24))</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>Face Sets: 6 strata with value/size (1 (4), 2 (4), 3 (4), 4 (4), 5 (4), 6 (4))</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>vtk: 1 strata with value/size (1 (12))</span></p>
<p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>ghost: 2 strata with value/size (2 (12), 1 (27))</span></p><p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><br></span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures">:: [DEBUG] Visualizing DM_nodal in console ::<span> </span></span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures">DM Object: 2 MPI processes</span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>type: plex</span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures">DM_0x7faa62d416f0_3 in 3 dimensions:</span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>0-cells: 14 14</span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>1-cells: 49 49</span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>2-cells: 60 60</span></p><p style="margin:0px;font:11px 
Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>3-cells: 36 (12) [12] 36 (12) [12]</span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures">Labels:</span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>depth: 4 strata with value/size (0 (14), 1 (49), 2 (60), 3 (36))</span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>celltype: 5 strata with value/size (0 (14), 1 (49), 3 (60), 6 (24), 12 (12))</span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>Cell Sets: 1 strata with value/size (7 (24))</span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>Face Sets: 6 strata with value/size (1 (4), 2 (4), 3 (4), 4 (4), 5 (4), 6 (4))</span></p><p style="margin:0px;font:11px Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>vtk: 1 strata with value/size (1 (12))</span></p><p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures">
</span></p><p style="margin:0px;font-variant-numeric:normal;font-variant-east-asian:normal;font-stretch:normal;font-size:11px;line-height:normal;font-family:Menlo;color:rgb(0,0,0);background-color:rgb(252,238,207)"><span style="font-variant-ligatures:no-common-ligatures"><span> </span>ghost: 2 strata with value/size (2 (12), 1 (27))</span></p></div><div><br clear="all"><div><div dir="ltr"><div dir="ltr"><div dir="ltr"><div dir="ltr"><div><div><div>As you predicted, the local cell counts sum to more than the total number of cells.</div><div><br></div><div>I did some more digging, and it appears that I have issues because the normals of some faces attached to the vertices are not populated.</div><div>In other words, "call DMPlexGetGeometryFVM(dm, facegeom, cellgeom, minradius, ierr); CHKERRA(ierr)" yields a facegeom Vec that is a local Vec. For the meshes described above, that Vec has a global size of 720 (12 * 60 faces), and also a local size of 720 (12 * 60) because (I think?) the mesh is so small that, with the chosen distribution, both processes know of all the entities (vertices, edges, faces & cells).</div><div>However, on, say, MPI rank 1, many places in this Vec are filled with zeros. As I use the normal normalized by the face area, I get NaNs because I am dividing by zero.</div><div>Now, the question is: a vertex on a given MPI process knows that it has, say, 6 faces attached, and I can get that number along with the indices of those faces with "call DMPlexGetTransitiveClosure(dmNodal, inode, PETSC_FALSE, vertexclosure, ierr); CHKERRA(ierr)", but when I then fetch the face characteristics in the facegeom Vec, some are not populated (full of zeros). 
Is this the consequence of what you were explaining, "only the point closure is guaranteed when distributing"?</div><div>Or is there a way to do some equivalent of GlobalToLocal for that facegeom Vec?</div><div><br></div><div>Thank you very much for your support,</div><div><br></div><div>Thibault</div></div></div></div></div></div></div></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jan 13, 2021 at 15:00, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Tue, Jan 12, 2021 at 4:12 PM Thibault Bridel-Bertomeu <<a href="mailto:thibault.bridelbertomeu@gmail.com" target="_blank">thibault.bridelbertomeu@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div>Good evening Matthew, </div><div><br></div><div>Thank you for your answer, and sorry for the delay in mine! I am trying to figure things out, but once one gets into the distribution of a DMPlex, it becomes quite complex!</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Mon, Jan 11, 2021 at 16:24, Matthew Knepley <<a href="mailto:knepley@gmail.com" target="_blank">knepley@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr">On Mon, Jan 11, 2021 at 6:41 AM Thibault Bridel-Bertomeu <<a href="mailto:thibault.bridelbertomeu@gmail.com" target="_blank">thibault.bridelbertomeu@gmail.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Dear all, <div><br></div><div>I haven't been on this mailing list since December, so first of all, happy new year to everyone!</div><div><br></div><div>I am facing another challenge with the DMPlex structure: although I am doing cell-centered finite volume (using PetscFV), I also need to do some manipulations and store information that is vertex-based. </div><div>The problem I have is when I run the code in parallel: when I visualize the nodal data using a vtk viewer, it shows NaN at all the vertices in close proximity to the MPI partition line.</div><div>Here is how I proceed to work on the vertex-based data:</div><div><br></div><div>First comes the creation of the DM that I'll be using for the cell-centered FVM:</div><div><br></div><div><font face="tahoma, sans-serif">call DMPlexCreateFromFile(comm, name, PETSC_TRUE, dm, ierr)</font></div><div><font face="tahoma, sans-serif">call DMSetBasicAdjacency(dm, PETSC_TRUE, PETSC_TRUE, ierr)</font></div><div><font face="tahoma, sans-serif">call DMPlexDistribute(dm, 1, PETSC_NULL_SF, dmDist, ierr)</font></div><div><font face="tahoma, sans-serif">...</font></div><div><font face="tahoma, sans-serif">dm = dmDist</font></div><div><font face="tahoma, sans-serif">...</font></div><div><font face="tahoma, sans-serif">call DMPlexConstructGhostCells(dm, ..., ..., dmGhost, ierr)</font></div><div><font face="tahoma, 
sans-serif">...</font></div><div><font face="tahoma, sans-serif">dm = dmGhost</font></div><div><font face="tahoma, sans-serif">...</font></div><div><br></div><div>I set both booleans to PETSC_TRUE in SetBasicAdjacency so that, even in MPI, the neighbouring relationships are preserved for all strata. Is that true? That is, does doing this actually make it possible to get the whole transitive closure of a given vertex, even if part of it is in another MPI block? </div></div></blockquote><div><br></div><div>What you say should be true, but it is not. When I designed everything, I was very much stuck in the traditional mesh way of looking at things, and made some limiting decisions. You can still get what you want (almost), but you need to distribute the mesh with an overlap of 1. If you do, then you can get the transitive support of vertices.</div></div></div></blockquote><div><br></div><div>I actually already distribute the mesh with an overlap of 1, but the result is the same. Do I need to re-distribute the clone "dmNodal" somehow? Is the cloning process maybe not preserving the MPI support of the different strata?</div></div></div></blockquote><div><br></div><div>No. However, it is easy to check what you have using -dm_view. I would encourage you to put</div><div><br></div><div> DMViewFromOptions(dm, NULL, "-dm_view");</div><div> DMViewFromOptions(dmNodal, NULL, "-nodal_dm_view");</div><div><br></div><div>so that you can give those options and send the output (for a very simple mesh) with the emails. 
What you should</div><div>see is that, for both meshes, the number of local cells adds up to more than the number of global cells.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>The problem is basically that many places in the code assume that the overlap always contains the closure of any point in it. It is difficult to switch this assumption to the star (transitive support), so I have not done that yet. </div></div></div></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Then I need a vertex-based DM, so I figure it's pretty much what's done in CreateMassMatrix in TS ex11.c.</div><div><br></div><div><font face="tahoma, sans-serif">call DMClone(dm, dmNodal, ierr)</font></div><div><font face="tahoma, sans-serif">call DMGetCoordinateSection(dm, coordSection, ierr); CHKERRA(ierr)<br>call DMGetCoordinatesLocal(dm, coordinates, ierr); CHKERRA(ierr)<br>call DMSetCoordinateSection(dmNodal, PETSC_DETERMINE, coordSection, ierr);<br>call PetscSectionCreate(PETSC_COMM_WORLD, sectionNodal, ierr); CHKERRA(ierr)<br>call DMPlexGetDepthStratum(dm, 0, vstart, vend, ierr); CHKERRA(ierr)<br>call PetscSectionSetChart(sectionNodal, vstart, vend, ierr); CHKERRA(ierr)<br>do inode = vstart, vend-1<br> ! 
3 dof per node, ndim = 3<br>call PetscSectionSetDof(sectionNodal, inode, ndim, ierr); CHKERRA(ierr)<br>end do<br>call PetscSectionSetUp(sectionNodal, ierr); CHKERRA(ierr)<br>call DMSetLocalSection(dmNodal, sectionNodal, ierr); CHKERRA(ierr)<br>call PetscSectionDestroy(sectionNodal, ierr); CHKERRA(ierr)</font><br></div><div><br></div><div>From what I understood, this new dmNodal basically has all the characteristics of the original dm, except that its primary stratum is vertex-based with 3 DOF per vertex.</div></div></blockquote><div><br></div><div>Take out the coordinate stuff above. This is now handled internally in a better way. Other than that, it looks good.</div></div></div></blockquote><div><br></div><div>You mean the SetCoordinateSection is already handled by the cloning process? </div></div></div></blockquote><div><br></div><div>Yes.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Then I get a local vector and work on it:</div><div><br></div><div><font face="tahoma, sans-serif">call DMGetLocalVector(dmNodal, nodalvec, ierr)</font></div><div><font face="tahoma, sans-serif">call VecGetArrayF90(nodalvec, nodalvecarray, ierr)</font></div><div><font face="tahoma, sans-serif">call DMPlexGetDepthStratum(dm, 0, vstart, vend, ierr)</font></div><div><font face="tahoma, sans-serif">call DMPlexGetDepthStratum(dm, 2, fstart, fend, ierr)<br></font></div><div><font face="tahoma, sans-serif">! For each node ...</font></div><div><font face="tahoma, sans-serif">do inode = vstart, vend-1</font></div><div><font face="tahoma, sans-serif">A = 0 ! 
matrix, ndim x ndim</font></div><div><font face="tahoma, sans-serif">b = 0 ! vector, ndim</font></div><div><font face="tahoma, sans-serif">call DMPlexGetTransitiveClosure(dmNodal, inode, PETSC_FALSE, vertexclosure, ierr);<br></font></div><div><font face="tahoma, sans-serif">! For each face attached to that node</font></div><div><font face="tahoma, sans-serif">do iface = 1, size(vertexclosure)</font></div><div><font face="tahoma, sans-serif">if ((vertexclosure(iface) < fstart) .or. (vertexclosure(iface) >= fend)) cycle<br></font></div><div><font face="tahoma, sans-serif">A = A + ...</font></div><div><font face="tahoma, sans-serif">b = b + ...</font></div><div><font face="tahoma, sans-serif">end do</font></div><div><font face="tahoma, sans-serif">nodalvecarray(1+(inode-vstart)*ndim:3+(inode-vstart)*ndim) = matmul(A, b)</font></div><div><font face="tahoma, sans-serif">...</font></div><div><font face="tahoma, sans-serif">end do</font></div></div></blockquote><div><br></div><div>There are some subtleties here, I think. You are looping over all vertices, but you only own some of them.</div></div></div></blockquote><div><br></div><div>Here I am quite surprised by your remark: from the random prints I did, the DMPlexGetDepthStratum returns different vstart and vend for different MPI processes, hence I assumed it was a local numbering.</div></div></div></blockquote><div><br></div><div>It is a local numbering. If you have a mesh partition, it is usual to have vertices present in multiple local meshes. We think of one process as "owning" the vertex. That is what I meant here.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> And then I assumed that, with the DMPlexDistribute, it ensured that the "vertexclosure"</div></div></div></blockquote><div><br></div><div>This is the wrong terminology. Closures mean including the boundary of something. 
Vertices have no boundary. You want the dual notion "support", or its transitive closure "star". So is the vertex star included when distributing? No, only the point closure is guaranteed when distributing. This was the point of my explanation. However, if you request overlap = 1, then you get everything in the star of the original boundary.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> contained (among others) all the faces connected to each "inode", even if the closure is spread among multiple MPI processes. So you mean that all these assumptions are actually wrong to some degree?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>What do you do when putting these values in the global vector? I think there are at least three choices:</div><div><br></div><div>1) Everyone has everything, so you compute all the vertex stuff redundantly, and then use INSERT_VALUES for the scatter</div><div><br></div><div>2) Only vertices are redundant, so the transitive supports of vertices are partial but non-overlapping, and you use ADD_VALUES for the scatter</div><div><br></div><div>3) Only cells are redundant, so you do the complete vertex computation only for local vertices</div></div></div></blockquote><div><br></div><div>I distributed with overlap = 1, so I expect I have a halo of redundant cells in each MPI block (option no. 3). </div></div></div></blockquote><div><br></div><div>Yes.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div>With the (..., PETSC_TRUE, PETSC_TRUE, ...) 
setting of the distribute, however, I also expected to have a halo of vertices, a halo of edges, and a halo of faces. Therefore, completing the vertex computation only for local vertices should have been fine, since even local vertices know their complete closure in terms of faces,</div></div></div></blockquote><div><br></div><div>Yes, this should work.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> but I am missing something because it does not turn out right...</div></div></div></blockquote><div><br></div><div>Start with a very simple mesh, say 2 cells or 4 cells. Then you can print everything out.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Then I want to visualize it:</div><div><br></div><div><font face="tahoma, sans-serif">call VecRestoreArrayF90(nodalvec, nodalvecarray, ierr)</font></div><div><font face="tahoma, sans-serif">call DMCreateGlobalVector(dmNodal, globalnodalvec, ierr)</font></div><div><font face="tahoma, sans-serif">call DMLocalToGlobal(dmNodal, nodalvec, INSERT_VALUES, globalnodalvec, ierr)</font></div><div><font face="tahoma, sans-serif">call PetscViewerCreate(PETSC_COMM_WORLD, vtkViewer, ierr); CHKERRA(ierr)<br>call PetscViewerSetType(vtkViewer, PETSCVIEWERVTK, ierr); CHKERRA(ierr)<br>call PetscViewerFileSetName(vtkViewer, "toto.vtk", ierr); CHKERRA(ierr)<br>call VecView(globalnodalvec, vtkViewer, 
ierr); CHKERRA(ierr)<br>call PetscViewerDestroy(vtkViewer, ierr); CHKERRA(ierr)</font><br></div><div><br></div><div>In the toto.vtk file, I get vertex-based values as expected, and the whole shebang works in sequential. Now when I run it in parallel, I get vertex-based values inside the MPI blocks, but in the neighbourhood of the join between the MPI blocks, all the vertex-based values are NaN (see attachment).</div></div></blockquote><div><br></div><div>If you have redundant cells, meaning you distributed with overlap = 1, did you remember to fill in the values for those ghost cells using</div><div>DMGlobalToLocal() or something like that? If not, they will start as NaNs.</div></div></div></blockquote><div><br></div><div>Indeed, in the vertex computation, at some point I use the values in the cells. But I got the solution vector from TSGetSolution, then I do a GlobalToLocal, then I even add an InsertBoundaryValues to fill the ghost cells correctly.</div><div>Even after doing this, when I print the nodalvecarray on each process, they somehow contain some NaNs. This means, I think, that at some point a given vertex tries to get the information of a face that is supposed to be in its closure but actually is not (?)</div><div><br></div><div>Thank you for your help and feedback!</div><div><br></div><div>Cheers, </div><div><br></div><div>Thibault</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_quote"><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>All in all, I think I do not fully understand the consequences of the DMPlexDistribute with useCone=true & useClosure=true ... 
but I cannot figure out where the NaNs are coming from: is it because the vertices do not actually know their whole closure across the MPI join, or is it some global/local error I made?</div><div><br></div><div>Thank you very much in advance for your support!</div><div><br clear="all"><div><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div dir="ltr"><div><div><div><div><div>Thibault</div></div></div></div></div></div></div></div></div></div></div></div></div>
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div></div>
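[Editor's note] Matt's three scatter choices quoted above can be mimicked in a tiny, PETSc-free toy. The sketch below is plain Python with invented names (the dict-based `scatter` helper is not petsc4py API); it only illustrates why INSERT_VALUES semantics lose contributions when each rank has computed just part of a vertex star, while ADD_VALUES assembles the full value:

```python
def scatter(global_vals, rank_contribs, mode):
    """Toy model of a local-to-global scatter for vertex data shared by ranks.

    rank_contribs: one {vertex: value} dict per rank, holding only the
    contributions that rank computed from its own portion of the vertex star.
    mode "ADD" sums the partial contributions (correct for choice 2, where
    the transitive supports are partial but non-overlapping); mode "INSERT"
    overwrites, which is only correct when every rank computed the complete
    value redundantly (choice 1).
    """
    for contribs in rank_contribs:
        for v, val in contribs.items():
            if mode == "ADD":
                global_vals[v] = global_vals.get(v, 0.0) + val
            else:  # INSERT: last writer wins, partial sums are silently dropped
                global_vals[v] = val
    return global_vals
```

With a vertex shared by two ranks contributing 2.0 and 3.0, "ADD" yields the assembled 5.0, while "INSERT" keeps only one rank's partial value, which is the kind of inconsistency that then shows up near the partition join.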
</blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener</div><div><br></div><div><a href="http://www.cse.buffalo.edu/~knepley/" target="_blank">https://www.cse.buffalo.edu/~knepley/</a><br></div></div></div></div></div></div></div></div>
</blockquote></div></div>
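[Editor's note] Until the halo question is settled, the divide-by-zero described above can at least be guarded against. A minimal sketch (plain Python, independent of PETSc; the function name is invented, and the convention that the stored normal's magnitude equals the face area, as in PetscFVFaceGeom, is assumed from the discussion):

```python
import math

def safe_unit_normal(n, tol=1e-30):
    """Return (unit_normal, area) for an area-weighted face normal n = (nx, ny, nz).

    Halo faces whose geometry was never computed locally are left as all zeros
    in the facegeom array; for those we return a zero vector and area 0.0
    instead of dividing by zero and propagating NaNs.
    """
    area = math.sqrt(n[0]**2 + n[1]**2 + n[2]**2)
    if area <= tol:  # unpopulated entry: skip it, do not normalize
        return (0.0, 0.0, 0.0), 0.0
    return (n[0] / area, n[1] / area, n[2] / area), area
```

Faces flagged this way (zero area) are exactly the ones whose geometry needs to arrive via a halo exchange rather than be used as-is.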
</blockquote></div>
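[Editor's note] Regarding the question at the top of the thread: one common way to make the orientations partition-independent is not to trust the stored sign at all, but to re-orient every face by a rule that any rank can evaluate identically, for example "the normal points from the lower- to the higher-numbered adjacent cell in a global, partition-independent numbering". The sketch below is plain Python with hypothetical inputs, not PETSc API: the caller is assumed to know the global numbers of the two cells adjacent to the face (in PETSc they could come from something like DMPlexGetCellNumbering), and applying the same rule in the sequential run reproduces the same orientations.

```python
def consistent_normal(normal, g_from, g_to):
    """Orient a face normal by a partition-independent rule.

    `normal` is the stored normal, pointing from cell `g_from` toward cell
    `g_to` (global cell numbers; hypothetical inputs the caller extracts from
    the mesh). The returned normal always points from the lower-numbered to
    the higher-numbered cell, so every rank that sees the face, whichever
    side of the partition it owns, agrees on the orientation.
    """
    if g_from < g_to:
        return normal  # already points low -> high
    return (-normal[0], -normal[1], -normal[2])  # flip to the canonical side
```

The orientation obtained this way is not necessarily bit-for-bit the one the sequential run stored, but it is the same on every rank and on every partitioning, which is what algorithms relying on consistent orientations need.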