<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Tue, Apr 26, 2016 at 11:00 PM, Jie Cheng <span dir="ltr"><<a href="mailto:chengj5@rpi.edu" target="_blank">chengj5@rpi.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-style:solid;border-left-color:rgb(204,204,204);padding-left:1ex">Hello Barry<br>
>
> It turns out METIS can easily generate the necessary graph information from the type of connectivity file I described, using its mesh2dual and mesh2nodal commands. But I do not fully understand the trick you talked about:
>
> Suppose I have 10 elements and I am running my program with 2 MPI ranks.
>
> 1) Should I let rank 0 take care of the 1st through 5th elements and rank 1 take care of the 6th through 10th elements, and then let PETSc do the real partitioning?

No, Barry says

  "first partition the elements across processes, then partition the vertices (nodal values) subservient to the partitioning of the elements"

meaning: only give a vertex to a process if that process already owns an element containing the vertex.

> 2) I have the necessary graph information, but how do I partition the elements and the vertices? Do I call MatCreateMPIAdj twice to create two index sets (ISs)? I do not understand what you meant by "partition the vertices (nodal values) subservient to the partitioning of the elements".

If you used PETSc to do it, you would create an MPIAdj with the element adjacency information (which I assume you got from mesh2dual). We would partition the elements by calling ParMETIS underneath. Then it would be up to you to partition the vertices, just as it is when calling ParMETIS by hand.
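
For concreteness, a minimal sketch of that path in PETSc (every name is a placeholder; it assumes PetscInitialize() has been called and that ia/ja hold this rank's rows of the dual graph from mesh2dual in CSR form, allocated with PetscMalloc1() because MatCreateMPIAdj takes ownership of them):

  #include <petscmat.h>

  /* Partition the elements using their dual graph and return the new contiguous
     global element numbering implied by the partition. */
  PetscErrorCode PartitionElements(MPI_Comm comm, PetscInt nLocalElems, PetscInt nGlobalElems,
                                   PetscInt *ia, PetscInt *ja, IS *isNewElemNum)
  {
    Mat             adj;
    MatPartitioning part;
    IS              isPart;
    PetscErrorCode  ierr;

    PetscFunctionBeginUser;
    ierr = MatCreateMPIAdj(comm, nLocalElems, nGlobalElems, ia, ja, NULL, &adj);CHKERRQ(ierr);
    ierr = MatPartitioningCreate(comm, &part);CHKERRQ(ierr);
    ierr = MatPartitioningSetAdjacency(part, adj);CHKERRQ(ierr);
    ierr = MatPartitioningSetFromOptions(part);CHKERRQ(ierr);       /* e.g. -mat_partitioning_type parmetis */
    ierr = MatPartitioningApply(part, &isPart);CHKERRQ(ierr);       /* isPart[e] = rank that gets local element e */
    ierr = ISPartitioningToNumbering(isPart, isNewElemNum);CHKERRQ(ierr); /* new global number of each local element */
    ierr = ISDestroy(&isPart);CHKERRQ(ierr);
    ierr = MatPartitioningDestroy(&part);CHKERRQ(ierr);
    ierr = MatDestroy(&adj);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

ISPartitioningToNumbering() already produces the contiguous ordering Barry describes: the elements assigned to rank 0 are numbered first, then those of rank 1, and so on.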
> 3) To set values in the global stiffness matrix I need to know the global element number and the global vertex number. How do I get them?

You, of course, start with these numbers. As Barry says, in order to make the dofs contiguous you should renumber the cells and vertices so that each process owns a contiguous block.

  Thanks,

     Matt
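
For the vertex renumbering, a minimal sketch (it assumes each rank has already decided which vertices it owns, say by giving a vertex to the lowest rank owning an element that touches it, and has those old file numbers in ownedOldNum[]; all names are placeholders):

  #include <petscao.h>

  /* Give this rank's owned vertices a contiguous block of new global numbers and
     build an AO that maps old (file) vertex numbers to the new (PETSc) numbers. */
  PetscErrorCode BuildVertexAO(MPI_Comm comm, PetscInt nOwned, const PetscInt ownedOldNum[], AO *ao)
  {
    PetscInt       *ownedNewNum, start, i;
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = MPI_Scan(&nOwned, &start, 1, MPIU_INT, MPI_SUM, comm);CHKERRQ(ierr);
    start -= nOwned;                                   /* first new number in this rank's block */
    ierr = PetscMalloc1(nOwned, &ownedNewNum);CHKERRQ(ierr);
    for (i = 0; i < nOwned; i++) ownedNewNum[i] = start + i;
    ierr = AOCreateBasic(comm, nOwned, ownedOldNum, ownedNewNum, ao);CHKERRQ(ierr);
    ierr = PetscFree(ownedNewNum);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

AOApplicationToPetsc() can then translate any old vertex number in your connectivity, owned or ghost, into the new numbering before you assemble.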
> I have to ask these in detail because the manual seems to be too brief about the approach. Thanks for your patience.
>
> Best
> Jie
>
>
>> On Apr 26, 2016, at 7:56 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>>
>>
>>> On Apr 26, 2016, at 6:50 PM, Jie Cheng <chengj5@rpi.edu> wrote:
>>>
>>> Hi Barry
>>>
>>> Thanks for your answer. But given the mesh file I have, how do I partition the elements and vertices? MatCreateMPIAdj requires graph information about adjacent elements (to set up ia and ja). Do I have to use more sophisticated meshing software to create mesh files that have this graph information in them?
>>
>> If you don't have any neighbor information then you cannot do any partitioning. So you need a way to get or create the neighbor information.
>>
>> Barry
>>
>>>
>>> Jie
>>>
>>>> On Apr 26, 2016, at 2:18 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
>>>>
>>>>
>>> The "trick" is that first partition the element across processes, then partition the vertices (nodal values) subservient to the partitioning of the elements and then you "renumber" the elements and vertices so that the elements on the first process are numbered first, followed by the elements on the second process etc and similarly the vertices on the first process are numbered before the vertices on the second processes etc.<br>
>>><br>
>>> Now each process just needs to loop over its elements compute the element stiffness/load and call MatSetValues/VecSetValues() the the "new" numbering of the vertices. The "old" numbering that was on the disk is simply not used in communicating with PETSc, you only use the new PETSc numbering.<br>
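
In code, that loop is just the serial assembly with translated indices. A minimal sketch, where ComputeElementStiffness() stands in for whatever element routine the code already has, nen is the number of nodes per element, ndof the number of dofs per node, and elemConn[] is assumed to already hold the new vertex numbers:

  #include <petscmat.h>

  extern PetscErrorCode ComputeElementStiffness(PetscInt e, PetscScalar Ke[]); /* placeholder for your element routine */

  /* Assemble this rank's elements into the global stiffness matrix A.
     elemConn[e*nen + a] is the NEW global number of node a of local element e,
     and dof i of a node numbered v corresponds to global row ndof*v + i. */
  PetscErrorCode AssembleStiffness(Mat A, PetscInt nLocalElems, PetscInt nen, PetscInt ndof,
                                   const PetscInt elemConn[])
  {
    PetscInt       e, a, i, idx[64];     /* assumes nen*ndof <= 64 */
    PetscScalar    Ke[64*64];            /* element stiffness, stored row by row */
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    for (e = 0; e < nLocalElems; e++) {
      ierr = ComputeElementStiffness(e, Ke);CHKERRQ(ierr);
      for (a = 0; a < nen; a++)
        for (i = 0; i < ndof; i++) idx[a*ndof + i] = ndof*elemConn[e*nen + a] + i;
      ierr = MatSetValues(A, nen*ndof, idx, nen*ndof, idx, Ke, ADD_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }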
>>>>
>>>> Barry
>>>>
>>>>> On Apr 26, 2016, at 1:03 PM, Jie Cheng <chengj5@rpi.edu> wrote:
>>>>>
>>>>> Hello everyone
>>>>>
>>>>> I have a finite element code for nonlinear solid mechanics (using Newton-Raphson iteration) implemented with PETSc. The code is serial, organized as follows:
>>>>>
>>>>> 1) Read the connectivity of the unstructured mesh and the coordinates of the nodes from separate text files.
>>>>> 1.1) The connectivity file contains [# of elements] rows, and each row lists the global numbers of the nodes of that element. Take 3D hexahedral elements for instance:
>>>>> 223 224 298 297 1 2 76 75
>>>>> 224 225 299 298 2 3 77 76
>>>>> … …
>>>>> 1.2) The coordinates file contains [# of nodes] rows, and each row lists the coordinates of a node, for example:
>>>>> 0 0.0011 3.9e-5
>>>>> 2.3677e-5 0.001.9975 3.9e-5
>>>>> … …
>>>>>
>>>>> 2) Create the global stiffness matrix A with MatCreateSeqAIJ, since the dimensions and nonzero pattern are known from the connectivity.
>>>>> 3) Loop over the elements to compute the element stiffness matrices and right-hand sides, then assemble the global stiffness matrix and right-hand side.
>>>>> 4) Solve the linear system with KSPSolve for the displacement increment, then go back to step 3.
>>>>>
>>>>> The code works fine in serial; now I am trying to parallelize it. To partition the mesh, I can use partdmesh from METIS, or let PETSc call it. Either way I will find a way to assign different elements and nodes to different ranks. My question is: since PETSc does not allow us to control the memory distribution of the parallel matrix/vector, how do I make sure a rank happens to have all or most of the memory it needs for its specific elements? For example, if rank 0 is in charge of element n and needs to insert values into A[ i ][ j ], how do I make sure the i-th row is assigned to rank 0?
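
With the renumbering Barry describes above, each rank ends up owning a contiguous block of rows, and that block is pinned down when the parallel matrix and vector are created by passing their local sizes. A minimal sketch (nOwnedVerts, ndof and the preallocation counts are placeholders to be tuned for the actual mesh):

  #include <petscmat.h>

  /* Create the parallel stiffness matrix and right-hand side so that the rows
     belonging to this rank's owned vertices are stored on this rank. */
  PetscErrorCode CreateParallelSystem(MPI_Comm comm, PetscInt nOwnedVerts, PetscInt ndof,
                                      Mat *A, Vec *b)
  {
    PetscInt       nLocalRows = ndof*nOwnedVerts;      /* rows owned by this rank */
    PetscErrorCode ierr;

    PetscFunctionBeginUser;
    ierr = MatCreateAIJ(comm, nLocalRows, nLocalRows, PETSC_DETERMINE, PETSC_DETERMINE,
                        81, NULL, 27, NULL, A);CHKERRQ(ierr);  /* rough hex-mesh preallocation */
    ierr = VecCreateMPI(comm, nLocalRows, PETSC_DETERMINE, b);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }

MatGetOwnershipRange() then reports exactly which global rows [rstart, rend) live on this rank.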
>>>>>
>>>>> This question is fundamental for people who work with finite element methods. I checked the tutorial codes but did not find an example to follow. Section 3.5 of the manual talks about partitioning, but it does not say anything about linking the partitioning to the memory distribution. Could anyone give me some instructions on this issue? Thank you in advance!
>>>>>
>>>>> Best
>>>>> Jie Cheng
>>>>
>>>
>>
>

--
What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
-- Norbert Wiener