[petsc-users] Using DMPlexDistribute for parallel FEM code

neil liu liufield at gmail.com
Fri May 19 09:11:38 CDT 2023


Thanks, Matt.

Following your explanation, my understanding is this: "If we use multiple
MPI processes, the global numbering of the vertices (over the global
domain) will be different from that with only one process, right?" If
that is the case, will it be easy to check the assembled matrix from
multiple processes by comparing it against that of a single process?

I checked the DM details for a simple 2D mesh (sketched below). From the
DM view output, how can I find the global numbering of a vertex when I
use 2 processes? (A small sketch of one way I might query this is
included right after the DM output.)
 _____ _____
|     |     |
|_____|_____|
|     |     |
|_____|_____|

DM Object: Parallel Mesh 2 MPI processes
  type: plex
Parallel Mesh in 2 dimensions:
Supports:
[0] Max support size: 3
[0]: 2 ----> 8
[0]: 2 ----> 12
[0]: 3 ----> 8
[0]: 3 ----> 9
[0]: 3 ----> 13
[0]: 4 ----> 9
[0]: 4 ----> 14
[0]: 5 ----> 10
[0]: 5 ----> 12
[0]: 6 ----> 10
[0]: 6 ----> 11
[0]: 6 ----> 13
[0]: 7 ----> 11
[0]: 7 ----> 14
[0]: 8 ----> 0
[0]: 9 ----> 1
[0]: 10 ----> 0
[0]: 11 ----> 1
[0]: 12 ----> 0
[0]: 13 ----> 0
[0]: 13 ----> 1
[0]: 14 ----> 1
[1] Max support size: 3
[1]: 2 ----> 8
[1]: 2 ----> 12
[1]: 3 ----> 8
[1]: 3 ----> 9
[1]: 3 ----> 13
[1]: 4 ----> 9
[1]: 4 ----> 14
[1]: 5 ----> 10
[1]: 5 ----> 12
[1]: 6 ----> 10
[1]: 6 ----> 11
[1]: 6 ----> 13
[1]: 7 ----> 11
[1]: 7 ----> 14
[1]: 8 ----> 0
[1]: 9 ----> 1
[1]: 10 ----> 0
[1]: 11 ----> 1
[1]: 12 ----> 0
[1]: 13 ----> 0
[1]: 13 ----> 1
[1]: 14 ----> 1
Cones:
[0] Max cone size: 4
[0]: 0 <---- 8 (0)
[0]: 0 <---- 13 (0)
[0]: 0 <---- 10 (-1)
[0]: 0 <---- 12 (-1)
[0]: 1 <---- 9 (0)
[0]: 1 <---- 14 (0)
[0]: 1 <---- 11 (-1)
[0]: 1 <---- 13 (-1)
[0]: 8 <---- 2 (0)
[0]: 8 <---- 3 (0)
[0]: 9 <---- 3 (0)
[0]: 9 <---- 4 (0)
[0]: 10 <---- 5 (0)
[0]: 10 <---- 6 (0)
[0]: 11 <---- 6 (0)
[0]: 11 <---- 7 (0)
[0]: 12 <---- 2 (0)
[0]: 12 <---- 5 (0)
[0]: 13 <---- 3 (0)
[0]: 13 <---- 6 (0)
[0]: 14 <---- 4 (0)
[0]: 14 <---- 7 (0)
[1] Max cone size: 4
[1]: 0 <---- 8 (0)
[1]: 0 <---- 13 (0)
[1]: 0 <---- 10 (-1)
[1]: 0 <---- 12 (-1)
[1]: 1 <---- 9 (0)
[1]: 1 <---- 14 (0)
[1]: 1 <---- 11 (-1)
[1]: 1 <---- 13 (-1)
[1]: 8 <---- 2 (0)
[1]: 8 <---- 3 (0)
[1]: 9 <---- 3 (0)
[1]: 9 <---- 4 (0)
[1]: 10 <---- 5 (0)
[1]: 10 <---- 6 (0)
[1]: 11 <---- 6 (0)
[1]: 11 <---- 7 (0)
[1]: 12 <---- 2 (0)
[1]: 12 <---- 5 (0)
[1]: 13 <---- 3 (0)
[1]: 13 <---- 6 (0)
[1]: 14 <---- 4 (0)
[1]: 14 <---- 7 (0)
coordinates with 1 fields
  field 0 with 2 components
Process 0:
  (   2) dim  2 offset   0 0. 0.
  (   3) dim  2 offset   2 0.5 0.
  (   4) dim  2 offset   4 1. 0.
  (   5) dim  2 offset   6 0. 0.5
  (   6) dim  2 offset   8 0.5 0.5
  (   7) dim  2 offset  10 1. 0.5
Process 1:
  (   2) dim  2 offset   0 0. 0.5
  (   3) dim  2 offset   2 0.5 0.5
  (   4) dim  2 offset   4 1. 0.5
  (   5) dim  2 offset   6 0. 1.
  (   6) dim  2 offset   8 0.5 1.
  (   7) dim  2 offset  10 1. 1.
Labels:
Label 'marker':
[0]: 2 (1)
[0]: 3 (1)
[0]: 4 (1)
[0]: 5 (1)
[0]: 7 (1)
[0]: 8 (1)
[0]: 9 (1)
[0]: 12 (1)
[0]: 14 (1)
[1]: 2 (1)
[1]: 4 (1)
[1]: 5 (1)
[1]: 6 (1)
[1]: 7 (1)
[1]: 10 (1)
[1]: 11 (1)
[1]: 12 (1)
[1]: 14 (1)
Label 'Face Sets':
[0]: 8 (1)
[0]: 9 (1)
[0]: 14 (2)
[0]: 12 (4)
[1]: 14 (2)
[1]: 10 (3)
[1]: 11 (3)
[1]: 12 (4)
Label 'celltype':
[0]: 2 (0)
[0]: 3 (0)
[0]: 4 (0)
[0]: 5 (0)
[0]: 6 (0)
[0]: 7 (0)
[0]: 8 (1)
[0]: 9 (1)
[0]: 10 (1)
[0]: 11 (1)
[0]: 12 (1)
[0]: 13 (1)
[0]: 14 (1)
[0]: 0 (4)
[0]: 1 (4)
[1]: 2 (0)
[1]: 3 (0)
[1]: 4 (0)
[1]: 5 (0)
[1]: 6 (0)
[1]: 7 (0)
[1]: 8 (1)
[1]: 9 (1)
[1]: 10 (1)
[1]: 11 (1)
[1]: 12 (1)
[1]: 13 (1)
[1]: 14 (1)
[1]: 0 (4)
[1]: 1 (4)
PetscSF Object: 2 MPI processes
  type: basic
  [0] Number of roots=15, leaves=5, remote ranks=1
  [0] 5 <- (1,2)
  [0] 6 <- (1,3)
  [0] 7 <- (1,4)
  [0] 10 <- (1,8)
  [0] 11 <- (1,9)
  [1] Number of roots=15, leaves=0, remote ranks=0
  [0] Roots referenced by my leaves, by rank
  [0] 1: 5 edges
  [0]    5 <- 2
  [0]    6 <- 3
  [0]    7 <- 4
  [0]    10 <- 8
  [0]    11 <- 9
  [1] Roots referenced by my leaves, by rank
  MultiSF sort=rank-order
Vec Object: 2 MPI processes
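
For reference, here is a minimal sketch of how I could query this
directly (assuming PETSc 3.19+, that "dm" is the distributed Plex shown
above, and, if I understand correctly, that vertices owned by another
rank are reported as -(global+1)):

  IS              gvn;
  const PetscInt *gidx;
  PetscInt        vStart, vEnd, v;
  PetscMPIInt     rank;

  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));
  PetscCall(DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd));  /* vertex stratum */
  PetscCall(DMPlexGetVertexNumbering(dm, &gvn));            /* borrowed IS, do not destroy */
  PetscCall(ISGetIndices(gvn, &gidx));
  for (v = vStart; v < vEnd; ++v)
    PetscCall(PetscSynchronizedPrintf(PETSC_COMM_WORLD,
              "[%d] vertex %d -> global %d\n", rank, (int)v, (int)gidx[v - vStart]));
  PetscCall(PetscSynchronizedFlush(PETSC_COMM_WORLD, PETSC_STDOUT));
  PetscCall(ISRestoreIndices(gvn, &gidx));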



On Thu, May 18, 2023 at 10:02 PM Matthew Knepley <knepley at gmail.com> wrote:

> On Thu, May 18, 2023 at 8:47 PM neil liu <liufield at gmail.com> wrote:
>
>> Thanks, Matt. I am using the following steps to build a local-to-global
>> mapping.
>> Step 1)
>>   PetscSectionCreate();
>>   PetscSectionSetNumFields();
>>   PetscSectionSetChart();
>>   // Set the dof for each node
>>   PetscSectionSetUp(s);
>> Step 2)
>>   PetscCall(DMGetLocalToGlobalMapping(dm, &ltogm));
>>   PetscCall(ISLocalToGlobalMappingGetIndices(ltogm, &g_idx));
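>>
>> A fuller sketch of what I mean by these two steps (the names here are
>> my own placeholders; it assumes one field with one dof on every vertex):
>>
>>   PetscSection s;
>>   PetscInt     pStart, pEnd, vStart, vEnd, p;
>>
>>   PetscCall(PetscSectionCreate(PETSC_COMM_WORLD, &s));
>>   PetscCall(PetscSectionSetNumFields(s, 1));
>>   PetscCall(DMPlexGetChart(dm, &pStart, &pEnd));
>>   PetscCall(PetscSectionSetChart(s, pStart, pEnd));
>>   PetscCall(DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd));  /* vertices */
>>   for (p = vStart; p < vEnd; ++p) {
>>     PetscCall(PetscSectionSetDof(s, p, 1));       /* 1 dof per vertex */
>>     PetscCall(PetscSectionSetFieldDof(s, p, 0, 1));
>>   }
>>   PetscCall(PetscSectionSetUp(s));
>>   PetscCall(DMSetLocalSection(dm, s));   /* so the DM can build the mapping */
>>   PetscCall(PetscSectionDestroy(&s));
>>
>>   ISLocalToGlobalMapping ltogm;
>>   const PetscInt        *g_idx;
>>   PetscCall(DMGetLocalToGlobalMapping(dm, &ltogm));
>>   PetscCall(ISLocalToGlobalMappingGetIndices(ltogm, &g_idx));
>>   /* ... use g_idx ... */
>>   PetscCall(ISLocalToGlobalMappingRestoreIndices(ltogm, &g_idx));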
>>
>> For the 2D mesh from the example at
>> https://wg-beginners.readthedocs.io/en/latest/tutorials/dm/plex/introductory_tutorial_plex.html,
>> the map from rank 0 and rank 1 matches the global node ordering well.
>>
>> But when I tried a 3D Gmsh mesh (2 hexahedra, 12 nodes, split into 12
>> tetrahedra) on 2 processes in total, each process handles 9 nodes. When
>> I checked the local-to-global mapping, it seems the mapping is not
>> right. For example, the mapping array (left: local node number; right:
>> global node number) is:
>>  0  0
>>  1 3
>>  2 5
>>  3 1
>>  4 6
>>  5 8
>>  6 2
>>  7 9
>>  8 11
>> But the coordinates of the local nodes (left) and the corresponding
>> global nodes (right) are not the same, meaning they do not match. Did I
>> miss something?
>>
>
> Plex reorders the nodes when it partitions the mesh. Add
>
>   DMViewFromOptions(dm, NULL, "-dm_view");
>
> to your code and
>
>   -dm_view ::ascii_info_detail
>
> to the command line. Then it will print out a mesh description that shows
> you the coordinates
> of the vertices, and you can match them up.
>
>   Thanks,
>
>      Matt
>
>
>> In addition, the simple 3D Gmsh file is attached below in case you need it.
>> *Gmsh file:*
>> $MeshFormat
>> 4.1 0 8
>> $EndMeshFormat
>> $PhysicalNames
>> 3
>> 2 27 "Excit"
>> 2 28 "PEC"
>> 3 29 "volume"
>> $EndPhysicalNames
>> $Entities
>> 8 12 6 1
>> 1 0 0 0 0
>> 2 0 0.25 0 0
>> 3 0 0.25 0.25 0
>> 4 0 0 0.25 0
>> 5 1 0 0 0
>> 6 1 0.25 0 0
>> 10 1 0.25 0.25 0
>> 14 1 0 0.25 0
>> 1 0 0 0 0 0.25 0 0 2 1 -2
>> 2 0 0.25 0 0 0.25 0.25 0 2 2 -3
>> 3 0 0 0.25 0 0.25 0.25 0 2 3 -4
>> 4 0 0 0 0 0 0.25 0 2 4 -1
>> 6 1 0 0 1 0.25 0 0 2 5 -6
>> 7 1 0.25 0 1 0.25 0.25 0 2 6 -10
>> 8 1 0 0.25 1 0.25 0.25 0 2 10 -14
>> 9 1 0 0 1 0 0.25 0 2 14 -5
>> 11 0 0 0 1 0 0 0 2 1 -5
>> 12 0 0.25 0 1 0.25 0 0 2 2 -6
>> 16 0 0.25 0.25 1 0.25 0.25 0 2 3 -10
>> 20 0 0 0.25 1 0 0.25 0 2 4 -14
>> 1 0 0 0 0 0.25 0.25 1 27 4 1 2 3 4
>> 13 0 0 0 1 0.25 0 1 28 4 1 12 -6 -11
>> 17 0 0.25 0 1 0.25 0.25 0 4 2 16 -7 -12
>> 21 0 0 0.25 1 0.25 0.25 1 28 4 3 20 -8 -16
>> 25 0 0 0 1 0 0.25 0 4 4 11 -9 -20
>> 26 1 0 0 1 0.25 0.25 1 28 4 6 7 8 9
>> 1 0 0 0 1 0.25 0.25 1 29 6 -1 26 13 17 21 25
>> $EndEntities
>> $Nodes
>> 17 12 1 12
>> 0 1 0 1
>> 1
>> 0 0 0
>> 0 2 0 1
>> 2
>> 0 0.25 0
>> 0 3 0 1
>> 3
>> 0 0.25 0.25
>> 0 4 0 1
>> 4
>> 0 0 0.25
>> 0 5 0 1
>> 5
>> 1 0 0
>> 0 6 0 1
>> 6
>> 1 0.25 0
>> 0 10 0 1
>> 7
>> 1 0.25 0.25
>> 0 14 0 1
>> 8
>> 1 0 0.25
>> 1 11 0 1
>> 9
>> 0.5 0 0
>> 1 12 0 1
>> 10
>> 0.5 0.25 0
>> 1 16 0 1
>> 11
>> 0.5 0.25 0.25
>> 1 20 0 1
>> 12
>> 0.5 0 0.25
>> 2 1 0 0
>> 2 13 0 0
>> 2 21 0 0
>> 2 26 0 0
>> 3 1 0 0
>> $EndNodes
>> $Elements
>> 5 24 1 24
>> 2 1 2 2
>> 1 1 2 4
>> 2 4 2 3
>> 2 13 2 4
>> 3 9 1 2
>> 4 9 2 10
>> 5 5 9 10
>> 6 5 10 6
>> 2 21 2 4
>> 7 11 3 4
>> 8 11 4 12
>> 9 7 11 12
>> 10 7 12 8
>> 2 26 2 2
>> 11 5 6 8
>> 12 8 6 7
>> 3 1 4 12
>> 13 1 2 4 12
>> 14 10 9 12 2
>> 15 2 9 12 1
>> 16 9 10 12 8
>> 17 6 5 8 10
>> 18 10 5 8 9
>> 19 4 2 3 11
>> 20 10 12 11 2
>> 21 2 12 11 4
>> 22 12 10 11 7
>> 23 6 8 7 10
>> 24 10 8 7 12
>> $EndElements
>>
>> On Wed, May 17, 2023 at 7:30 PM Matthew Knepley <knepley at gmail.com>
>> wrote:
>>
>>> On Wed, May 17, 2023 at 6:58 PM neil liu <liufield at gmail.com> wrote:
>>>
>>>> Dear Petsc developers,
>>>>
>>>> I am writing my own code to calculate the FEM matrix. The following is
>>>> my general framework:
>>>>
>>>> DMPlexCreateGmsh();
>>>> MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
>>>> DMPlexDistribute(.., .., &dmDist);
>>>>
>>>> dm = dmDist;
>>>> // This creates a separate dm for each processor (with reordering).
>>>>
>>>> MatCreate(PETSC_COMM_WORLD, &A);
>>>> // Loop over every tetrahedral element to calculate the local matrix
>>>> // for each processor. Then we get a local matrix A for each processor.
>>>>
>>>> *My question is: it seems we should build a global matrix B (assembling
>>>> all of the A's from each partition) and then pass B to KSP. KSP will do
>>>> the parallelization correctly, right?*
>>>>
>>>
>>> I would not suggest this. The more common strategy is to assemble each
>>> element matrix directly into the
>>> global matrix B, by mapping the cell indices directly to global indices
>>> (rather than to local indices in the matrix A). You can do this in two
>>> stages. You can create a LocalToGlobalMapping in PETSc that maps
>>> every local index to a global index. Then you can assemble into B
>>> exactly as you would assemble into A by calling MatSetValuesLocal().
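>>>
>>> A minimal sketch of that assembly pattern (ComputeElementMatrix and the
>>> 4-dof element size are placeholders for your own code, not PETSc API):
>>>
>>>   ISLocalToGlobalMapping ltog;
>>>   PetscInt               cStart, cEnd, c;
>>>
>>>   PetscCall(DMGetLocalToGlobalMapping(dm, &ltog));
>>>   PetscCall(MatSetLocalToGlobalMapping(B, ltog, ltog));
>>>   PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd));  /* local cells */
>>>   for (c = cStart; c < cEnd; ++c) {
>>>     PetscScalar Ke[4 * 4];                   /* element matrix (P1 tet) */
>>>     PetscInt    rows[4];                     /* local dof indices */
>>>     ComputeElementMatrix(dm, c, rows, Ke);   /* your element routine */
>>>     PetscCall(MatSetValuesLocal(B, 4, rows, 4, rows, Ke, ADD_VALUES));
>>>   }
>>>   PetscCall(MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY));
>>>   PetscCall(MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY));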
>>>
>>> DMPlex handles these mappings for you automatically, but I realize that
>>> it is a large number of things to buy into.
>>>
>>>   Thanks,
>>>
>>>      Matt
>>>
>>>
>>>> If that is right, I should define a whole-domain matrix B before the
>>>> partitioning (MatCreate(PETSC_COMM_WORLD, &B);) and then use a
>>>> local-to-global map (which PETSc function should I use? Do you have any
>>>> examples?) to add A to B at the right positions (MatSetValues)?
>>>>
>>>> Does that make sense?
>>>>
>>>> Thanks,
>>>>
>>>> Xiaodong
>>>>
>>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>> <http://www.cse.buffalo.edu/~knepley/>
>>>
>>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
> https://www.cse.buffalo.edu/~knepley/
> <http://www.cse.buffalo.edu/~knepley/>
>