[petsc-users] Using dmplexdistribute do parallel FEM code.

Matthew Knepley knepley at gmail.com
Thu May 18 21:01:57 CDT 2023


On Thu, May 18, 2023 at 8:47 PM neil liu <liufield at gmail.com> wrote:

> Thanks, Matt. I am using the following steps to build a local-to-global
> mapping.
> Step 1)
>   PetscSectionCreate();
>   PetscSectionSetNumFields();
>   PetscSectionSetChart();
>   // Set the dof for each node
>   PetscSectionSetUp(s);
> Step 2)
>   PetscCall(DMGetLocalToGlobalMapping(dm, &ltogm));
>   PetscCall(ISLocalToGlobalMappingGetIndices(ltogm, &g_idx));
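>
> A minimal, self-contained sketch of those two steps (the scalar
> one-dof-per-vertex layout and the printing loop are only for illustration,
> not my actual dof counts):
>
> #include <petscdmplex.h>
>
> static PetscErrorCode SetupSectionAndMapping(DM dm)
> {
>   PetscSection           s;
>   ISLocalToGlobalMapping ltogm;
>   const PetscInt        *g_idx;
>   PetscInt               pStart, pEnd, vStart, vEnd, p, n;
>
>   PetscFunctionBeginUser;
>   /* Step 1: a section with one field and one dof on every vertex */
>   PetscCall(PetscSectionCreate(PetscObjectComm((PetscObject)dm), &s));
>   PetscCall(PetscSectionSetNumFields(s, 1));
>   PetscCall(DMPlexGetChart(dm, &pStart, &pEnd));
>   PetscCall(PetscSectionSetChart(s, pStart, pEnd));
>   PetscCall(DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd)); /* vertices */
>   for (p = vStart; p < vEnd; ++p) {
>     PetscCall(PetscSectionSetDof(s, p, 1));
>     PetscCall(PetscSectionSetFieldDof(s, p, 0, 1));
>   }
>   PetscCall(PetscSectionSetUp(s));
>   PetscCall(DMSetLocalSection(dm, s));
>   PetscCall(PetscSectionDestroy(&s));
>
>   /* Step 2: query the mapping the DM builds from this section */
>   PetscCall(DMGetLocalToGlobalMapping(dm, &ltogm));
>   PetscCall(ISLocalToGlobalMappingGetSize(ltogm, &n));
>   PetscCall(ISLocalToGlobalMappingGetIndices(ltogm, &g_idx));
>   for (p = 0; p < n; ++p) {
>     PetscCall(PetscPrintf(PETSC_COMM_SELF, "local %" PetscInt_FMT " -> global %" PetscInt_FMT "\n", p, g_idx[p]));
>   }
>   PetscCall(ISLocalToGlobalMappingRestoreIndices(ltogm, &g_idx));
>   PetscFunctionReturn(PETSC_SUCCESS);
> }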
>
> For the 2D mesh from the example at
> https://wg-beginners.readthedocs.io/en/latest/tutorials/dm/plex/introductory_tutorial_plex.html
> the mapping on MPI rank 0 and rank 1 matches the global node ordering
> well.
>
> But when I tried a 3D Gmsh mesh (2 hexahedra, 12 nodes, split into 12
> tetrahedra) on 2 processors, each processor handles 9 nodes, and the
> local-to-global mapping does not seem right.
> For example, the mapping array is (left: local node number; right:
> global node number):
>   0   0
>   1   3
>   2   5
>   3   1
>   4   6
>   5   8
>   6   2
>   7   9
>   8  11
> But the coordinates of the local nodes (left) do not match the
> coordinates of the corresponding global nodes (right). Did I miss
> something?
>

Plex reorders the nodes when it partitions the mesh. Add

  DMViewFromOptions(dm, NULL, "-dm_view");

to your code and

  -dm_view ::ascii_info_detail

to the command line. Then it will print out a mesh description that shows
you the coordinates
of the vertices, and you can match them up.
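For example, right after distribution (a sketch; the dm/dmDist names are
taken from your earlier outline):

  DM dmDist = NULL;

  PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist));
  if (dmDist) { /* a new, redistributed DM is returned on more than one rank */
    PetscCall(DMDestroy(&dm));
    dm = dmDist;
  }
  PetscCall(DMViewFromOptions(dm, NULL, "-dm_view"));

and then run with

  ./your_code -dm_view ::ascii_info_detail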

  Thanks,

     Matt


> In addition, the simple 3D Gmsh file is attached in case you need it.
> *Gmsh file:*
> $MeshFormat
> 4.1 0 8
> $EndMeshFormat
> $PhysicalNames
> 3
> 2 27 "Excit"
> 2 28 "PEC"
> 3 29 "volume"
> $EndPhysicalNames
> $Entities
> 8 12 6 1
> 1 0 0 0 0
> 2 0 0.25 0 0
> 3 0 0.25 0.25 0
> 4 0 0 0.25 0
> 5 1 0 0 0
> 6 1 0.25 0 0
> 10 1 0.25 0.25 0
> 14 1 0 0.25 0
> 1 0 0 0 0 0.25 0 0 2 1 -2
> 2 0 0.25 0 0 0.25 0.25 0 2 2 -3
> 3 0 0 0.25 0 0.25 0.25 0 2 3 -4
> 4 0 0 0 0 0 0.25 0 2 4 -1
> 6 1 0 0 1 0.25 0 0 2 5 -6
> 7 1 0.25 0 1 0.25 0.25 0 2 6 -10
> 8 1 0 0.25 1 0.25 0.25 0 2 10 -14
> 9 1 0 0 1 0 0.25 0 2 14 -5
> 11 0 0 0 1 0 0 0 2 1 -5
> 12 0 0.25 0 1 0.25 0 0 2 2 -6
> 16 0 0.25 0.25 1 0.25 0.25 0 2 3 -10
> 20 0 0 0.25 1 0 0.25 0 2 4 -14
> 1 0 0 0 0 0.25 0.25 1 27 4 1 2 3 4
> 13 0 0 0 1 0.25 0 1 28 4 1 12 -6 -11
> 17 0 0.25 0 1 0.25 0.25 0 4 2 16 -7 -12
> 21 0 0 0.25 1 0.25 0.25 1 28 4 3 20 -8 -16
> 25 0 0 0 1 0 0.25 0 4 4 11 -9 -20
> 26 1 0 0 1 0.25 0.25 1 28 4 6 7 8 9
> 1 0 0 0 1 0.25 0.25 1 29 6 -1 26 13 17 21 25
> $EndEntities
> $Nodes
> 17 12 1 12
> 0 1 0 1
> 1
> 0 0 0
> 0 2 0 1
> 2
> 0 0.25 0
> 0 3 0 1
> 3
> 0 0.25 0.25
> 0 4 0 1
> 4
> 0 0 0.25
> 0 5 0 1
> 5
> 1 0 0
> 0 6 0 1
> 6
> 1 0.25 0
> 0 10 0 1
> 7
> 1 0.25 0.25
> 0 14 0 1
> 8
> 1 0 0.25
> 1 11 0 1
> 9
> 0.5 0 0
> 1 12 0 1
> 10
> 0.5 0.25 0
> 1 16 0 1
> 11
> 0.5 0.25 0.25
> 1 20 0 1
> 12
> 0.5 0 0.25
> 2 1 0 0
> 2 13 0 0
> 2 21 0 0
> 2 26 0 0
> 3 1 0 0
> $EndNodes
> $Elements
> 5 24 1 24
> 2 1 2 2
> 1 1 2 4
> 2 4 2 3
> 2 13 2 4
> 3 9 1 2
> 4 9 2 10
> 5 5 9 10
> 6 5 10 6
> 2 21 2 4
> 7 11 3 4
> 8 11 4 12
> 9 7 11 12
> 10 7 12 8
> 2 26 2 2
> 11 5 6 8
> 12 8 6 7
> 3 1 4 12
> 13 1 2 4 12
> 14 10 9 12 2
> 15 2 9 12 1
> 16 9 10 12 8
> 17 6 5 8 10
> 18 10 5 8 9
> 19 4 2 3 11
> 20 10 12 11 2
> 21 2 12 11 4
> 22 12 10 11 7
> 23 6 8 7 10
> 24 10 8 7 12
> $EndElements
>
> On Wed, May 17, 2023 at 7:30 PM Matthew Knepley <knepley at gmail.com> wrote:
>
>> On Wed, May 17, 2023 at 6:58 PM neil liu <liufield at gmail.com> wrote:
>>
>>> Dear Petsc developers,
>>>
>>> I am writing my own code to calculate the FEM matrix. The following is
>>> my general framework:
>>>
>>> DMPlexCreateGmsh();
>>> MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
>>> DMPlexDistribute(.., .., &dmDist);
>>>
>>> dm = dmDist;
>>> // This creates a separate DM for each processor (with reordering).
>>>
>>> MatCreate(PETSC_COMM_WORLD, &A);
>>> // Loop over every tetrahedral element to calculate its element matrix.
>>> // Then we get a local matrix A on each processor.
>>>
>>> *My question is: it seems we should build a global matrix B (assembling
>>> all the A's from each partition) and then pass B to KSP, and KSP will
>>> handle the parallelization correctly, right?*
>>>
>>
>> I would not suggest this. The more common strategy is to assemble each
>> element matrix directly into the
>> global matrix B, by mapping the cell indices directly to global indices
>> (rather than to local indices in the matrix A). You can do this in two
>> stages. You can create a LocalToGlobalMapping in PETSc that maps
>> every local index to a global index. Then you can assemble into B exactly
>> as you would assemble into A by calling MatSetValuesLocal().
>>
>> DMPlex handles these mappings for you automatically, but I realize that
>> it is a large number of things to buy into.
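>>
>> A minimal sketch of that assembly (assuming tetrahedra, one dof per
>> vertex, and a local PetscSection already set on the DM; the element
>> kernel itself is a placeholder):
>>
>> #include <petscdmplex.h>
>>
>> static PetscErrorCode AssembleGlobalMatrix(DM dm, Mat *B)
>> {
>>   PetscSection sec;
>>   PetscInt     cStart, cEnd, vStart, vEnd, c;
>>
>>   PetscFunctionBeginUser;
>>   PetscCall(DMCreateMatrix(dm, B));                         /* preallocated, LocalToGlobalMapping already set */
>>   PetscCall(DMGetLocalSection(dm, &sec));
>>   PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd)); /* cells    */
>>   PetscCall(DMPlexGetDepthStratum(dm, 0, &vStart, &vEnd));  /* vertices */
>>   for (c = cStart; c < cEnd; ++c) {
>>     PetscScalar Ke[16] = {0}; /* 4x4 P1 element matrix: fill with your element kernel */
>>     PetscInt    lidx[4], nv = 0, clSize, *closure = NULL, i;
>>
>>     /* local (per-rank) dof indices of the cell's four vertices */
>>     PetscCall(DMPlexGetTransitiveClosure(dm, c, PETSC_TRUE, &clSize, &closure));
>>     for (i = 0; i < clSize; ++i) {
>>       const PetscInt q = closure[2 * i];
>>       if (q >= vStart && q < vEnd) PetscCall(PetscSectionGetOffset(sec, q, &lidx[nv++]));
>>     }
>>     PetscCall(DMPlexRestoreTransitiveClosure(dm, c, PETSC_TRUE, &clSize, &closure));
>>
>>     /* ... compute Ke for cell c here (placeholder) ... */
>>     PetscCall(MatSetValuesLocal(*B, nv, lidx, nv, lidx, Ke, ADD_VALUES));
>>   }
>>   PetscCall(MatAssemblyBegin(*B, MAT_FINAL_ASSEMBLY));
>>   PetscCall(MatAssemblyEnd(*B, MAT_FINAL_ASSEMBLY));
>>   PetscFunctionReturn(PETSC_SUCCESS);
>> }
>>
>> If you build B yourself with MatCreate() instead, call
>> MatSetLocalToGlobalMapping() with the mapping from
>> DMGetLocalToGlobalMapping() before using MatSetValuesLocal(). Plex can
>> also do the per-cell index translation for you via DMPlexMatSetClosure().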
>>
>>   Thanks,
>>
>>      Matt
>>
>>
>>> If that is right, should I define the whole-domain matrix B before the
>>> partitioning (MatCreate(PETSC_COMM_WORLD, &B);) and then use a
>>> local-to-global map (which PETSc function should I use? Do you have any
>>> examples?) to add each A into B at the right positions (MatSetValues)?
>>>
>>> Does that make sense?
>>>
>>> Thanks,
>>>
>>> Xiaodong
>>>
>>>
>>
>> --
>> What most experimenters take for granted before they begin their
>> experiments is infinitely more interesting than any results to which their
>> experiments lead.
>> -- Norbert Wiener
>>
>> https://www.cse.buffalo.edu/~knepley/
>>
>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/