[petsc-users] Using DMPlexDistribute for a parallel FEM code

neil liu liufield at gmail.com
Wed May 17 17:58:20 CDT 2023


Dear Petsc developers,

I am writing my own code to calculate the FEM matrix. The following is my
general framework,

DMPlexCreateGmsh(...);
MPI_Comm_rank(PETSC_COMM_WORLD, &rank);
DMPlexDistribute(dm, overlap, NULL, &dmDist);

dm = dmDist;
// This creates a separate DM on each processor (with reordering).

MatCreate(PETSC_COMM_WORLD, &A);
// Loop over every tetrahedral element to compute its local element matrix,
// so each processor ends up with a local matrix A for its partition.
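
To make this concrete, here is a rough sketch of what I mean in actual PETSc calls (the Gmsh file name and the scalar P1 Lagrange discretization are just placeholders, not my real setup):

#include <petscdmplex.h>
#include <petscfe.h>

int main(int argc, char **argv)
{
  DM          dm, dmDist = NULL;
  PetscFE     fe;
  Mat         A;
  PetscInt    dim;
  PetscMPIInt rank;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

  /* Read and interpolate the Gmsh mesh (file name is a placeholder) */
  PetscCall(DMPlexCreateGmshFromFile(PETSC_COMM_WORLD, "mesh.msh", PETSC_TRUE, &dm));

  /* Distribute the mesh so each rank owns one partition */
  PetscCall(DMPlexDistribute(dm, 0, NULL, &dmDist));
  if (dmDist) {
    PetscCall(DMDestroy(&dm));
    dm = dmDist;
  }

  /* Attach a scalar P1 Lagrange field so the DM knows the dof layout */
  PetscCall(DMGetDimension(dm, &dim));
  PetscCall(PetscFECreateLagrange(PETSC_COMM_SELF, dim, 1, PETSC_TRUE, 1, PETSC_DETERMINE, &fe));
  PetscCall(DMSetField(dm, 0, NULL, (PetscObject)fe));
  PetscCall(DMCreateDS(dm));
  PetscCall(PetscFEDestroy(&fe));

  /* One parallel matrix with the layout/preallocation taken from the DM */
  PetscCall(DMCreateMatrix(dm, &A));

  /* ... element assembly loop and solve would go here ... */

  PetscCall(MatDestroy(&A));
  PetscCall(DMDestroy(&dm));
  PetscCall(PetscFinalize());
  return 0;
}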

*My question is: it seems we should build a global matrix B (assembling all
the As from the partitions) and then pass B to KSP, and KSP will handle the
parallelization correctly, right?*
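
For reference, this is roughly how I picture the solve step, where A and b are the assembled parallel matrix and right-hand side (the helper name SolveSystem is just a placeholder of mine):

#include <petscksp.h>

/* Rough sketch: A is the assembled parallel matrix, b the parallel RHS. */
static PetscErrorCode SolveSystem(Mat A, Vec b, Vec x)
{
  KSP ksp;

  PetscFunctionBeginUser;
  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A)); /* same Mat as operator and preconditioning matrix */
  PetscCall(KSPSetFromOptions(ksp));     /* solver/preconditioner chosen at run time */
  PetscCall(KSPSolve(ksp, b, x));        /* runs in parallel over the distributed A, b, x */
  PetscCall(KSPDestroy(&ksp));
  PetscFunctionReturn(0);
}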

If that is right, should I define a whole-domain matrix B before the
partitioning (MatCreate(PETSC_COMM_WORLD, &B);), and then use a
local-to-global map (which PETSc function should I use? Do you have any
examples?) to add each A into B at the right positions with MatSetValues?

Does that make sense?
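
To illustrate what I am asking, here is a rough sketch of the element loop I have in mind; ComputeElementMatrix() is a placeholder for my local element routine, and I am only guessing that DMPlexMatSetClosure (or MatSetValuesLocal with a local-to-global map) is the right insertion call:

#include <petscdmplex.h>

/* Placeholder user routine: fills the dense element matrix for cell c. */
extern PetscErrorCode ComputeElementMatrix(DM dm, PetscInt c, PetscScalar elemMat[]);

static PetscErrorCode AssembleStiffness(DM dm, Mat A)
{
  PetscInt cStart, cEnd, c;

  PetscFunctionBeginUser;
  /* Cells (height-0 points) owned by this rank */
  PetscCall(DMPlexGetHeightStratum(dm, 0, &cStart, &cEnd));
  for (c = cStart; c < cEnd; ++c) {
    PetscScalar elemMat[4 * 4]; /* P1 tetrahedron: 4x4 element matrix (my assumption) */

    PetscCall(ComputeElementMatrix(dm, c, elemMat));
    /* Inserts the element matrix into the parallel matrix at the right
       global rows/columns, using the DM's local-to-global information */
    PetscCall(DMPlexMatSetClosure(dm, NULL, NULL, A, c, elemMat, ADD_VALUES));
  }
  /* Communicates/accumulates entries whose rows live on other ranks */
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscFunctionReturn(0);
}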

Thanks,

Xiaodong

