Mixed finite element discretization with petsc?
Jed Brown
jed at 59A2.org
Mon Oct 5 04:49:39 CDT 2009
On Mon, Oct 5, 2009 at 11:09, Thomas Witkowski
<thomas.witkowski at tu-dresden.de> wrote:
> Okay, I thought it was a bit more complicated :) So it's clear to me that
> there is no problem in assigning each row of the overall system to one
> processor. But how do I make the row indices contiguous?
>
> In my sequential code I'm assembling the matrices block wise, so the overall
> matrix looks as follows:
>
> {A & B \\ B^T & 0} * {u \\ p} = {0 \\ 0}
This representation is purely symbolic. There is a permutation of the
dofs that would look like this, but it is not the ordering that you
actually want to use.
> So when I've partitioned my mesh, and say I have 1000 nodes in the mesh,
> the first row and row 1001 are owned by the same process, because they
> come from the discretization of the same node. So is it right to bring
> all these matrix rows together by using different row indices?
1. Partition the mesh and resolve ownership.
2. Each process counts the number of owned dofs for velocity + pressure.
3. MPI_Scan so that every process knows its starting offset.
4. Each process numbers its owned dofs starting at this offset.
5. Scatter this global numbering so that every process knows the
global index of its unowned interface dofs.
6. Preallocate the matrix.
Note that some PETSc calls, such as creating a vector, do the MPI_Scan
logic for you.
The procedure above will work fine with a direct solver, but most
preconditioners do a terrible job if you just give them an assembled
indefinite matrix. To do better, you may want to explore Schur
complement or special domain decomposition preconditioners. The
former can be done using PCFieldSplit, but usually you need some
physical insight to precondition the Schur complement. These PCs
sometimes prefer a different matrix representation, so if you have a
good idea of what will work for your problem, let us know and we can
give suggestions.
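As an illustration, a Schur-complement fieldsplit can often be requested
entirely from the options database. Treat the following as a sketch, not
a recipe: option names vary across PETSc versions (check the PCFieldSplit
manual pages), `./myapp` is a placeholder for your application, and the
field indices 0/1 assume velocity and pressure were registered in that
order.

```shell
# Hedged sketch: Schur-complement PCFieldSplit via runtime options.
./myapp -ksp_type gmres \
        -pc_type fieldsplit \
        -pc_fieldsplit_type schur \
        -fieldsplit_0_ksp_type preonly -fieldsplit_0_pc_type ilu \
        -fieldsplit_1_ksp_type minres  -fieldsplit_1_pc_type none
```

The inner solver choices above are only placeholders; as noted, an
effective preconditioner for the Schur complement (field 1) usually
requires physical insight into your particular problem.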
Jed