Mixed finite element discretization with petsc?

Thomas Witkowski thomas.witkowski at tu-dresden.de
Thu Oct 8 07:20:41 CDT 2009


Jed Brown wrote:
> On Mon, Oct 5, 2009 at 11:09, Thomas Witkowski
> <thomas.witkowski at tu-dresden.de> wrote:
>
>   
>> Okay, I thought it was a bit too complicated :) So it's clear to me that
>> there is no problem in assigning each row of the overall system to one
>> processor. But how do I get contiguous row indices?
>>
>> In my sequential code I assemble the matrices block-wise, so the overall
>> matrix looks as follows:
>>
>> {A & B \\ B^T & 0} * {u \\ p} = {0 \\ 0}
>>     
>
> This representation is purely symbolic.  There is a permutation of the
> dofs that would look like this, but it's not the ordering that you
> actually want to use.
>   
Okay, but with a different ordering I'll lose the symmetry of the system.
>   
>> So when I've partitioned my mesh, and say I have 1000 nodes in the mesh, the
>> first row and row 1001 are owned by the same process, because they come
>> from the discretization of the same node. So is it right to bring all these
>> matrix rows together by using different row indices?
>>     
>
> 1. Partition the mesh and resolve ownership
> 2. Each process counts the number of owned dofs for velocity + pressure.
> 3. MPI_Scan so that every process knows its starting offset.
> 4. Each process numbers owned dofs starting with this offset.
> 5. Scatter this global numbering so that every process knows the
> global index of its unowned interface dofs
> 6. Preallocate matrix
>   
Most of the points are already done in my FEM code (it's a general FEM
library, and I've solved complicated higher-order PDEs on up to 512 cores
using PETSc), but it's not yet generalized to mixed finite elements.
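
Just to check my understanding of steps 3 and 6, here is a minimal sketch of
how I would compute the offset and preallocate the matrix with PETSc. The
names n_owned, d_nnz and o_nnz are only illustrative; the actual per-row
nonzero counts would come from my mesh data structures:

#include <petscmat.h>

/* n_owned: number of locally owned dofs (velocity + pressure together);
   d_nnz/o_nnz: per-row nonzero counts for the diagonal/off-diagonal block. */
PetscErrorCode CreateSystemMatrix(MPI_Comm comm, PetscInt n_owned,
                                  PetscInt d_nnz[], PetscInt o_nnz[],
                                  Mat *A)
{
  PetscErrorCode ierr;
  PetscInt       offset;

  /* Step 3: MPI_Scan gives the inclusive prefix sum over the owned dof
     counts; subtracting n_owned yields this process's starting offset. */
  ierr = MPI_Scan(&n_owned, &offset, 1, MPIU_INT, MPI_SUM, comm);CHKERRQ(ierr);
  offset -= n_owned;

  /* Steps 4-5 (numbering the owned dofs offset..offset+n_owned-1 and
     communicating the indices of unowned interface dofs) depend on the
     mesh data structures and are omitted here. */

  /* Step 6: create and preallocate the parallel AIJ matrix. */
  ierr = MatCreate(comm, A);CHKERRQ(ierr);
  ierr = MatSetSizes(*A, n_owned, n_owned, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(*A, MATMPIAIJ);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(*A, 0, d_nnz, 0, o_nnz);CHKERRQ(ierr);
  return 0;
}
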
> Note that some PETSc calls, such as creating a vector, do the MPI_Scan
> logic for you.
>
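
That is convenient. If I understand it correctly, something like the
following gives the same offsets without an explicit MPI_Scan (again only a
sketch, with the same illustrative n_owned as above):

#include <petscvec.h>

/* PETSc distributes the vector and computes the offsets internally;
   [*low, *high) is the range of global indices owned by this process,
   so *low plays the role of the MPI_Scan offset above. */
PetscErrorCode GetOwnershipOffsets(MPI_Comm comm, PetscInt n_owned,
                                   PetscInt *low, PetscInt *high)
{
  Vec            v;
  PetscErrorCode ierr;

  ierr = VecCreateMPI(comm, n_owned, PETSC_DETERMINE, &v);CHKERRQ(ierr);
  ierr = VecGetOwnershipRange(v, low, high);CHKERRQ(ierr);
  ierr = VecDestroy(&v);CHKERRQ(ierr);
  return 0;
}
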
> The procedure above will work fine with a direct solver, but most
> preconditioners do a terrible job if you just give them an assembled
> indefinite matrix.  To do better, you may want to explore Schur
> complement or special domain decomposition preconditioners.  The
> former can be done using PCFieldSplit, but usually you need some
> physical insight to precondition the Schur complement.  These PCs
> sometimes prefer a different matrix representation, so if you have a
> good idea of what will work for your problem, let us know and we can
> give suggestions.
>   
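If I try PCFieldSplit, my understanding is that the splits could be defined
from the global numbering roughly like this (only a sketch; is_u and is_p are
index sets with the global indices of the locally owned velocity and pressure
dofs, and ksp is the solver for the coupled system):

#include <petscksp.h>

PetscErrorCode SetupFieldSplit(KSP ksp, IS is_u, IS is_p)
{
  PC             pc;
  PetscErrorCode ierr;

  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCFIELDSPLIT);CHKERRQ(ierr);
  /* Tell the preconditioner which rows belong to which field. */
  ierr = PCFieldSplitSetIS(pc, "u", is_u);CHKERRQ(ierr);
  ierr = PCFieldSplitSetIS(pc, "p", is_p);CHKERRQ(ierr);
  /* Use a Schur-complement factorization of the 2x2 block system; a good
     preconditioner for the Schur complement itself still has to come from
     the physics of the problem. */
  ierr = PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR);CHKERRQ(ierr);
  return 0;
}

The individual blocks could then be tuned from the command line with the
-fieldsplit_u_* and -fieldsplit_p_* option prefixes, if I understand the
naming correctly.
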
Yes, preconditioning is really a problem at the moment. I see the effect
that the number of iterations grows as the domain size increases. I think
it's only possible to solve the problem with a PDE-specific preconditioner
that eliminates the element size from the spectrum of the linear system.

Thanks a lot for making some things clearer to me!

Thomas

