[petsc-dev] Mat/DM dependency

Dmitry Karpeev karpeev at mcs.anl.gov
Tue Jan 5 14:24:36 CST 2010


On Tue, Jan 5, 2010 at 11:42 AM, Jed Brown <jed at 59a2.org> wrote:
> On Tue, 5 Jan 2010 10:52:36 -0600, Dmitry Karpeev <karpeev at mcs.anl.gov> wrote:
>> Automating decomposition/assembly of operators relative to a
>> decomposition of identity (doi) would, in my opinion, go a long way
>> towards decoupling Mat from DM.  DM would then convert their geometric
>> data into a corresponding doi and Mat wouldn't need to know about DMs
>> directly.
>
> I like keeping Mat purely algebraic.  When assembly has nontrivial
> semantics, I write a MyDMMatSetValuesBlockedWhatever(MyDM,Mat,...) so
> that Mat doesn't see DM (since the DM is explicitly available when this
> function is called, I prefer using it explicitly over patching the Mat
> to retain a reference to the DM).
>
>> That's what I'm trying to do with MatFwk -- a framework for general
>> block decompositions of matrices
>
> What granularity are you targeting with MatFwk?  I think it could be
> very difficult to have good performance for very fine grains (e.g.
> unassembled P1 elements).

I definitely see that problem.
My current view is that MatFwk could operate in two modes: (1)
unassembled and (2) assembled.
The assembled mode (described below) may be better suited for
fine-granularity decompositions, although even then it might be
challenging to make it perform well.

1) In the unassembled mode, MatFwk would hold relatively few Mat
objects {M_k} implementing the matrix blocks, as well as the Mat
objects implementing the corresponding projection/injection operators
{P_k, R_k} (P_k and R_k can frequently, but not always, be implemented
with VecScatters wrapped as Mats).
MatFwk would then call these operators to effect a MatMult operation:
M u = \sum_k R_k M_k P_k u.
In a sense, this would unify various other ways of composing matrices
in PETSc (MatComposite, etc.).
There would definitely be overhead associated with the internally held
Mat objects and the additional Vec storage for the intermediate
results.  This is nothing new, however, relative to MatComposite, for
example.  As long as the granularity is relatively coarse (few matrix
blocks), the performance penalty is probably acceptable.
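
Just to make the intent concrete, here is a minimal sketch of what the
unassembled MatMult could look like as a MatShell, in current PETSc C
conventions; FwkCtx and MatFwkMult are hypothetical illustration
names, not existing API:

    #include <petscmat.h>

    /* Hypothetical context for the sketch: n blocks with their
       operators and block-sized work vectors. */
    typedef struct {
      PetscInt n;          /* number of blocks                       */
      Mat      *R,*M,*P;   /* injection, block, projection operators */
      Vec      *xk,*yk;    /* per-block work vectors                 */
    } FwkCtx;

    /* y = \sum_k R_k M_k P_k x */
    static PetscErrorCode MatFwkMult(Mat A,Vec x,Vec y)
    {
      FwkCtx         *ctx;
      PetscInt       k;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = MatShellGetContext(A,&ctx);CHKERRQ(ierr);
      ierr = VecSet(y,0.0);CHKERRQ(ierr);
      for (k=0; k<ctx->n; k++) {
        ierr = MatMult(ctx->P[k],x,ctx->xk[k]);CHKERRQ(ierr);          /* restrict to block k   */
        ierr = MatMult(ctx->M[k],ctx->xk[k],ctx->yk[k]);CHKERRQ(ierr); /* apply block operator  */
        ierr = MatMultAdd(ctx->R[k],ctx->yk[k],y,y);CHKERRQ(ierr);     /* inject and accumulate */
      }
      PetscFunctionReturn(0);
    }

This would be attached with MatCreateShell() and
MatShellSetOperation(A,MATOP_MULT,(void(*)(void))MatFwkMult).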

Such unassembled matrices would definitely not be usable with many (if
not all) of the standard preconditioners.  However, this would allow
almost any Mat type to be used to implement the matrix blocks,
including using MatFwk recursively.  Matrix blocks could be extracted
and manipulated (as long as the layout stays the same) to set their
values, etc.

MatFwk, in a sense, would define the "block sparsity" or "block
connectivity" structure for blocks of Vec dofs: for each k, the
"input" block consists of those dofs that span the preimage of P_k
(loosely speaking, but this can be made precise).  Likewise, the
"output" block would be the dofs that span the image of R_k.  In this
way MatFwk is akin to an aij structure for blocks of dofs.
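
To make the "input block" idea concrete: in the simplest (serial) case
the block is just a list of global dof indices, and P_k can literally
be a VecScatter wrapped as a Mat.  A minimal sketch in current PETSc
conventions, where nblk, blkdofs[] and the global Vec x are assumed
inputs:

    IS         is_glob,is_loc;
    VecScatter sct;
    Mat        Pk;
    Vec        xk;    /* block-local work vector */

    ierr = VecCreateSeq(PETSC_COMM_SELF,nblk,&xk);CHKERRQ(ierr);
    ierr = ISCreateGeneral(PETSC_COMM_SELF,nblk,blkdofs,PETSC_COPY_VALUES,&is_glob);CHKERRQ(ierr);
    ierr = ISCreateStride(PETSC_COMM_SELF,nblk,0,1,&is_loc);CHKERRQ(ierr);
    ierr = VecScatterCreate(x,is_glob,xk,is_loc,&sct);CHKERRQ(ierr);
    /* the scatter wrapped as a Mat: MatMult(Pk,x,xk) restricts x to the block */
    ierr = MatCreateScatter(PETSC_COMM_SELF,sct,&Pk);CHKERRQ(ierr);

The injection R_k could often (though, as noted above, not always) be
built similarly, with the roles of the two index sets reversed.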


2) In the assembled mode, the matrix blocks would be defined only
conceptually.  The dof block sparsity structure still needs to be
defined, as do the ways to inject blocks of "local" dofs into the
larger Vec and vice versa (i.e., the R_ks and P_ks), but there would
be no per-block overhead in the form of separate objects implementing
each M_k, P_k, R_k triple.  We still need the {P_k, R_k}, but they
could be stored in a single structure, amortizing the overhead.  Then
the corresponding M_ks can be manipulated by setting their values,
which are then injected into M using the R_ks.  The advantage is that
the user need only know how to form these local values, which are
then correctly accumulated within M.  This is the model that FEM uses.
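
For reference, the familiar FEM accumulation that this generalizes is
just the standard element loop; a minimal sketch, assuming the
connectivity elem[] (ndof global dof indices per element, ne elements)
and the element matrices Ke[] come from the discretization:

    PetscInt e;
    for (e=0; e<ne; e++) {
      /* ... compute the ndof x ndof element matrix Ke for element e ... */
      ierr = MatSetValues(M,ndof,elem+e*ndof,ndof,elem+e*ndof,Ke,ADD_VALUES);CHKERRQ(ierr);
    }
    ierr = MatAssemblyBegin(M,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
    ierr = MatAssemblyEnd(M,MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);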

In fact, PETSc already provides for this via MatSetValues, except that
(a) all R_ks have the form of ADD_VALUES and (b) storing the mapping
from local to global degrees of freedom is the responsibility of the
user (the block index arrays supplied to MatSetValues).  So an
assembled MatFwk would store these block index arrays internally (and,
possibly, the matrix entries of the R_ks, if we need more than just
ADD_VALUES, as is the case for BDDC, mortar elements, etc.), which
implies some overhead.  HOWEVER, this information is already stored as
mesh data: element-to-vertex lists and vertex coordinates, in the P1
case.  Distilling it into P_ks/R_ks would only (i) make it algebraic
and (ii) make it internal to MatFwk as a resolution of identity.  This
is definitely a solver-centric viewpoint, which subordinates the
mesh/geometry to Mat, but without making references to the
corresponding software abstractions, at the expense of algebraizing
them.
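
PETSc's existing local-to-global machinery already shows what "storing
the block index arrays internally" looks like in the plain ADD_VALUES
case; a minimal sketch in current PETSc conventions, assuming ltog[]
(the local-to-global dof map, nlocal entries) and the local index
arrays lrows[]/lcols[] with values vals[] come from the decomposition:

    ISLocalToGlobalMapping map;

    ierr = ISLocalToGlobalMappingCreate(PETSC_COMM_SELF,1,nlocal,ltog,PETSC_COPY_VALUES,&map);CHKERRQ(ierr);
    ierr = MatSetLocalToGlobalMapping(M,map,map);CHKERRQ(ierr);
    /* later, per block, with purely local indices: */
    ierr = MatSetValuesLocal(M,nr,lrows,nc,lcols,vals,ADD_VALUES);CHKERRQ(ierr);

Anything beyond ADD_VALUES (BDDC weights, mortar constraints) would
need the R_k entries stored explicitly, as described above.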

DM implementations could then be built on top of MatFwk, and they
would be responsible for converting geometry
to algebra (i.e., connectivity and continuity into
projection/injection operators).
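
For P1 elements that conversion is almost a relabeling: the dof block
of element e is just its vertex list, so a DM could hand the
connectivity table over essentially unchanged.  A minimal sketch,
assuming the element-to-vertex table econn[] (nv vertices per element,
ne elements) is the mesh data:

    PetscInt e;
    IS       *blocks;

    ierr = PetscMalloc1(ne,&blocks);CHKERRQ(ierr);
    for (e=0; e<ne; e++) {
      /* the vertices of element e are exactly the dofs of block e */
      ierr = ISCreateGeneral(PETSC_COMM_SELF,nv,econn+e*nv,PETSC_COPY_VALUES,&blocks[e]);CHKERRQ(ierr);
    }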

Dmitry.

>
> Jed
>


