[petsc-dev] Handling ADD_VALUES with DMComposite

Dmitry Karpeev karpeev at mcs.anl.gov
Wed Nov 17 11:46:51 CST 2010


Most of what you are describing is supposed to be supported by MatDD and VecDD.
In particular, an "assembled" form of MatDD would store everything in a single
matrix (of any type supporting MatSetValues) and translate patch indices to
global indices.  As soon as I've debugged GASM, I can reuse a lot of its guts
for MatDD (not to mention that working on GASM has clarified a lot of
impl-level ideas).
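
To make the index-translation point concrete, here is a minimal sketch (not
actual MatDD code) of adding one patch's contribution into a single assembled
matrix with ADD_VALUES; PatchAddValues, edofs[], and patch2global[] are
placeholders for whatever the DD object would own.

    #include <petscmat.h>

    /* Sketch only: add a patch's dense element matrix Ke into a single
       assembled matrix J (any type supporting MatSetValues), translating
       patch-local indices to global indices on the way in. */
    static PetscErrorCode PatchAddValues(Mat J,PetscInt n,const PetscInt edofs[],
                                         const PetscInt patch2global[],
                                         const PetscScalar Ke[])
    {
      PetscInt       i,g[64];               /* assume n <= 64 in this sketch */
      PetscErrorCode ierr;

      PetscFunctionBegin;
      for (i=0; i<n; i++) g[i] = patch2global[edofs[i]]; /* patch -> global */
      ierr = MatSetValues(J,n,g,n,g,Ke,ADD_VALUES);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

Contributions from overlapping patches then accumulate naturally through
ADD_VALUES in the single backing matrix.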

Dmitry.

On Wed, Nov 17, 2010 at 9:26 AM, Jed Brown <jed at 59a2.org> wrote:
> On Tue, Nov 16, 2010 at 21:14, Jed Brown <jed at 59a2.org> wrote:
>>
>> discuss matrix insertion in the multi-physics context.
>
> I just had a chat with Dave about this, and I realize that some of my early
> ideas are crappy for block formats.  Suppose I have a matrix logically
> partitioned as J = [A B; C D].  I would like to either (1) hold all of J as
> MPIAIJ or (2) hold J as MatBlock (which needs an MPI implementation; maybe
> that is MatDD, maybe it is based on Dave's implementation in PETSc-Ext),
> where A and D are (S)BAIJ.  The case of A and D on sub-communicators is
> interesting, but should be possible eventually.
>
> So what if I could get lightweight references to the blocks with an API like
> MatGetSubMatrices?  In the MatBlock case, it would just give me references
> to the pieces, but in the AIJ case, it would give me a Mat of a new type
> that only implements MatSetValues (and variants).  This implementation would
> end up just calling MatSetValues(J,...) with fixed-up indices.
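
To illustrate the kind of lightweight reference proposed here (a hand-rolled
sketch, not an existing PETSc type), such a view could carry J plus the
block's offsets and forward set-values calls with the indices fixed up.  For
simplicity the sketch assumes the block occupies a contiguous range of J's
global indices; a real version would translate through a map, as in the patch
sketch earlier in this message.

    #include <petscmat.h>

    /* Sketch of a lightweight view of one block of an assembled matrix J.
       A real version would be a Mat type implementing only MatSetValues
       (and variants); here it is just a struct and a helper. */
    typedef struct {
      Mat      J;          /* the full matrix, e.g. MPIAIJ          */
      PetscInt roff,coff;  /* global offsets of this block inside J */
    } BlockView;

    static PetscErrorCode BlockViewSetValues(BlockView *B,PetscInt m,const PetscInt rows[],
                                             PetscInt n,const PetscInt cols[],
                                             const PetscScalar v[],InsertMode mode)
    {
      PetscInt       i,gr[32],gc[32];      /* assume m,n <= 32 in this sketch */
      PetscErrorCode ierr;

      PetscFunctionBegin;
      for (i=0; i<m; i++) gr[i] = rows[i] + B->roff;  /* fix up the indices */
      for (i=0; i<n; i++) gc[i] = cols[i] + B->coff;
      ierr = MatSetValues(B->J,m,gr,n,gc,v,mode);CHKERRQ(ierr);
      PetscFunctionReturn(0);
    }

When J is a MatBlock instead, the analogous view would hand the values
straight to the stored (S)BAIJ piece, so the same user assembly code (one such
call per block an element touches) covers both storage choices.
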
> The only downside of this is that the user would have to make a separate
> MatSetValues call for each part, which is no performance penalty when J is
> MatBlock, but is somewhat inefficient when J is AIJ.  I think this is an
> acceptable penalty since it allows the MatBlock pieces of J to be stored in
> better formats such as (S)BAIJ.  It also exposes some concurrency if
> assembly were parallelized with threads (in which case MatSetValues would
> need to hold locks).
>
> In a similar direction, I think that DMComposite's subvector monkeying (via
> VecPlaceArray) could be replaced by a Vec API to get the pieces.  This could
> do VecPlaceArray for PETSc's current Vec types, but it would return the
> (no-copy) pieces for a VecBlock.  This would permit the pieces to have type
> VecGhost, for example, thus allowing no-copy access from the global
> multi-physics vector to ghosted single-physics vectors.  Of course users
> could not use VecBlock with MPIAIJ (or it would need an extra copy), but
> this would seem to permit a fast implementation in both the fully assembled
> and heavily split cases.
> Jed
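
For the Vec side, a rough sketch of the fallback path for a plain contiguous
vector; VecGetPiece is a hypothetical name, and the VecGetArray/VecPlaceArray
dance is essentially what DMComposite's access routines do today.  For a
VecBlock the same call would simply return the stored piece (e.g. a VecGhost)
with no copy.

    #include <petscvec.h>

    /* Hypothetical: expose the piece of a contiguous global vector X
       starting at local offset 'off' through the previously created
       vector 'piece' (whose local length fixes the extent), no copy. */
    static PetscErrorCode VecGetPiece(Vec X,PetscInt off,Vec piece)
    {
      PetscScalar    *x;
      PetscErrorCode ierr;

      PetscFunctionBegin;
      ierr = VecGetArray(X,&x);CHKERRQ(ierr);
      ierr = VecPlaceArray(piece,x+off);CHKERRQ(ierr);   /* no copy */
      ierr = VecRestoreArray(X,&x);CHKERRQ(ierr);
      /* a matching VecRestorePiece would call VecResetArray(piece) */
      PetscFunctionReturn(0);
    }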


