[petsc-dev] PCASM uses DMCreateDecomposition which splits fields, NOT expected behavior

Dmitry Karpeev karpeev at mcs.anl.gov
Sat May 26 20:04:46 CDT 2012


On Saturday, May 26, 2012, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> Dmitry, I have problems with this commit.
> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/440ec9350d51
>
> If I try running -pc_type asm with a vector problem defined on a DM,
> PCASM calls DMCreateDecomposition, which splits the fields on each
> process. This is of very limited value since usually every field is
> coupled to every other, so after increasing the overlap by one you end
> up solving the entire subdomain twice. However, at this point you are
> not using overlap, so instead we get subdomain- and field-blocked Jacobi
> (equivalent to BJacobi with additive FieldSplit inside). This is
> certainly unexpected behavior.
> It also crashes with Seq(S)BAIJ because MatGetSubMatrices_SeqBAIJ can
> only extract when the block size matches.
> And it does the wrong thing when used with MPI(S)BAIJ because
> MatGetSubMatrices_MPIBAIJ calls ISCompressIndicesGeneral, which silently
> "fills in" the blocks, leading to a Mat and Vec size mismatch. Hong, how
> should we enforce that the original index set actually addresses blocks?
> Why is it not okay to sort? (Don't we rely on those indices being sorted
> somewhere else anyway?)
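
One way to enforce that the original index set actually addresses whole
blocks, as asked above, would be a check along these lines before handing the
IS to the BAIJ submatrix routines (ISCheckAddressesBlocks is a hypothetical
helper, not an existing PETSc function, and it assumes the indices are
sorted):

  #include <petscis.h>

  static PetscErrorCode ISCheckAddressesBlocks(IS is, PetscInt bs, PetscBool *ok)
  {
    PetscInt        n, i, j;
    const PetscInt *idx;
    PetscErrorCode  ierr;

    PetscFunctionBegin;
    *ok = PETSC_TRUE;
    ierr = ISGetLocalSize(is, &n);CHKERRQ(ierr);
    if (n % bs) { *ok = PETSC_FALSE; PetscFunctionReturn(0); }
    ierr = ISGetIndices(is, &idx);CHKERRQ(ierr);
    /* with sorted indices, each group of bs entries must be the consecutive
       dofs of one block, starting at a block boundary */
    for (i = 0; i < n && *ok; i += bs) {
      if (idx[i] % bs) *ok = PETSC_FALSE;
      for (j = 1; j < bs && *ok; j++) {
        if (idx[i + j] != idx[i] + j) *ok = PETSC_FALSE;
      }
    }
    ierr = ISRestoreIndices(is, &idx);CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }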

The problem is that this API mixes field splits and (spatial) domain
decompositions. Initially I thought the two kinds could be served up via the
same API because they use the same syntax, but the semantics are different.
In particular, domain decompositions don't change the block size, while field
splits typically do. When they do, it is usually downwards -- when splitting
up a block of fields pointwise -- but sometimes upwards: in a primal-dual
split of a KKT system the primal subsystem might have a higher natural block
size than the full system.
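
To make the block-size point concrete, here is a small standalone sketch (my
own illustration, with made-up indices) for a 3-component (u,v,p) problem
interlaced pointwise: a domain-decomposition index set addresses whole
blocks, so block size 3 is preserved, while a field split that extracts only
the velocity dofs has a natural block size of 2:

  #include <petscis.h>

  int main(int argc, char **argv)
  {
    IS             domain, fields;
    PetscInt       pts[] = {0, 1, 2, 3};              /* point (block) indices    */
    PetscInt       uv[]  = {0, 1, 3, 4, 6, 7, 9, 10}; /* u,v dofs of those points */
    PetscErrorCode ierr;

    ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;
    /* domain decomposition: whole blocks, block size 3 is kept */
    ierr = ISCreateBlock(PETSC_COMM_SELF, 3, 4, pts, PETSC_COPY_VALUES, &domain);CHKERRQ(ierr);
    /* field split: only the velocity components of each point; the natural
       block size of the extracted subsystem is 2, not 3 */
    ierr = ISCreateGeneral(PETSC_COMM_SELF, 8, uv, PETSC_COPY_VALUES, &fields);CHKERRQ(ierr);
    ierr = ISView(domain, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
    ierr = ISView(fields, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);
    ierr = ISDestroy(&domain);CHKERRQ(ierr);
    ierr = ISDestroy(&fields);CHKERRQ(ierr);
    ierr = PetscFinalize();
    return 0;
  }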

I think the solution is to split up the API: DMCreateFieldSplit() and
DMCreateDomainDecomposition(). I'll look into it later tonight, when I get
to my computer.
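
For reference, one possible shape for the two entry points, modeled on the
existing DMCreateDecomposition() calling sequence (the names are the ones
proposed above; the argument lists are only a guess, nothing is settled):

  #include <petscdm.h>

  /* field splits: one IS (and optionally one sub-DM) per named group of
     fields; the block size of a split may differ from that of the parent DM */
  PetscErrorCode DMCreateFieldSplit(DM dm, PetscInt *nsplits, char ***names, IS **fields, DM **subdms);

  /* domain decomposition: one IS (and optionally one sub-DM) per subdomain;
     the block size of the parent DM is preserved */
  PetscErrorCode DMCreateDomainDecomposition(DM dm, PetscInt *nsub, char ***names, IS **subdomains, DM **subdms);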

> Our test coverage must be pretty sucky if we went two months with ASM
> this broken for systems.
>
> As a lower priority, -pc_asm_decomposition should provide a list of valid
> decompositions rather than be a magic string. Also, this form cannot be
> called outside of a PetscOptionsBegin/End pair, otherwise it doesn't print
> -help output. It should be moved to PCSetUp_ASM.
> ierr = PetscOptionsString("-pc_asm_decomposition", "Name of the DM defining the composition", "PCSetDM", ddm_name, ddm_name, 1024, &flg);CHKERRQ(ierr);

The problem here is that the PC object doesn't really know what
decompositions are available -- the DM does. We could have
DMGetSubdomainDecompositionNames() advertise what's available. Another
problem is that which decompositions make sense is DM-implementation
dependent. It is even more stark for field splits: I have a fork of libMesh
that can generate decompositions based on the system's variables, and
essentially any grouping of variable names is a valid split (e.g.,
v1,v2;v3,v4), and I think it makes sense to maintain that flexibility.
Because of that, I think the best help line would only exhibit the syntax
rather than enumerate all possible splits; DMSetUp_Xxx should then handle
whatever the user specifies.
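
A rough sketch of what the PC side could look like, with the option read
inside a PetscOptionsBegin/End pair so that -help works, and with a help
string that only exhibits the split syntax (the wrapper function and its
name are made up for illustration):

  #include <petscpc.h>

  static PetscErrorCode PCSetFromOptions_ASM_Decomposition(PC pc)
  {
    char           ddm_name[1024] = "";
    PetscBool      flg;
    PetscErrorCode ierr;

    PetscFunctionBegin;
    ierr = PetscObjectOptionsBegin((PetscObject)pc);CHKERRQ(ierr);
    /* a DMGetSubdomainDecompositionNames()-style query (proposed above, not
       existing) could also be used to append the DM's available names here */
    ierr = PetscOptionsString("-pc_asm_decomposition",
                              "Decomposition defined by the DM; syntax: groups of field names separated by ';', e.g. v1,v2;v3,v4",
                              "PCSetDM", ddm_name, ddm_name, sizeof(ddm_name), &flg);CHKERRQ(ierr);
    if (flg) {
      /* here the PC would ask its DM for the named decomposition */
    }
    ierr = PetscOptionsEnd();CHKERRQ(ierr);
    PetscFunctionReturn(0);
  }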

Dmitry.
>
>
> I have disabled that code for now, but we have to decide whether it's
> conceptually right (considering the effect on overlap) and, if so, how to
> fix it.
> http://petsc.cs.iit.edu/petsc/petsc-dev/rev/d9a0e3f9dacd