[petsc-dev] PCASM uses DMCreateDecomposition which splits field, NOT expected behavior
Dmitry Karpeev
karpeev at mcs.anl.gov
Sat May 26 22:24:15 CDT 2012
On Sat, May 26, 2012 at 9:07 PM, Jed Brown <jedbrown at mcs.anl.gov> wrote:
> On Sat, May 26, 2012 at 8:04 PM, Dmitry Karpeev <karpeev at mcs.anl.gov> wrote:
>
>>
>>
>> On Saturday, May 26, 2012, Jed Brown <jedbrown at mcs.anl.gov> wrote:
>> > Dmitry, I have problems with this commit.
>> > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/440ec9350d51
>> >
>> > If I try running -pc_type asm with a vector problem defined on a DM,
>> > PCASM calls DMCreateDecomposition, which splits fields on each
>> > process. This is of very limited value since usually every field is
>> > coupled to every other, so after increasing the overlap by one you end
>> > up solving the entire subdomain twice. However, at this point no
>> > overlap is used, so instead we get subdomain- and field-blocked Jacobi
>> > (equivalent to BJacobi with additive fieldsplit inside). This is
>> > certainly unexpected behavior.
>> > It also crashes with Seq(S)BAIJ because MatGetSubMatrices_SeqBAIJ can
>> > only extract when the block size matches. And it does the wrong thing
>> > when used with MPI(S)BAIJ because MatGetSubMatrices_MPIBAIJ calls
>> > ISCompressIndicesGeneral, which silently "fills in" the blocks,
>> > leading to a Mat and Vec size mismatch. Hong, how should we enforce
>> > that the original index set actually addresses blocks? Why is it not
>> > okay to sort? (Don't we rely on those indices being sorted somewhere
>> > else anyway?)
>>
>> The problem is that this API mixes field splits and (spatial) domain
>> decompositions.
>> Initially I thought the two kinds could be served up via the same API
>> because they use the same syntax, but the semantics are different. In
>> particular, domain decompositions don't change the block size, while field
>> splits typically do. The change is usually downward, as when splitting a
>> block of fields pointwise, but sometimes upward: in a primal-dual split
>> of a KKT system the primal subsystem might have a higher natural block
>> size than the full system.
>>
>> I think the solution is to split up the API: DMCreateFieldSplit() and
>> DMCreateDomainDecomposition(). I'll look into it later tonight, when I get
>> to my computer.
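For concreteness, the split might look roughly like the following two
prototypes. The names come from this thread, but the argument lists are
only a sketch modeled on the current DMCreateDecomposition() and may well
change:

  PetscErrorCode DMCreateFieldSplit(DM dm, PetscInt *nsplits,
                                    char ***names, IS **fields,
                                    DM **subdms);
  PetscErrorCode DMCreateDomainDecomposition(DM dm, PetscInt *nsub,
                                             char ***names, IS **inner,
                                             IS **outer, DM **subdms);

The domain version hands back inner (non-overlapping) and outer
(overlapping) index sets, which is what PC(G)ASM works with, while the
field version only needs one IS per split.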
>
>
> If the arguments are definitely going to be the same, there could be an
> enum to designate which form is being requested.
>
I think that might make it more cumbersome for implementations to provide
only domain decompositions or only field splits.
>
>
> What are we going to do now? Leave the code there deactivated like it is
> now for the release and discuss later?
>
I want to fix and reactivate the code for the release so I can use it from
libMesh. I'll push roughly this fix by tomorrow: replace
DMCreateDecomposition() with DMCreateFieldSplit() and
DMCreateDomainDecomposition(), used, respectively, by PCFIELDSPLIT and
PC(G)ASM. That should address the problems you were encountering -- the
unintended forwarding of field splits to PCASM.
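With that change PCSetUp_(G)ASM would only ever see the spatial
decomposition. A rough sketch of the intended forwarding, with the
argument lists from the prototypes above, error handling abbreviated, and
nothing final:

  DM       dm;
  PetscInt n;
  char     **names;
  IS       *inner, *outer;
  DM       *subdms;

  ierr = PCGetDM(pc, &dm);CHKERRQ(ierr);
  if (dm) {
    /* spatial subdomains only; field splits now go through
       DMCreateFieldSplit() and are consumed by PCFIELDSPLIT */
    ierr = DMCreateDomainDecomposition(dm, &n, &names, &inner, &outer,
                                       &subdms);CHKERRQ(ierr);
    if (n) {
      ierr = PCASMSetLocalSubdomains(pc, n, outer, inner);CHKERRQ(ierr);
    }
    /* cleanup of names/inner/outer/subdms omitted for brevity */
  }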
>
>
>>
>>
>> > Our test coverage must be pretty sucky if we went two months with ASM
>> > this broken for systems.
>> >
>> > As a lower priority, -pc_asm_decomposition should provide a list of
>> > valid decompositions rather than be a magic string. Also, this form
>> > cannot be called outside of a PetscOptionsBegin/End pair, otherwise it
>> > doesn't print -help output. It should be moved to PCSetUp_ASM.
>> >   ierr = PetscOptionsString("-pc_asm_decomposition", "Name of the DM
>> >     defining the composition", "PCSetDM", ddm_name, ddm_name, 1024,
>> >     &flg);CHKERRQ(ierr);
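If that call does move into PCSetUp_ASM, it would sit inside the usual
pair, roughly like this (ddm_name and flg are locals as in the snippet
above; the title and manual-page strings here are just placeholders):

  ierr = PetscOptionsBegin(((PetscObject)pc)->comm,
                           ((PetscObject)pc)->prefix,
                           "Additive Schwarz options", "PC");CHKERRQ(ierr);
  ierr = PetscOptionsString("-pc_asm_decomposition",
                            "Name of the DM defining the composition",
                            "PCSetDM", ddm_name, ddm_name, 1024,
                            &flg);CHKERRQ(ierr);
  ierr = PetscOptionsEnd();CHKERRQ(ierr);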
>>
>> The problem here is that the PC object doesn't really know what
>> decompositions are available -- the DM does. We could have
>> DMGetSubdomainDecompositionNames() advertise what's available. Another
>> problem is that which decompositions make sense is DM-impl-dependent.
>> It is more stark for field splits: I have a fork of libMesh that can
>> generate decompositions based on the system's variables, and
>> essentially any grouping of variable names is a valid split (e.g.,
>> v1,v2;v3,v4); I think it makes sense to maintain that flexibility.
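Tokenizing such a grouping string is straightforward; here is a minimal
standalone sketch (the ';'/',' syntax follows the v1,v2;v3,v4 example
above, the helper name is made up, and this is not code from libMesh or
petsc-dev):

  #include <stdio.h>
  #include <string.h>

  /* Split a spec like "v1,v2;v3,v4" into groups: ';' separates splits,
     ',' separates variables within a split. */
  static void ParseSplits(const char *spec)
  {
    char buf[256], *split, *save1;

    strncpy(buf, spec, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    for (split = strtok_r(buf, ";", &save1); split;
         split = strtok_r(NULL, ";", &save1)) {
      char *var, *save2;
      printf("split:");
      for (var = strtok_r(split, ",", &save2); var;
           var = strtok_r(NULL, ",", &save2)) printf(" %s", var);
      printf("\n");
    }
  }

  int main(void)
  {
    ParseSplits("v1,v2;v3,v4"); /* prints "split: v1 v2", "split: v3 v4" */
    return 0;
  }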
>
>
> Cool, what about deeper nestings like [v1,[[v2,v3],v4]]?
>
Nested splits are to be handled recursively by the individual splits' DMs.
And each split should also be domain-decomposable, so that
geometrically-defined subdomains (e.g., Exodus II blocks) can be used with
PC(G)ASM.
I am testing that functionality in libMesh/Moose right now.
>
>
>> Because of that I think the best help line would only exhibit the syntax,
>> rather than enumerate all possible splits.
>
>
> How about having a DM-specific function for that designation?
>
DMXXXGetFieldSplits()/DMXXXGetDomainDecompositions(), for example?
I think that would work, if the implementation defined only a small number
of decompositions. If instead there is a short grammar defining well-formed
strings that name such decompositions, I'm not sure how to communicate that
to the user other than in the documentation. I have DMLibMeshGetVars() to
return all vars; field splits can be defined as arbitrary groupings of
those, and each split will then advertise only the vars in the grouping
that defines it.
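The query itself would be tiny; a rough sketch (only the name
DMLibMeshGetVars comes from above, the argument list is a guess):

  /* return the number of variables and their names; the caller then forms
     splits by grouping the names, e.g. "v1,v2;v3,v4" */
  PetscErrorCode DMLibMeshGetVars(DM dm, PetscInt *nvars,
                                  const char ***varnames);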
Dmitry.
>
>
>> Then DMSetUp_Xxx should handle that.
>>
>> Dmitry.
>>
>> >
>> >
>> > I have disabled that code for now, but we have to decide whether it's
>> > conceptually right (considering the effect on overlap) and if so, how
>> > to fix it.
>> > http://petsc.cs.iit.edu/petsc/petsc-dev/rev/d9a0e3f9dacd
>>
>
>