[petsc-users] Using BDDC preconditioner for assembled matrices

Abdullah Ali Sivas abdullahasivas at gmail.com
Wed Oct 24 12:59:23 CDT 2018


Hi Stefano,

I am trying to solve the div-div problem (or the grad-div problem in strong
form) with an H(div)-conforming FEM. I am getting the matrices from an
external source (to be clear, from an ngsolve script) and I am not sure if
it is possible to get a MATIS matrix out of that, so I am just treating it
as if I had no access to the assembly code. The iteration counts are 2, 31,
26, 27, 31, respectively, for matrix sizes 282, 1095, 4314, 17133, 67242,
267549. However, the norm of the residual also grows significantly:
7.38369e-09 for size 1095 and 5.63828e-07 for size 267549. I can try larger
sizes, or maybe this is expected for this problem.
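
To be concrete, what I am doing is essentially the following (a minimal
sketch along the lines of ex72.c, with error checking omitted; A is the
assembled AIJ matrix read from the file, and b, x the corresponding
vectors):

  Mat Ais;   /* A, b, x are assumed to be already loaded */
  KSP ksp;
  PC  pc;

  /* let PETSc disassemble the assembled AIJ matrix into a MATIS */
  MatConvert(A, MATIS, MAT_INITIAL_MATRIX, &Ais);

  KSPCreate(PETSC_COMM_WORLD, &ksp);
  KSPSetOperators(ksp, Ais, Ais);
  KSPGetPC(ksp, &pc);
  PCSetType(pc, PCBDDC);   /* PCBDDC requires a MATIS operator */
  KSPSetFromOptions(ksp);
  KSPSolve(ksp, b, x);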

As a side question: if we divide the domain into as many subdomains as MPI
processes, does that mean convergence is affected negatively as the number
of processes increases? I know that the alternating Schwarz method and some
other domain decomposition methods sometimes suffer as the subdomain radius
decreases. From your description, BDDC sounds pretty similar to those
methods.

Best wishes,
Abdullah Ali Sivas

On Wed, 24 Oct 2018 at 05:28, Stefano Zampini <stefano.zampini at gmail.com>
wrote:

> Abdullah,
>
> The "Neumann" problems Jed is referring to result from assembling your
> problem on each subdomain ( = MPI process) separately.
> Assuming you are using FEM, these problems have been historically  named
> "Neumann" as they correspond to a problem with natural boundary conditions
> (Neumann bc for Poisson).
> Note that in PETSc the subdomain decomposition is associated with the mesh
> decomposition.
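>
> For concreteness, here is a minimal sketch of how one builds such an
> unassembled operator when the assembly code is available (N, the global
> number of dofs, and l2g, the local-to-global map coming from the mesh
> decomposition, are assumptions of the sketch):
>
>   Mat A, Aloc;
>   ISLocalToGlobalMapping l2g;   /* local dof -> global dof on each rank */
>
>   /* a MATIS stores the subdomain ("Neumann") matrices A_j such that
>      A = \sum_j R^T_j A_j R_j, without ever assembling A globally */
>   MatCreateIS(PETSC_COMM_WORLD, 1, PETSC_DECIDE, PETSC_DECIDE, N, N,
>               l2g, l2g, &A);
>   MatISGetLocalMat(A, &Aloc);
>   /* ... set entries with MatSetValuesLocal(A, ...) or directly in Aloc ... */
>   MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
>   MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);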
>
> When converting from an assembled AIJ matrix to a MATIS format, such
> "Neumann" information is lost.
> You can disassemble an AIJ matrix, in the sense that you can find local
> matrices A_j such that A = \sum_j R^T_j A_j R_j (as is done in ex72.c),
> but you cannot guarantee (unless you solve an optimization problem) that
> the disassembling will produce subdomain Neumann problems that are
> consistent with your FEM problem.
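>
> As a sanity check, you can at least verify that the disassembled operator
> reproduces the action of the assembled one (a minimal sketch, assuming A
> is the original AIJ matrix and Ais its MATIS conversion):
>
>   PetscBool equal;
>   /* compare A*x and Ais*x for 5 random vectors x */
>   MatMultEqual(A, Ais, 5, &equal);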
>
> I added such disassembling code a few months ago, just to have another
> alternative for preconditioning AIJ matrices in PETSc; there are a few
> tweaks one can do to improve the quality of the disassembling, but I
> discourage its use unless you have no access to the FEM assembly code.
>
> With that said, what problem are you trying to solve? Are you using DMDA
> or DMPlex? What results have you obtained using the automatic
> disassembling?
>
> On Wed, 24 Oct 2018 at 08:14, Abdullah Ali Sivas <
> abdullahasivas at gmail.com> wrote:
>
>> Hi Jed,
>>
>> Thanks for your reply. The assembled matrix I have corresponds to the
>> full problem on the full mesh. There are no "Neumann" problems (or any sort
>> of domain decomposition) defined in the code that generates the matrix.
>> However, I think assembling the full problem is equivalent to implicitly
>> assembling the "Neumann" problems, since the system can be partitioned as:
>>
>> [ A_{LL}  A_{LI} ] [ u_L ]   [ F ]
>> [ A_{IL}  A_{II} ] [ u_I ] = [ G ]
>>
>> and G should correspond to the Neumann problem. I might be thinking about
>> this wrongly (or maybe I have completely misunderstood the idea); if so,
>> please correct me. But I think the problem is that I am not explicitly
>> telling PCBDDC which dofs are interface dofs.
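>>
>> To be concrete, what I would like to do is something along these lines (a
>> sketch only; interface_is is a hypothetical IS that I would build from my
>> list of interface dofs, and I am not sure this is the intended usage):
>>
>>   PC pc;
>>   IS interface_is;   /* global indices of my interface dofs */
>>
>>   KSPGetPC(ksp, &pc);
>>   PCSetType(pc, PCBDDC);
>>   /* mark the interface dofs, where the natural ("Neumann") conditions
>>      of the subdomain problems hold */
>>   PCBDDCSetNeumannBoundaries(pc, interface_is);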
>>
>> Regards,
>> Abdullah Ali Sivas
>>
>> On Tue, 23 Oct 2018 at 23:16, Jed Brown <jed at jedbrown.org> wrote:
>>
>>> Did you assemble "Neumann" problems that are compatible with your
>>> definition of interior/interface degrees of freedom?
>>>
>>> Abdullah Ali Sivas <abdullahasivas at gmail.com> writes:
>>>
>>> > Dear all,
>>> >
>>> > I have a series of linear systems coming from a PDE for which BDDC is
>>> > an optimal preconditioner. These linear systems are assembled; I read
>>> > them from a file, then convert them into MATIS as required (as in
>>> > https://www.mcs.anl.gov/petsc/petsc-current/src/ksp/ksp/examples/tutorials/ex72.c.html
>>> > ). I expect each of the systems to converge to the solution in almost
>>> > the same number of iterations, but I do not observe this. I think it is
>>> > because I do not provide enough information to the preconditioner. I
>>> > can get a list of inner dofs and interface dofs; however, I do not know
>>> > how to use them. Does anyone have any insights about this, or has
>>> > anyone done something similar?
>>> >
>>> > Best wishes,
>>> > Abdullah Ali Sivas
>>>
>>
>
> --
> Stefano
>