[petsc-dev] API changes in MatIS

Jed Brown jedbrown at mcs.anl.gov
Tue May 22 17:27:31 CDT 2012


On Mon, May 21, 2012 at 3:45 PM, Stefano Zampini
<stefano.zampini at gmail.com>wrote:

> Yes. In the rectangular case you will need both global_to_local and
> local_to_global scatters. The square case is a special case for which they
> are the same (except that they are performed in SCATTER_FORWARD or
> SCATTER_REVERSE mode). With rectangular matrices, you can perform MatMult
> using the SCATTER_FORWARD mode for the two different scatters, and
> MatMultTranspose using SCATTER_REVERSE.
>

Yes, scatter is fine. An ISGlobalToLocalMapping (allowing you to translate
global indices into local indices) is not.
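For concreteness, here is a minimal sketch (not the actual MatIS code; it is
written against the current PetscCall-based error handling, and the scatter
and work-vector names are purely illustrative) of how a rectangular MatMult
could use the two scatters, both in SCATTER_FORWARD mode:

#include <petscmat.h>

static PetscErrorCode MatMultRectangularSketch(Mat Alocal, VecScatter cctx, VecScatter rctx,
                                               Vec x, Vec xl, Vec yl, Vec y)
{
  PetscFunctionBeginUser;
  /* gather the needed entries of the global column vector x into the local column space */
  PetscCall(VecScatterBegin(cctx, x, xl, INSERT_VALUES, SCATTER_FORWARD));
  PetscCall(VecScatterEnd(cctx, x, xl, INSERT_VALUES, SCATTER_FORWARD));
  /* local unassembled multiply */
  PetscCall(MatMult(Alocal, xl, yl));
  /* sum the local contributions into the global row vector y */
  PetscCall(VecZeroEntries(y));
  PetscCall(VecScatterBegin(rctx, yl, y, ADD_VALUES, SCATTER_FORWARD));
  PetscCall(VecScatterEnd(rctx, yl, y, ADD_VALUES, SCATTER_FORWARD));
  /* MatMultTranspose would reuse the same two scatters in SCATTER_REVERSE mode */
  PetscFunctionReturn(PETSC_SUCCESS);
}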


>
> Dohrmann's approach for solving the local saddle point problems is
> closely tied to how they are currently solved in PCBDDC. I'm planning to
> add a new feature to PCBDDC which will give the option of factoring and
> solving the local saddle point problems directly (factoring the whole local
> saddle point problem). I used the change of basis approach in the past,
> but I'm more inclined to adopt the saddle point approach, because you can
> (almost) easily switch to inexact subdomain solvers. However, I think the
> change of basis can also be added to the current method with minor code
> changes.
>

Okay. Presumably the change of basis can be done with a MatPtAP if
desirable.
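In PETSc that would essentially be a one-liner; a hedged sketch (assuming K
and T are already-assembled local Mats, with illustrative names):

#include <petscmat.h>

/* Sketch: form Kt = T^T K T for a change of basis T applied to a local
 * subdomain matrix K. */
static PetscErrorCode ChangeOfBasisSketch(Mat K, Mat T, Mat *Kt)
{
  PetscFunctionBeginUser;
  PetscCall(MatPtAP(K, T, MAT_INITIAL_MATRIX, PETSC_DEFAULT, Kt));
  PetscFunctionReturn(PETSC_SUCCESS);
}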


> Actually, in PCBDDC the local coarse matrix is computed using a theoretical
> equivalence for the PtAP operation between the coarse basis and the
> unassembled MATIS matrix (the coarse basis functions are continuous only at
> vertex and constraint dofs). PtAP (where P is dense) is avoided purely
> because of its computational cost.
>

Are you basically just doing a local PtAP or do you use the equivalence

K \Psi = C^T \Lambda   (notation of Dohrmann's Eq 2)

or something else?
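(For reference, the equivalence I mean, assuming the local constraints are
normalized so that C \Psi = I, is

  \Psi^T K \Psi = \Psi^T C^T \Lambda = (C \Psi)^T \Lambda = \Lambda,

which is how the dense PtAP could be bypassed; sign conventions may differ
from Dohrmann's paper.)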


> I'm not familiar with multigrid, but if you can implement an MG framework
> for the application of a nonoverlapping (or strip) preconditioner it would
> be great. I can help you with the Neumann-Neumann and BDDC cases.
>

Cool, we can definitely do this.

The software reuse is somewhat delicate when asked to also support
iterating in the reduced space, but I don't think that's as big of a deal
in 3D (the reduced space isn't massively smaller than the full space unless
you have huge subdomains, which cause their own set of problems including
limited suitability of direct solvers).


> I think it is a problem. All MATIS code is inherently written for
> uniprocessor subdomains and one subdomain per process. Maybe it would be
> better to think about a brand new class of matrices (your PA, I think) for
> which to construct the new class and subclasses of preconditioners. Maybe
> such a new class can be implemented with SF instead of VecScatter.
>

That sounds fine to me, hopefully we can unify later.
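As a starting point, a hedged sketch of the communication pattern such a
class could use with PetscSF (written against the current PetscSF API, which
takes an MPI_Op; roots hold the globally owned dofs, leaves hold each
subdomain's local copies so a subdomain may span several processes, and all
names are illustrative):

#include <petscsf.h>

static PetscErrorCode GlobalToLocalSketch(PetscSF sf, const PetscScalar *garray, PetscScalar *larray)
{
  PetscFunctionBeginUser;
  /* broadcast global (root) values to the local (leaf) copies */
  PetscCall(PetscSFBcastBegin(sf, MPIU_SCALAR, garray, larray, MPI_REPLACE));
  PetscCall(PetscSFBcastEnd(sf, MPIU_SCALAR, garray, larray, MPI_REPLACE));
  PetscFunctionReturn(PETSC_SUCCESS);
}

static PetscErrorCode LocalToGlobalAddSketch(PetscSF sf, const PetscScalar *larray, PetscScalar *garray)
{
  PetscFunctionBeginUser;
  /* sum the local (leaf) contributions back into the global (root) values */
  PetscCall(PetscSFReduceBegin(sf, MPIU_SCALAR, larray, garray, MPIU_SUM));
  PetscCall(PetscSFReduceEnd(sf, MPIU_SCALAR, larray, garray, MPIU_SUM));
  PetscFunctionReturn(PETSC_SUCCESS);
}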


>>> PCISCreateRestrictionAndProlongation_BDDC: (in case of exact solvers for
>>> the Dirichlet problems) the default P will be of size
>>> n_coarse_dofs x \sum_{i=1}^N pcis->n_B, with the local matrices of P being
>>> the actual pcbddc->coarse_phi_B
>>>
>>> PCISCreateRestrictionAndProlongation_BDDC: (in case of inexact solvers
>>> for the Dirichlet problems) the default P will be of size
>>> n_coarse_dofs x \sum_{i=1}^N pcis->n, with the local matrices of P being
>>> the actual pcbddc->coarse_phi_B concatenated with pcbddc->coarse_phi_D
>>> (pcis->n = pcis->n_B + pcis->n_D)
>>>
>>
>> This sounds okay. The reduced space iteration with exact Dirichlet
>> solvers bothers me somewhat. We should be able to implement it by running
>> PCFieldSplit to restrict the inner iteration to the interface problem, but
>> with our current data structures, we may have thrown away the interior
>> information that we need.
>>
>
> I'm not getting the point here. Can you clarify?
>

So when exact subdomain solves are used, people ask for the interior dofs
to be eliminated and the Krylov space is constructed in the interface
space. This means that each application of the operator involves solving a
subdomain Dirichlet problem. This is similar to standard FETI-DP, where the
iteration takes place in the space of Lagrange multipliers and applying the
operator involves solving the local pinned Neumann problems and the coarse
grid correction. Working in the reduced space requires exact solves, but
reduces the size of the vectors in the Krylov space (just the length of the
vectors; it's irrelevant for the convergence rate).
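For what it's worth, a hedged sketch of how that interface-reduced iteration
could be set up with PCFieldSplit (assuming index sets is_interior and
is_interface describing the splitting are available; the names are
illustrative, not PCBDDC internals):

#include <petscksp.h>

static PetscErrorCode ReducedSpaceSketch(KSP ksp, IS is_interior, IS is_interface)
{
  PC pc;

  PetscFunctionBeginUser;
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCFIELDSPLIT));
  PetscCall(PCFieldSplitSetIS(pc, "interior", is_interior));
  PetscCall(PCFieldSplitSetIS(pc, "interface", is_interface));
  /* The Schur factorization eliminates the interior dofs exactly, so the
     outer Krylov vectors effectively live in the interface space; each
     application of the reduced operator triggers interior Dirichlet solves */
  PetscCall(PCFieldSplitSetType(pc, PC_COMPOSITE_SCHUR));
  PetscCall(PCFieldSplitSetSchurFactType(pc, PC_FIELDSPLIT_SCHUR_FACT_FULL));
  PetscCall(KSPSetFromOptions(ksp));
  PetscFunctionReturn(PETSC_SUCCESS);
}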