[petsc-dev] API changes in MatIS

Stefano Zampini stefano.zampini at gmail.com
Tue Jun 5 04:40:23 CDT 2012


Hi Jed, sorry for the late reply; I have been very busy.


2012/5/26 Jed Brown <jedbrown at mcs.anl.gov>

> On Sat, May 26, 2012 at 9:39 AM, Stefano Zampini <
> stefano.zampini at gmail.com> wrote:
>
>> Once we have the constraint matrix, we can easily obtain the change of
>> basis matrix T (as in Klawonn-Widlund papers).
>> Note that the change of basis approach will be very effective for exact
>> applications with reduced iterations. I think we should include in the new
>> matrix class the possibility of doing iterations on the reduced space
>> instead of the whole space of dofs.
>>
>
> Indeed. You might be aware of the "second approach" described here (CPAM
> 2006)
>
> https://ftp.cs.nyu.edu/web/Research/TechReports/TR2004-855/TR2004-855.pdf
>
> I think it's worth testing, but since the number of integral Lagrange
> multipliers is reasonably small, I would not a priori be concerned by
> solving the saddle point problem in which they are ordered last. If there
> are enough vertex constraints to keep the subdomains nonsingular, you can
> easily handle the problem manually (as Dohrmann does), but I wouldn't
> expect modern LDL^T packages to be slowed greatly by a handful of Lagrange
> multipliers and you want it to reorder anyway to reduce fill during the
> factorization.
>

If I understood correctly, the "second approach" is similar to Dohrmann's
except for an additional reordering which should reduce the computational
cost of the factorizations? Such an approach sounds over-complicated to me.
I'm more inclined to solve the local saddle point problems directly (with
or without a change of basis). Are you planning to add interfaces to LDL^T
solvers in PETSc?
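To be concrete, something like the following is what I have in mind for
factoring the local saddle point problem (just a sketch; Ksaddle, nloc and
ncon are placeholder names, and I'm assuming an external package such as
MUMPS is configured):

  /* K = [ A  C^T ; C  0 ], assembled as a sequential AIJ matrix of
     size (nloc+ncon) x (nloc+ncon); Ksaddle is a placeholder name */
  Mat Ksaddle;
  KSP ksp;
  PC  pc;

  KSPCreate(PETSC_COMM_SELF,&ksp);
  KSPSetOperators(ksp,Ksaddle,Ksaddle,SAME_NONZERO_PATTERN);
  KSPSetType(ksp,KSPPREONLY);
  KSPGetPC(ksp,&pc);
  PCSetType(pc,PCLU);                             /* or PCCHOLESKY for an LDL^T-type factorization */
  PCFactorSetMatSolverPackage(pc,MATSOLVERMUMPS); /* assuming MUMPS is available */
  KSPSetUp(ksp);                                  /* symbolic + numeric factorization */
  /* each local Neumann solve is then a single KSPSolve(ksp,rhs,sol) */

The fill-reducing reordering would then be entirely up to the external
package, as you say.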


>
>
>> You said you would have the new matrix class support either more
>>> subdomains per core, or more cores per subdomain. In the latter case,
>>> threaded or mpi matrices (on subcomms)?
>>>
>>
> I would plan to make it support any combination.
>
>
That would be great. I'm wondering how you will handle the communications.
I think there needs to be an intermediate step before multiplying by each
subdomain matrix owned by the process (whether multi-process or
sequential): scatter values from the global vector into a "local" vector
containing all the dofs (in the case of more subdomains per process),
using information generated by analyzing what the user passed in at matrix
creation. Then a number of additional scatters are needed to realize each
single "subdomain" multiplication (see the sketch below). How do you do
this with multi-process subdomains? Does PETSc support scatters between
two different communicators (both with size > 1)? I think that mixing
many-subdomains-per-process with multi-process subdomains will be nasty to
get working, but definitely an interesting challenge.
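Just to fix ideas, this is a sketch of the first scatter I mean, for the
case of many sequential subdomains per process (is_subdofs and nlocal are
placeholder names for the index set and the size built from the
user-provided local-to-global information):

  Vec        gvec, lvec;   /* global vector and "local" work vector      */
  IS         is_subdofs;   /* global indices of all local subdomain dofs */
  PetscInt   nlocal;       /* total number of local subdomain dofs       */
  VecScatter gtol;

  VecCreateSeq(PETSC_COMM_SELF,nlocal,&lvec);
  VecScatterCreate(gvec,is_subdofs,lvec,PETSC_NULL,&gtol);
  VecScatterBegin(gtol,gvec,lvec,INSERT_VALUES,SCATTER_FORWARD);
  VecScatterEnd(gtol,gvec,lvec,INSERT_VALUES,SCATTER_FORWARD);
  /* per-subdomain MatMults on subvectors of lvec, then the reverse
     scatter back to gvec with ADD_VALUES */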


> Tell me if you think it's a bad idea, but my thought was to pre-select a
> few coarse vertices (ideally enough to prevent floating subdomains, but the
> method should be able to tolerate if not). Then the form that gets
> assembled due to normal MatSetValuesLocal() would have nonsingular
> subdomains. We can factor those subdomains and then adaptively select
> additional vertex or integral constraints to enrich the coarse space. As
> soon as we have enough constraints to make the subdomain nonsingular (i.e.
> we would usually start nonsingular), we can enforce all additional
> constraints with Lagrange multipliers so that the factorization does not
> need to be repeated.
>
> Once we have enriched the coarse space, we have a choice of whether to
> apply a change of basis and re-factor or to reuse the current factorization
> and keep enforcing the enrichment by Lagrange multipliers.
>

Do you mean assembling the matrix at the preselected vertices during
MatAssemblyBegin/End? Note that this would imply that standard
Neumann-Neumann methods will not work, since they need the unassembled
local matrix to solve for the local Schur complements (see the sketch
below).
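To be explicit, what I mean is that the present Neumann-Neumann/BDDC setup
relies on getting the pure local (Neumann) matrix back from the MatIS
object, roughly as follows (A and Aneumann are just placeholder names):

  Mat A;         /* the global MATIS matrix                      */
  Mat Aneumann;  /* unassembled local subdomain (Neumann) matrix */

  MatISGetLocalMat(A,&Aneumann);
  /* the local Schur complement solves are built from Aneumann; if the
     dofs at the preselected vertices were already assembled across
     subdomains during MatAssemblyBegin/End, this would no longer be
     the pure Neumann matrix */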



-- 
Stefano

