[petsc-users] Direct Schur complement domain decomposition
Stefan Kurzbach
stefan.kurzbach at tu-harburg.de
Tue Jan 1 06:35:47 CST 2013
Dear Jed,
Thanks. Non-iterative (direct) substructuring matches my understanding.
You also said this is the same as multifrontal factorization. Could you
point me to a source where I can see the parallels? Maybe this is
obvious to people who have grown up solving sparse systems, but not
to me :)
I will have to spend some more time looking into the other hints
you gave.
Best regards
Stefan
On 29.12.2012 19:59, Jed Brown wrote:
> Sorry for the slow reply. What you are describing _is_ multifrontal
> factorization, or alternatively, (non-iterative) substructuring. It is
> a direct solve and boils down to a few large dense direct solves.
> Incomplete factorization is one way of preventing the Schur
> complements from getting too dense, but it's not very reliable.
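
A minimal sketch of the identity behind this remark, assuming a single
splitting into interior unknowns (I) and interface unknowns (Gamma):

\[
\begin{pmatrix} A_{II} & A_{I\Gamma} \\ A_{\Gamma I} & A_{\Gamma\Gamma} \end{pmatrix}
=
\begin{pmatrix} I & 0 \\ A_{\Gamma I} A_{II}^{-1} & I \end{pmatrix}
\begin{pmatrix} A_{II} & 0 \\ 0 & S \end{pmatrix}
\begin{pmatrix} I & A_{II}^{-1} A_{I\Gamma} \\ 0 & I \end{pmatrix},
\qquad
S = A_{\Gamma\Gamma} - A_{\Gamma I} A_{II}^{-1} A_{I\Gamma}.
\]

Substructuring eliminates the subdomain interiors A_II first and then
solves the dense interface system with S; a multifrontal factorization
performs essentially the same eliminations, organized recursively along
an elimination (nested-dissection) tree, where S shows up as the dense
frontal matrix of the separator.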
>
> There are many other ways of retaining structure in the supernodes
> (i.e., avoid unstructured dense matrices), at the expense of some
> error. These methods "compress" the Schur complement using low-rank
> representations for long-range interaction. These are typically
> combined with an iterative method.
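
A rough illustration of the low-rank idea (a numpy sketch under
simplified assumptions: a smooth kernel between two well-separated
point clusters standing in for a far-field block of a Schur complement):

import numpy as np

# Illustrative only: an interaction block between two well-separated
# clusters is numerically low rank and compressible by a truncated SVD.
n = 200
x = np.linspace(0.0, 1.0, n)                # cluster 1
y = np.linspace(2.0, 3.0, n)                # cluster 2, well separated
B = 1.0 / np.abs(x[:, None] - y[None, :])   # smooth long-range kernel

U, s, Vt = np.linalg.svd(B, full_matrices=False)
k = int(np.sum(s > 1e-8 * s[0]))            # numerical rank at tol 1e-8
B_k = (U[:, :k] * s[:k]) @ Vt[:k, :]        # rank-k approximation

rel_err = np.linalg.norm(B - B_k) / np.linalg.norm(B)
print(f"numerical rank {k} of {n}, relative error {rel_err:.1e}")

Hierarchical formats (H-matrices, HSS, BLR, etc.) apply this idea
blockwise so the compressed Schur complement can still be factored or
applied cheaply.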
>
> Multigrid and multilevel DD methods can be thought of as an alternate
> way to compress (approximately) the long-range interaction coming from
> inexact elimination (dimensional reduction of interfaces).
>
> On Wed, Dec 19, 2012 at 10:25 AM, Stefan Kurzbach
> <stefan.kurzbach at tuhh.de> wrote:
>
> Hello everybody,
>
> in my recent research on parallelizing a 2D unstructured flow model
> code, I came upon a question about domain decomposition techniques
> on “grids”. Does anyone know of any previous results on this?
>
> Typically, when doing large simulations with many unknowns, the
> problem is distributed to many computer nodes and solved in
> parallel by some iterative method. Many of these iterative methods
> boil down to a large number of distributed matrix-vector
> multiplications (on the order of the number of iterations). This
> means there are many synchronization points in the algorithms,
> which makes them tightly coupled. This has been found to work well
> on clusters with fast networks.
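
For illustration, a minimal serial CG sketch with comments marking
where a distributed version would have to synchronize (generic
operations, not PETSc calls):

import numpy as np

# Per iteration: one matrix-vector product (neighbor/halo exchange)
# and two dot products (global reductions), so network latency is paid
# two to three times per iteration.
def cg(A, b, tol=1e-8, maxit=500):
    x = np.zeros_like(b)
    r = b - A @ x                 # matvec: halo exchange with neighbors
    p = r.copy()
    rr = r @ r                    # dot product: global reduction
    for _ in range(maxit):
        Ap = A @ p                # matvec: halo exchange with neighbors
        alpha = rr / (p @ Ap)     # dot product: global reduction
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r            # dot product: global reduction
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

A = np.diag(np.linspace(1.0, 100.0, 100))   # toy SPD test matrix
b = np.ones(100)
x = cg(A, b)
assert np.allclose(A @ x, b)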
>
> Now my question:
>
> What if there is a small number of very powerful nodes (say fewer
> than 10) connected by a slow network, e.g. several computer
> clusters connected over the internet (some people call this
> “grid computing”)? I expect that the traditional iterative
> methods will not be as efficient here (any references?).
>
> My guess is that a solution method with fewer synchronization
> points will work better, even though that method may be
> computationally more expensive than traditional methods. An
> example would be a domain decomposition approach with direct
> solution of the Schur complement on the interface. This requires
> that the interface be small compared to the subdomain size. As
> this algorithm basically works in three decoupled phases (solve
> the subdomains for several right-hand sides, assemble and solve
> the Schur complement system, correct the subdomain results), it
> should be well suited, but I have no idea how to test or
> otherwise prove it. Has anybody thought about this before,
> possibly back in the 1980s and 1990s, when slow networks were
> more common?
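
To make the three phases concrete, a small dense sketch with two
subdomains and a hypothetical block-arrow matrix (numpy, illustrative
only; a real implementation would use sparse factorizations per
subdomain rather than dense solves):

import numpy as np

# Hypothetical block-arrow system with two subdomains (A1, A2) coupled
# only through a small interface block D (interface unknowns last):
#   [A1  0  B1] [x1]   [b1]
#   [ 0 A2  B2] [x2] = [b2]
#   [C1 C2  D ] [xg]   [bg]
rng = np.random.default_rng(0)
n1, n2, ng = 6, 7, 3
N = n1 + n2 + ng
M = rng.standard_normal((N, N))
K = (M + M.T) / 2 + N * np.eye(N)   # symmetric; the large diagonal shift keeps it SPD
K[:n1, n1:n1 + n2] = 0.0            # subdomains 1 and 2 do not couple
K[n1:n1 + n2, :n1] = 0.0            # with each other directly
A1, B1 = K[:n1, :n1], K[:n1, -ng:]
A2, B2 = K[n1:n1 + n2, n1:n1 + n2], K[n1:n1 + n2, -ng:]
C1, C2, D = K[-ng:, :n1], K[-ng:, n1:n1 + n2], K[-ng:, -ng:]
b = rng.standard_normal(N)
b1, b2, bg = b[:n1], b[n1:n1 + n2], b[-ng:]

# Phase 1: each subdomain solves independently for ng+1 right-hand
# sides (its interface columns plus its own load) -- no communication.
Y1 = np.linalg.solve(A1, np.column_stack([B1, b1]))
Y2 = np.linalg.solve(A2, np.column_stack([B2, b2]))

# Phase 2: exchange the small ng-by-ng contributions once, assemble the
# Schur complement S and condensed right-hand side, solve directly.
S = D - C1 @ Y1[:, :ng] - C2 @ Y2[:, :ng]
g = bg - C1 @ Y1[:, ng] - C2 @ Y2[:, ng]
xg = np.linalg.solve(S, g)

# Phase 3: broadcast xg and back-substitute independently per subdomain.
x1 = Y1[:, ng] - Y1[:, :ng] @ xg
x2 = Y2[:, ng] - Y2[:, :ng] @ xg
assert np.allclose(K @ np.concatenate([x1, x2, xg]), b)

The only tightly coupled step is the small interface solve in phase 2;
phases 1 and 3 run without communication, which is the property that
should matter on a high-latency network.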
>
> Best regards
>
> Stefan
>
>