[petsc-dev] Deflated Krylov solvers for PETSc
Barry Smith
bsmith at mcs.anl.gov
Sat Mar 2 11:46:20 CST 2013
On Mar 1, 2013, at 11:39 PM, Jie Chen <jiechen at mcs.anl.gov> wrote:
> Sometimes the convergence rate computed from the ratio of lambda_max to lambda_min is rather pessimistic. The distribution of the eigenvalues plays a critical role in convergence. The common argument for the effectiveness of eigenvector deflation is that "a few extreme eigenvalues (usually the smallest ones in magnitude) hamper the convergence, so deflating them is helpful." That does not sound clear enough to me either. But I have had a case where, after a preconditioner was applied, the spectrum of the matrix was clustered except for a few extreme eigenvalues. Deflating them significantly improved the convergence of CG. So I think the real magic is not about "50 out of 1 billion" but rather about how the spectrum of the matrix changes when you combine deflation with preconditioning.
Yes, but you haven't yet explained how preconditioning is combined with deflation. How is that done?
Barry
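The scenario Jie describes (a preconditioned spectrum clustered except for a few outliers, with CG sped up by deflating those outliers) can be illustrated with a minimal NumPy sketch. This is not PETSc code, and everything in it is an illustrative assumption: a synthetic SPD matrix with a prescribed spectrum stands in for the preconditioned operator, and the deflation uses the exact outlier eigenvectors, an idealization one would approximate in practice (e.g. with Ritz vectors).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Clustered spectrum in [1, 2] plus 5 small outliers: a toy stand-in for
# "preconditioned matrix with a few extreme eigenvalues" (all assumed values).
eigs = np.concatenate([rng.uniform(1.0, 2.0, n - 5),
                       np.array([1e-4, 2e-4, 5e-4, 1e-3, 2e-3])])
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (Q * eigs) @ Q.T               # SPD matrix with the prescribed spectrum
b = rng.standard_normal(n)

def cg(A, b, tol=1e-8, maxit=2000):
    """Plain conjugate gradients; returns (solution, iteration count)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rr = r @ r
    for k in range(maxit):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol * np.linalg.norm(b):
            return x, k + 1
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, maxit

# Plain CG sees the full spectrum, whose condition number is ~2e4.
_, it_plain = cg(A, b)

# Deflate the 5 outlier eigenvectors: solve exactly in their span, then run
# CG on the complementary component, whose spectrum lies in [1, 2].
V = Q[:, -5:]                        # exact outlier eigenvectors (idealized)
x_v = V @ ((V.T @ b) / eigs[-5:])    # direct solve in the deflation space
x_c, it_defl = cg(A, b - V @ (V.T @ b))
x = x_v + x_c                        # full solution; deflated CG needs far
print(it_plain, it_defl)             # fewer iterations than plain CG
```

Because the complement of the deflation space is invariant under A, the second CG run never sees the outlier eigenvalues; its effective condition number is about 2, which is the spectral change Jie attributes the speedup to.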
>
> Jie
>
> ----- Original Message -----
> From: "Barry Smith" <bsmith at mcs.anl.gov>
> To: "For users of the development version of PETSc" <petsc-dev at mcs.anl.gov>
> Sent: Friday, March 1, 2013 9:52:25 PM
> Subject: Re: [petsc-dev] Deflated Krylov solvers for PETSc
>
>
> This has always been the biggest puzzler for deflation. Say one has a linear system with 1 billion unknowns, a simple elliptic problem, so the eigenvalues are distributed between lambda_min and lambda_max with the ratio lambda_max/lambda_min pretty big. Now deflate out 50 eigenvalues; so what? How can deflating out 50 eigenvalues, even if they are the most extreme, really affect the convergence rate very much? It is 50 out of 1 billion. Seems too magical to be believable?
>
> Barry
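One standard way to see why "50 out of 1 billion" can matter is that the classical CG error bound depends only on the extreme eigenvalues, not on how many eigenvalues there are. A sketch of the usual A-norm bound (standard theory, not from this thread):

```latex
\[
\|x - x_k\|_A \;\le\; 2\left(\frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\right)^{k}\|x - x_0\|_A,
\qquad \kappa = \frac{\lambda_{\max}}{\lambda_{\min}}.
\]
```

If deflation removes the 50 most extreme eigenvalues, then kappa in this bound is replaced by the condition number of the remaining spectrum. For a clustered spectrum with a few outliers, that effective kappa can be orders of magnitude smaller even though 50 is negligible compared to 10^9, which is consistent with the clustering effect Jie describes above.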