Manually specify a diagonal matrix as a preconditioner?
Matthew Knepley
knepley at gmail.com
Mon Mar 31 16:26:37 CDT 2008
You do not need another matrix to do lumping. Just give the mass matrix in
both arguments, and then use
-pc_type jacobi -pc_jacobi_rowsum
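
In code, this amounts to roughly the following sketch (ksp, M, b, x, and ierr are assumed to already exist; the four-argument KSPSetOperators is the interface of the PETSc release current in this thread, and the row-sum behavior is turned on by the option above):

  PC pc;

  /* Give the consistent mass matrix M as both the operator and the
     preconditioning matrix; the Jacobi PC then builds the lumped
     (row-sum) diagonal itself. */
  ierr = KSPSetOperators(ksp, M, M, SAME_NONZERO_PATTERN);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCJACOBI);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr); /* picks up -pc_jacobi_rowsum from the command line */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);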
Matt
On Mon, Mar 31, 2008 at 4:04 PM, Shi Jin <jinzishuai at yahoo.com> wrote:
>
> Hi there,
>
> I am trying to solve a mass-matrix linear system with KSPSolve. Right now, I
> am passing the mass matrix itself (let's call it M) to KSPSetOperators() as
> the Pmat argument. In order to speed up the convergence, I have constructed
> the lumped mass matrix (named lumpedM). For linear finite elements, this is
> simply a diagonal matrix whose entries are the row sums of M. It is common
> practice to replace M with lumpedM to obtain faster convergence without
> losing the order of accuracy.
>
> What I want to do is to still solve with M itself but use lumpedM to
> precondition it. This way the number of iterations would hopefully be
> greatly reduced. In PETSc code, I tried
>
> ierr = KSPSetOperators( solMP, M, lumpedM, SAME_PRECONDITIONER);
>
> However, instead of giving faster convergence, it actually takes more
> iterations to converge than the regular approach. Therefore, I wonder if
> setting lumpedM as Pmat is the correct way to do it. Could you please
> advise? I think right now lumpedM is taken as the input from which the
> preconditioner is computed, using whatever method is specified by -pc_type.
> What I really want to do is simply use lumpedM as the preconditioning
> matrix, without spending time computing anything.
> Thank you very much.
>
> Shi
>
> --
> Shi Jin, PhD
>
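
For reference, the lumped matrix described in the quoted message (a diagonal matrix holding the row sums of M) could be assembled roughly as in the sketch below. It is not needed when the row-sum Jacobi option is used; M is the assembled mass matrix, and all other names are illustrative.

  Vec         ones, rowsum;
  Mat         lumpedM;
  PetscScalar *s;
  PetscInt    i, m, n, rstart, rend;

  /* rowsum_i = sum_j M_ij, computed by multiplying M with a vector of ones */
  ierr = MatGetVecs(M, &ones, &rowsum);CHKERRQ(ierr);
  ierr = VecSet(ones, 1.0);CHKERRQ(ierr);
  ierr = MatMult(M, ones, rowsum);CHKERRQ(ierr);

  /* diagonal AIJ matrix with the same layout as M, one nonzero per row */
  ierr = MatGetLocalSize(M, &m, &n);CHKERRQ(ierr);
  ierr = MatCreate(PETSC_COMM_WORLD, &lumpedM);CHKERRQ(ierr);
  ierr = MatSetSizes(lumpedM, m, n, PETSC_DETERMINE, PETSC_DETERMINE);CHKERRQ(ierr);
  ierr = MatSetType(lumpedM, MATAIJ);CHKERRQ(ierr);
  ierr = MatSeqAIJSetPreallocation(lumpedM, 1, PETSC_NULL);CHKERRQ(ierr);
  ierr = MatMPIAIJSetPreallocation(lumpedM, 1, PETSC_NULL, 0, PETSC_NULL);CHKERRQ(ierr);

  /* put the row sums on the diagonal */
  ierr = MatGetOwnershipRange(lumpedM, &rstart, &rend);CHKERRQ(ierr);
  ierr = VecGetArray(rowsum, &s);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ierr = MatSetValue(lumpedM, i, i, s[i-rstart], INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = VecRestoreArray(rowsum, &s);CHKERRQ(ierr);
  ierr = MatAssemblyBegin(lumpedM, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(lumpedM, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);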
--
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which
their experiments lead.
-- Norbert Wiener