memory GMRES preconditioner
Matthew Knepley
knepley at gmail.com
Thu Apr 12 12:11:56 CDT 2007
On 4/12/07, jordi poblet <jordi.poblet at gmail.com> wrote:
>
>
> Dear all,
>
> I think that the GMRES ILU(p) preconditioner in my program is consuming too
> much memory. I just would like to check if it is correct or not and compare
> this experience with other PETSc users.
> For a quite small sparse matrix (dimension = 3500 and 2.5 Mb) the memory
> consumed while the preconditioner is created is more or less:
This memory usage does not really sound outrageous to me. The full matrix
would cost 3500*4 + 3500*3500*(8+4) = 147M. Using so many levels has to
be getting close to a full matrix.
In general, high levels of fill (I would say > 2) are counterproductive
with ILU. For such a small problem, consider using a sparse direct solver
like MUMPS. You can get this automatically by configuring with
--download-mumps. Until your problems are at least 100K unknowns, I would
say that this is one of the best options.
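Since the program below already calls KSPSetFromOptions, a direct solver can be tried without code changes by passing run-time options. The exact option names have changed across PETSc releases (older versions used -pc_factor_mat_solver_package); treat the following as a sketch for recent releases, with a hypothetical executable name:

```sh
# after configuring PETSc with --download-mumps:
./myapp -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps
```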
Matt
> ILU level    approximate increase in memory use (MB)
>
>  4            9
>  8           26
> 12           43
> 16           53
> 20           68
>
>
> Moreover a fast convergence is reached suddenly (i.e. for ILU level 25
> convergence is not reached after 3000 iterations but for ILU level 30 the
> convergence is reached in 7 iterations).
> I suppose that some of the problems are caused by the large condition
> number of the matrix but in any case I guess that the memory used by the
> preconditioner is too large.
> I have tried to improve the performance using PCFactorSetFill,
> PCFactorSetAllowDiagonalFill, and PCFactorSetUseDropTolerance, but I have
> not succeeded.
>
>
>
> I would also like to know how to estimate a priori the memory required for a
> PETSc sparse matrix if the nonzero entries are known. Is the following
> rule a good approximation (taking the PETSc implementation into account):
> 1 double/complex<double> and 2 integers per nonzero coefficient? Is the
> same rule valid for the preconditioner?
>
> I will give some details on the particular situation being solved:
> ------------------------------------------------------------------------------------------------------------
>
> Type of matrix: FEM discretization of a structural dynamics problem
>
>     | (K + w*w*M)   L^T |
>     |                   |  = System matrix
>     |  L             0  |
>
>
>
>
> K == stiffness matrix (square, banded)
> M == consistent mass matrix (square, banded, same nonzero pattern as K)
> w == angular frequency of the problem (scalar)
> L == Lagrange multiplier matrix (rectangular, sparse)
>
>
> The matrix is in general ill conditioned and non positive definite.
>
> ------------------------------------------------------------------------------------------------------------
>
>
>
> And my use of PETSc functions is as follows:
>
> ierr = KSPCreate(PETSC_COMM_WORLD,&MyKsp);CHKERRQ(ierr);
> ierr = KSPSetType(MyKsp,KSPGMRES);CHKERRQ(ierr);
> ierr = KSPSetOperators(MyKsp,MyMat,MyMat,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
>
> ierr = KSPGetPC(MyKsp,&MyPc);CHKERRQ(ierr);
> ierr = PCSetType(MyPc,PCILU);CHKERRQ(ierr);
> ierr = PCFactorSetLevels(MyPc,LevelNumber);CHKERRQ(ierr);
>
> PetscReal realshift = 1.0;
> ierr = PCFactorSetShiftNonzero(MyPc,realshift);CHKERRQ(ierr); // with and without this line
>
> ierr = KSPSetTolerances(MyKsp,tol,PETSC_DEFAULT,PETSC_DEFAULT,itmax);CHKERRQ(ierr);
> ierr = KSPSetInitialGuessNonzero(MyKsp,PETSC_TRUE);CHKERRQ(ierr);
> ierr = KSPGMRESSetRestart(MyKsp,max_steps_restart);CHKERRQ(ierr); // with and without this line
>
> ierr = KSPSetFromOptions(MyKsp);CHKERRQ(ierr);
> ierr = KSPSolve(MyKsp,MyVector,x);CHKERRQ(ierr);
> -----------------------------------------------------------------------------------------------------------
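Because the listing calls KSPSetFromOptions, the same solver configuration can also be set at run time, which makes it easy to experiment with levels, restart, and tolerances without recompiling. A sketch with a hypothetical executable name and option names as in recent PETSc releases:

```sh
./myapp -ksp_type gmres -pc_type ilu -pc_factor_levels 4 \
        -ksp_rtol 1e-8 -ksp_max_it 3000 \
        -ksp_initial_guess_nonzero -ksp_gmres_restart 30
```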
>
> Thank you very much in advance,
>
>
> Jordi Poblet Puig
--
The government saving money is like me spilling beer. It happens, but
never on purpose.
More information about the petsc-users mailing list