memory GMRES preconditioner

jordi poblet jordi.poblet at gmail.com
Thu Apr 12 11:51:22 CDT 2007


Dear all,

I think that the ILU(p) preconditioner used with GMRES in my program is
consuming too much memory. I would just like to check whether this is normal
and compare the experience with other PETSc users.
For a fairly small sparse matrix (dimension = 3500, about 2.5 MB), the memory
consumed while the preconditioner is created is roughly:

ILU level | approximate increase of memory used (MB)
----------+------------------------------------------
     4    |   9
     8    |  26
    12    |  43
    16    |  53
    20    |  68
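
For reference, one way to measure the increase due to the factorization
itself (a sketch, assuming PetscMemoryGetCurrentUsage() is available in this
PETSc version; MyPc is the PC object from the listing below):

PetscLogDouble mem_before, mem_after;
ierr = PetscMemoryGetCurrentUsage(&mem_before);CHKERRQ(ierr);
ierr = PCSetUp(MyPc);CHKERRQ(ierr);  /* triggers the ILU(p) factorization */
ierr = PetscMemoryGetCurrentUsage(&mem_after);CHKERRQ(ierr);
ierr = PetscPrintf(PETSC_COMM_WORLD,"factorization used about %g bytes\n",
                   mem_after - mem_before);CHKERRQ(ierr);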

Moreover, fast convergence appears suddenly: for ILU level 25, convergence is
not reached after 3000 iterations, whereas for ILU level 30 it is reached in
7 iterations.
I suppose that part of the problem is caused by the large condition number of
the matrix, but even so the memory used by the preconditioner seems too large.
I have tried to improve the behaviour using PCFactorSetFill,
PCFactorSetAllowDiagonalFill and PCFactorSetUseDropTolerance, but I have not
succeeded.

I would also like to know how to estimate a priori the memory required for a
PETSc sparse matrix when the nonzero entries are known. Is the following rule
a good approximation (taking the PETSc implementation into account): one
double/complex<double> and two integers per nonzero coefficient? Is the same
rule valid for the preconditioner?
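
For reference, the usual CSR accounting for a sequential AIJ matrix is one
scalar plus one column-index integer per nonzero, plus one row-offset integer
per row; a sketch of the estimate (hypothetical helper; allocator overhead
and the extra fill of the factored matrix are ignored):

#include <petsc.h>

/* rough a priori size of a sequential AIJ (CSR) matrix in bytes */
size_t aij_bytes_estimate(PetscInt nrows, PetscInt nnz)
{
  return (size_t)nnz*(sizeof(PetscScalar)+sizeof(PetscInt)) /* values + column indices */
       + (size_t)(nrows+1)*sizeof(PetscInt);                /* row offsets */
}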

I will give some details on the particular situation being solved:
------------------------------------------------------------------------------------------------------------

Type of matrix: FEM discretization of a structural dynamics problem

System matrix =

    [ K + w*w*M    L^T ]
    [     L         0  ]


K == stiffness matrix (square, banded)
M == consistent mass matrix (square, banded, same nonzero pattern as K)
w == angular frequency (pulsation) of the problem (scalar)
L == Lagrange multiplier matrix (rectangular, sparse)

The matrix is in general ill-conditioned and not positive definite.

------------------------------------------------------------------------------------------------------------

 And my use of PETSc functions is as follows:

ierr = KSPCreate(PETSC_COMM_WORLD,&MyKsp);CHKERRQ(ierr);
ierr = KSPSetType(MyKsp,KSPGMRES);CHKERRQ(ierr);
ierr = KSPSetOperators(MyKsp,MyMat,MyMat,DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);

// use ILU(p) as the preconditioner
ierr = KSPGetPC(MyKsp,&MyPc);CHKERRQ(ierr);
ierr = PCSetType(MyPc,PCILU);CHKERRQ(ierr);
ierr = PCFactorSetLevels(MyPc,LevelNumber);CHKERRQ(ierr);

// shift zero pivots; tried with and without this call
PetscReal realshift = 1.0;
ierr = PCFactorSetShiftNonzero(MyPc,realshift);CHKERRQ(ierr);

ierr = KSPSetTolerances(MyKsp,tol,PETSC_DEFAULT,PETSC_DEFAULT,itmax);CHKERRQ(ierr);
ierr = KSPSetInitialGuessNonzero(MyKsp,PETSC_TRUE);CHKERRQ(ierr);

// tried with and without restarting
ierr = KSPGMRESSetRestart(MyKsp,max_steps_restart);CHKERRQ(ierr);

ierr = KSPSetFromOptions(MyKsp);CHKERRQ(ierr);
ierr = KSPSolve(MyKsp,MyVector,x);CHKERRQ(ierr);
-----------------------------------------------------------------------------------------------------------
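
Since KSPSetFromOptions() is called, the same parameters can also be varied
from the command line without recompiling; a sketch (./myprog is a
placeholder for the executable; option names assumed to match the
PCFactor*/KSP calls above):

./myprog -ksp_type gmres -pc_type ilu -pc_factor_levels 12 -pc_factor_fill 5.0 -ksp_monitor -log_summary

-log_summary also reports memory usage at the end of the run.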


Thank you very much in advance,


Jordi Poblet Puig

