[petsc-users] Tripling of allocated memory during KSPSolve call (GMRES preconditioned by ASM)

Smith, Barry F. bsmith at mcs.anl.gov
Mon Feb 3 10:55:36 CST 2020


   GMRES can also, by default, require about 35 work vectors if it reaches the full restart. You can set a smaller restart with, for example, -ksp_gmres_restart 15, but this can also hurt the convergence of GMRES dramatically. People sometimes use the KSPBCGS algorithm instead, since it does not require all the restart vectors, but it can also converge more slowly.
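
   For example, a minimal sketch in Fortran, reusing the Krylov KSP object from the code quoted below (the restart value of 15 is only an illustration):

       call KSPSetType(Krylov,KSPGMRES,ierr)
       call KSPGMRESSetRestart(Krylov,15,ierr)   ! same effect as -ksp_gmres_restart 15
       ! or switch the Krylov method entirely to avoid storing the restart vectors:
       ! call KSPSetType(Krylov,KSPBCGS,ierr)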

    Depending on how much memory the sparse matrices use relative to the vectors, the vector memory may or may not matter.

   If you are using a recent version of PETSc you can run with -log_view -log_view_memory; additional columns on the right will then show how much memory is being allocated by each of the operations, broken down in various ways.
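
   For example (the MPI launcher, process count, and executable name below are only placeholders):

       mpiexec -n 4 ./my_solver -log_view -log_view_memory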

   Barry


> On Feb 3, 2020, at 10:34 AM, Matthew Knepley <knepley at gmail.com> wrote:
> 
> On Mon, Feb 3, 2020 at 10:38 AM Дмитрий Мельничук <dmitry.melnichuk at geosteertech.com> wrote:
> Hello all!
> 
> I am now facing a problem with memory allocation when calling KSPSolve.
> 
> I chose GMRES preconditioned by ASM to solve a linear algebraic system obtained from the finite element spatial discretisation of the Biot poroelasticity model.
> According to PetscMallocGetCurrentUsage, 176 MB is required to store the matrix and the RHS vector (before the KSPSolve call).
> But while the linear system is being solved (during the KSPSolve call), 543 MB of RAM is required.
> Thus, the amount of allocated memory roughly triples after the preconditioning stage. This kind of behaviour is critical for 3D models with several million cells.
> 
> 1) In order to know anything, we have to see the output of -ksp_view, although I see you used an overlap of 2
> 
> 2) The overlap increases the size of the submatrices beyond that of the original matrix. Suppose that you used LU for the sub-preconditioner.
>     Since the matrix dominates memory usage, you would need at least 2x the memory even with ILU(0). Moreover, you have overlap,
>     and you might have fill-in depending on the solver.
> 
> 3) The massif tool from valgrind is a good fine-grained way to look at memory allocation
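> 
> For example, a possible invocation (the launcher and executable name are placeholders; massif writes a massif.out.<pid> file per process, which ms_print can summarize):
> 
>     mpiexec -n 4 valgrind --tool=massif ./my_solver -ksp_view -ksp_monitor
>     ms_print massif.out.<pid>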
>  
>   Thanks,
> 
>      Matt
> 
> Is there a way to decrease the amount of allocated memory?
> Is this the expected behaviour for the GMRES-ASM combination?
> 
> As far as I remember, previous versions of PETSc did not show such a significant memory increase.
> 
> ...
> Vec :: Vec_F, Vec_U
> Mat :: Mat_K
> ...
> ...
> ! assemble the global matrix
> call MatAssemblyBegin(Mat_M,Mat_Final_Assembly,ierr)
> call MatAssemblyEnd(Mat_M,Mat_Final_Assembly,ierr)
> ....
> call VecAssemblyBegin(Vec_F_mod,ierr)
> call VecAssemblyEnd(Vec_F_mod,ierr)
> ...
> ...
> ! report the memory currently allocated through PETSc (before KSPSolve)
> call PetscMallocGetCurrentUsage(mem, ierr)
> print *,"Memory used: ",mem
> ...
> ...
> ! GMRES preconditioned by ASM; remaining settings come from the options database
> call KSPSetType(Krylov,KSPGMRES,ierr)
> call KSPGetPC(Krylov,PreCon,ierr)
> call PCSetType(PreCon,PCASM,ierr)
> call KSPSetFromOptions(Krylov,ierr)
> ...
> call KSPSolve(Krylov,Vec_F,Vec_U,ierr)
> ...
> ...
> options = "-pc_asm_overlap 2 -pc_asm_type basic -ksp_monitor -ksp_converged_reason"
>  
>  
> Kind regards,
> Dmitry Melnichuk
> Matrix.dat (265288024)
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/


