[petsc-users] PetscInt overflow

Jose E. Roman jroman at dsic.upv.es
Wed Oct 24 09:13:23 CDT 2018


The program seems correct, although in your case you don't need any KSP or preconditioner, so you could remove all code related to PC, KSP and ST - anyway, this should not affect memory consumption.

Add the option -eps_view_pre and send the output.
Jose
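
For reference, a minimal sketch (not part of the original message) of how such command-line options reach the solver from a slepc4py script, assuming standard petsc4py/slepc4py behavior: passing sys.argv to slepc4py.init() makes options such as -eps_view_pre and -bv_type vecs visible to EPS.setFromOptions(); alternatively they can be inserted into the options database programmatically. The option names are the ones discussed in this thread; the rest is generic slepc4py usage.

    import sys
    import slepc4py
    slepc4py.init(sys.argv)        # forward command-line options to SLEPc/PETSc
    from petsc4py import PETSc
    from slepc4py import SLEPc

    # Programmatic alternative to passing the options on the command line
    opts = PETSc.Options()
    opts["eps_view_pre"] = None    # print the solver settings before solving
    opts["bv_type"] = "vecs"       # store the basis as one Vec per column

    E = SLEPc.EPS().create()
    E.setFromOptions()             # picks up the options set above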


> On 24 Oct 2018, at 16:03, Jan Grießer <griesser.jan at googlemail.com> wrote:
> 
> This is the error message I get from my program:
> Shape of the dynamical matrix:  (1500000, 1500000)
> 
> ===================================================================================
> =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
> =   PID 122676 RUNNING AT n3512.nemo.privat
> =   EXIT CODE: 9
> =   CLEANING UP REMAINING PROCESSES
> =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
> ===================================================================================
>    Intel(R) MPI Library troubleshooting guide:
>       https://software.intel.com/node/561764
> ===================================================================================
> 
> 
> On Wed, 24 Oct 2018 at 16:01, Matthew Knepley <knepley at gmail.com> wrote:
> On Wed, Oct 24, 2018 at 9:38 AM Jan Grießer <griesser.jan at googlemail.com> wrote:
> Hey,
> I tried to run my program as you said with -bv_type vecs and/or -bv_type mat, but instead of the PetscInt overflow I now get an MPI error 9
> 
> Send the actual error.
> 
>   Thanks,
> 
>     Matt
>  
> message, which I assume (after googling around a bit) points to a memory problem. I also tried to run it on slightly bigger compute nodes of our cluster, with 20 cores and 12 GB or 24 GB each, but the problem still remains. The actual limit appears to be a 1.5 million x 1.5 million matrix, for which I searched for NEV = 1500 and MPD = 500/200 eigenvalues.
> Do you maybe have an idea what the error could be? I have also appended the Python method I used to solve the problem. I also tried to solve the problem with spectrum slicing, but the error message remains the same.
> 
> def solve_eigensystem(DynMatrix_nn, NEV, MPD, Dimension):
> 	# Create the solver 
> 	# E is the EPS object (EPS = Eigenvalue Problem Solver)
> 	E = SLEPc.EPS().create()
> 
> 	# Set the preconditioner 
> 	pc = PETSc.PC().create()
> 	pc.setType(pc.Type.BJACOBI)
> 
> 	# Set the linear solver 
> 	# Create the KSP object
> 	ksp = PETSc.KSP().create()
> 	# Create the solver, in this case GMRES
> 	ksp.setType(ksp.Type.GMRES)
> 	# The GMRES tolerances are left at their defaults here
> 	# Link the KSP to the preconditioner
> 	ksp.setPC(pc)
> 
> 	# Set up the spectral transformations 
> 	st = SLEPc.ST().create()
> 	st.setType("shift")
> 	st.setKSP(ksp)
> 	
> 	# MPD stands for "maximum projected dimension". It has to do with computational cost; see Chap. 2.6.5 of the SLEPc documentation for
> 	# an explanation. At the moment mpd is only a guess.
> 	E.setDimensions(nev=NEV, mpd=MPD)
> 	# Eigenvalues should be real, therefore we order them starting from the smallest real value |l.real|
> 	E.setWhichEigenpairs(E.Which.SMALLEST_REAL)
> 	# Since the dynamical matrix is real and symmetric, it is Hermitian
> 	E.setProblemType(SLEPc.EPS.ProblemType.HEP)
> 	# Use the Krylov-Schur method to solve the eigenvalue problem
> 	E.setType(E.Type.KRYLOVSCHUR)
> 	# Use a convergence criterion relative to the eigenvalue, and set the tolerance and maximum number of iterations
> 	E.setConvergenceTest(E.Conv.REL)
> 	E.setTolerances(tol=1e-8, max_it=5000)
> 	# Set the operator matrix of the eigenvalue problem
> 	E.setOperators(DynMatrix_nn, None)
> 	# Set EPS options from the options database. This routine must be called before setUp() if the user is to be allowed to set the solver type.
> 	E.setFromOptions()
> 	# Sets up all the internal data structures necessary for the execution of the eigensolver.
> 	E.setUp()
> 
> 	# Solve the eigenvalue problem
> 	E.solve()
> 
> 	Print = PETSc.Sys.Print
> 
> 	Print()
> 	Print("****************************")
> 	Print("***SLEPc Solution Results***")
> 	Print("****************************")
> 
> 	its = E.getIterationNumber()
> 	Print("Number of iterations of the method: ", its)
> 	eps_type = E.getType()
> 	Print("Solution method: ", eps_type)
> 	nev, ncv, mpd = E.getDimensions()
> 	Print("Number of requested eigenvalues: ", nev)
> 	Print("Number of column vectors (ncv): ", ncv)
> 	tol, maxit = E.getTolerances()
> 	Print("Stopping condition: (tol, maxit)", (tol, maxit))
> 	# Get the type of convergence
> 	conv_test = E.getConvergenceTest()
> 	Print("Selected convergence test: ", conv_test)
> 	# Get the used spectral transformation
> 	get_st = E.getST()
> 	Print("Selected spectral transformation: ", get_st)
> 	# Get the direct (dense) solver used for the projected problem
> 	ds = E.getDS()
> 	Print("Selected direct solver: ", ds)
> 	nconv = E.getConverged()
> 	Print("Number of converged eigenpairs: ", nconv)
> .....
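> 	# (Not part of the original code: a minimal sketch of how the converged
> 	# eigenpairs could be retrieved after E.solve(), using standard slepc4py calls.)
> 	for i in range(nconv):
> 		k = E.getEigenvalue(i)       # i-th converged eigenvalue
> 		err = E.computeError(i)      # associated error estimate
> 		Print("Eigenvalue %d: %.6e   (error %.2e)" % (i, k.real, err))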
> 
> 
> 
> On Fri, 19 Oct 2018 at 21:00, Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
> 
> 
> > On Oct 19, 2018, at 7:56 AM, Zhang, Junchao <jczhang at mcs.anl.gov> wrote:
> > 
> > 
> > On Fri, Oct 19, 2018 at 4:02 AM Jan Grießer <griesser.jan at googlemail.com> wrote:
> > With more than 1 MPI process you mean I should use spectrum slicing to divide the full problem into smaller subproblems?
> > The --with-64-bit-indices option is not a possibility for me, since I configured PETSc with MUMPS, which does not allow using the 64-bit version (at least this was the error message when I tried to configure PETSc).
> >  
> > MUMPS 5.1.2 manual chapter 2.4.2 says it supports "Selective 64-bit integer feature" and "full 64-bit integer version" as well. 
> 
>     They used to achieve this by compiling with special Fortran flags to promote integers to 64 bit; this is too fragile for our taste, so we never hooked PETSc up with it. If they have a dependable way of using 64-bit integers, we should add that to our mumps.py and test it.
> 
>    Barry
> 
> > 
> > On Wed, 17 Oct 2018 at 18:24, Jose E. Roman <jroman at dsic.upv.es> wrote:
> > To use BVVECS just add the command-line option -bv_type vecs
> > This causes SLEPc to use a separate Vec for each column, instead of a single long Vec of size n*m, but it is considerably slower than the default.
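> > (Not part of the original message: a minimal sketch of the programmatic equivalent, assuming the standard slepc4py BV interface. The basis-vectors object attached to the EPS can be switched to VECS storage before setUp():)
> > 
> >     from slepc4py import SLEPc
> >     E = SLEPc.EPS().create()
> >     bv = E.getBV()                   # basis-vectors (BV) object used by the solver
> >     bv.setType(SLEPc.BV.Type.VECS)   # one Vec per column, so no single Vec of length n*m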
> > 
> > Anyway, for such large problems you should consider using more than 1 MPI process. In that case the error may disappear because the local size is smaller than 768000.
> > 
> > Jose
> > 
> > 
> > > On 17 Oct 2018, at 17:58, Matthew Knepley <knepley at gmail.com> wrote:
> > > 
> > > On Wed, Oct 17, 2018 at 11:54 AM Jan Grießer <griesser.jan at googlemail.com> wrote:
> > > Hi all,
> > > I am using slepc4py and petsc4py to solve for the smallest real eigenvalues and eigenvectors. For my test cases with a matrix A of size 30k x 30k, solving for the smallest solutions works quite well, but when I increase the dimension of my system to around A = 768000 x 768000 or 3 million x 3 million and ask for the 3000 smallest real eigenvalues and eigenvectors (the number increases with increasing system size), I get the following output (for the 768000 case):
> > >  The product 4001 times 768000 overflows the size of PetscInt; consider reducing the number of columns, or use BVVECS instead
> > > I understand that the requested number of eigenvectors and eigenvalues is causing an overflow, but I do not understand the solution proposed in the error message. Can someone tell me what exactly BVVECS is and how I can use it? Or is there any other solution to my problem?
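> > > (For reference, a quick check of the numbers in the error message, assuming a default 32-bit PetscInt whose maximum value is 2^31 - 1 = 2147483647:)
> > > 
> > >     ncols = 4001                      # basis columns reported in the error message
> > >     n = 768000                        # problem size
> > >     print(ncols * n)                  # 3072768000
> > >     print(ncols * n > 2**31 - 1)      # True: the product does not fit in a 32-bit PetscInt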
> > > 
> > > You can also reconfigure with 64-bit integers: --with-64-bit-indices
> > > 
> > >   Thanks,
> > > 
> > >     Matt
> > >  
> > > Thank you very much in advance,
> > > Jan 
> > > 
> > > 
> > > 
> > > -- 
> > > What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> > > -- Norbert Wiener
> > > 
> > > https://www.cse.buffalo.edu/~knepley/
> > 
> 
> 
> 
> -- 
> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
> -- Norbert Wiener
> 
> https://www.cse.buffalo.edu/~knepley/


