[petsc-users] PetscInt overflow

Matthew Knepley knepley at gmail.com
Wed Oct 24 11:46:34 CDT 2018


As it says, if you are looking at performance, you should configure using
--with-debugging=0.
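
For example, reusing the configure options shown in the log below, with only
the debugging setting changed (a sketch; the remaining flags are copied from
the configure line in your log):

  ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 \
      --download-mumps --download-scalapack --with-shared-libraries=True \
      --with-debugging=0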

It says SLEPc is using 8GB (the Memory line reports 7.891e+09 bytes total
across the 20 processes). Is this what you see?

  Matt

On Wed, Oct 24, 2018 at 12:30 PM Jan Grießer <griesser.jan at googlemail.com>
wrote:

> I also ran it with -log_summary:
> ---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
>
>
>
>       ##########################################################
>       #                                                        #
>       #                          WARNING!!!                    #
>       #                                                        #
>       #   This code was compiled with a debugging option,      #
>       #   To get timing results run ./configure                #
>       #   using --with-debugging=no, the performance will      #
>       #   be generally two or three times faster.              #
>       #                                                        #
>       ##########################################################
>
>
> /work/ws/nemo/fr_jg1080-FreeSurface_Glass-0/GlassSystems/PeriodicSystems/N500000T0.001/SolveEigenvalueProblem_par/Test/Eigensolver_petsc_slepc_no_argparse.py
> on a arch-linux2-c-debug named int02.nemo.privat with 20 processors, by
> fr_jg1080 Wed Oct 24 18:26:30 2018
> Using Petsc Release Version 3.9.4, Sep, 11, 2018
>
>                          Max       Max/Min        Avg      Total
> Time (sec):           7.474e+02      1.00000   7.474e+02
> Objects:              3.600e+01      1.00000   3.600e+01
> Flop:                 1.090e+11      1.00346   1.089e+11  2.177e+12
> Flop/sec:            1.459e+08      1.00346   1.457e+08  2.913e+09
> Memory:               3.950e+08      1.00296              7.891e+09
> MPI Messages:         3.808e+04      1.00000   3.808e+04  7.615e+05
> MPI Message Lengths:  2.211e+10      1.00217   5.802e+05  4.419e+11
> MPI Reductions:       1.023e+05      1.00000
>
> Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
>                             e.g., VecAXPY() for real vectors of length N --> 2N flop
>                             and VecAXPY() for complex vectors of length N --> 8N flop
>
> Summary of Stages:   ----- Time ------  ----- Flop -----  --- Messages ---  -- Message Lengths --  -- Reductions --
>                         Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
>  0:      Main Stage: 7.4739e+02 100.0%  2.1773e+12 100.0%  7.615e+05 100.0%  5.802e+05      100.0%  1.022e+05 100.0%
>
>
> ------------------------------------------------------------------------------------------------------------------------
> See the 'Profiling' chapter of the users' manual for details on
> interpreting output.
> Phase summary info:
>    Count: number of times phase was executed
>    Time and Flop: Max - maximum over all processors
>                    Ratio - ratio of maximum to minimum over all processors
>    Mess: number of messages sent
>    Avg. len: average message length (bytes)
>    Reduct: number of global reductions
>    Global: entire computation
>    Stage: stages of a computation. Set stages with PetscLogStagePush() and
> PetscLogStagePop().
>       %T - percent time in this phase         %F - percent flop in this phase
>       %M - percent messages in this phase     %L - percent message lengths in this phase
>       %R - percent reductions in this phase
>    Total Mflop/s: 10e-6 * (sum of flop over all processors)/(max time over all processors)
>
> ------------------------------------------------------------------------------------------------------------------------
>
>
>       ##########################################################
>       #                                                        #
>       #                          WARNING!!!                    #
>       #                                                        #
>       #   This code was compiled with a debugging option,      #
>       #   To get timing results run ./configure                #
>       #   using --with-debugging=no, the performance will      #
>       #   be generally two or three times faster.              #
>       #                                                        #
>       ##########################################################
>
>
> Event                Count      Time (sec)     Flop                              --- Global ---  --- Stage ---   Total
>                    Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
>
> ------------------------------------------------------------------------------------------------------------------------
>
> --- Event Stage 0: Main Stage
>
> BuildTwoSidedF         2 1.0 2.6670e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> VecSet                 2 1.0 6.8650e-03 1.8 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> VecScatterBegin     2002 1.0 1.4380e+01 1.0 0.00e+00 0.0 7.6e+05 5.8e+05 0.0e+00  2  0100100  0   2  0100100  0     0
> VecScatterEnd       2002 1.0 3.7604e+01 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  4  0  0  0  0   4  0  0  0  0     0
> VecSetRandom           1 1.0 1.6440e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> MatMult             2002 1.0 6.0846e+02 1.2 1.03e+11 1.0 7.6e+05 5.8e+05 0.0e+00 71 94100100  0  71 94100100  0  3376
> MatAssemblyBegin       3 1.0 2.8129e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  0   0  0  0  0  0     0
> MatAssemblyEnd         3 1.0 8.5094e+00 1.0 0.00e+00 0.0 7.6e+02 1.5e+05 3.6e+01  1  0  0  0  0   1  0  0  0  0     0
> EPSSetUp               1 1.0 1.7351e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 7.6e+01  0  0  0  0  0   0  0  0  0  0     0
> EPSSolve               1 1.0 6.7891e+02 1.0 1.09e+11 1.0 7.6e+05 5.8e+05 1.0e+05 91100100100100  91100100100100  3207
> STSetUp                1 1.0 2.2221e-04 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 6.0e+00  0  0  0  0  0   0  0  0  0  0     0
> STApply             2002 1.0 6.0879e+02 1.2 1.03e+11 1.0 7.6e+05 5.8e+05 0.0e+00 71 94100100  0  71 94100100  0  3374
> BVCopy               999 1.0 2.7157e-01 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> BVMultVec           4004 1.0 2.2918e+00 1.0 2.10e+09 1.0 0.0e+00 0.0e+00 1.6e+04  0  2  0  0 16   0  2  0  0 16 18332
> BVMultInPlace        999 1.0 4.8399e+01 1.0 1.20e+09 1.0 0.0e+00 0.0e+00 0.0e+00  6  1  0  0  0   6  1  0  0  0   495
> BVDotVec            4004 1.0 1.0835e+01 1.0 2.70e+09 1.0 0.0e+00 0.0e+00 2.0e+04  1  2  0  0 20   1  2  0  0 20  4986
> BVOrthogonalizeV    2003 1.0 1.3272e+01 1.0 4.80e+09 1.0 0.0e+00 0.0e+00 5.2e+04  2  4  0  0 51   2  4  0  0 51  7236
> BVScale             2003 1.0 2.3521e-01 1.0 1.50e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0 12773
> BVSetRandom            1 1.0 1.6456e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  0   0  0  0  0  0     0
> DSSolve             1000 1.0 3.3338e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> DSVectors           1000 1.0 6.0029e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
> DSOther             2999 1.0 7.8770e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
>
> ------------------------------------------------------------------------------------------------------------------------
>
> Memory usage is given in bytes:
>
> Object Type          Creations   Destructions     Memory  Descendants' Mem.
> Reports information only for process 0.
>
> --- Event Stage 0: Main Stage
>
>               Viewer     2              1          840     0.
>          PetscRandom     1              1          646     0.
>            Index Set     4              4      5510472     0.
>               Vector     9              9     11629608     0.
>          Vec Scatter     2              2         1936     0.
>               Matrix    10             10    331855732     0.
>       Preconditioner     1              1         1000     0.
>        Krylov Solver     1              1         1176     0.
>           EPS Solver     1              1         1600     0.
>   Spectral Transform     2              2         1624     0.
>        Basis Vectors     1              1         2168     0.
>        Direct Solver     1              1         2520     0.
>               Region     1              1          672     0.
>
> ========================================================================================================================
> Average time to get PetscTime(): 1.19209e-07
> Average time for MPI_Barrier(): 2.67982e-05
> Average time for zero size MPI_Send(): 1.08957e-05
> #PETSc Option Table entries:
> -bv_type mat
> -eps_view_pre
> -log_summary
> #End of PETSc Option Table entries
> Compiled without FORTRAN kernels
> Compiled with full precision matrices (default)
> sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8
> sizeof(PetscScalar) 8 sizeof(PetscInt) 4
> Configure options: --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90
> --download-mumps --with-shared-libraries=True --download-scalapack
> -----------------------------------------
> Libraries compiled on 2018-10-17 20:02:31 on login2.nemo.privat
> Machine characteristics:
> Linux-3.10.0-693.21.1.el7.x86_64-x86_64-with-centos-7.4.1708-Core
> Using PETSc directory: /home/fr/fr_fr/fr_jg1080/Libaries/petsc-3.9.4
> Using PETSc arch: arch-linux2-c-debug
> -----------------------------------------
>
> Using C compiler: mpicc  -fPIC  -wd1572 -g
> Using Fortran compiler: mpif90  -fPIC -g
> -----------------------------------------
>
> Using include paths:
> -I/home/fr/fr_fr/fr_jg1080/Libaries/petsc-3.9.4/include
> -I/home/fr/fr_fr/fr_jg1080/Libaries/petsc-3.9.4/arch-linux2-c-debug/include
> -----------------------------------------
>
> Using C linker: mpicc
> Using Fortran linker: mpif90
> Using libraries:
> -Wl,-rpath,/home/fr/fr_fr/fr_jg1080/Libaries/petsc-3.9.4/arch-linux2-c-debug/lib
> -L/home/fr/fr_fr/fr_jg1080/Libaries/petsc-3.9.4/arch-linux2-c-debug/lib
> -lpetsc
> -Wl,-rpath,/home/fr/fr_fr/fr_jg1080/Libaries/petsc-3.9.4/arch-linux2-c-debug/lib
> -L/home/fr/fr_fr/fr_jg1080/Libaries/petsc-3.9.4/arch-linux2-c-debug/lib
> -Wl,-rpath,/opt/bwhpc/common/compiler/intel/2018.3.222/compilers_and_libraries/linux/mpi/intel64/lib/debug_mt
> -L/opt/bwhpc/common/compiler/intel/2018.3.222/compilers_and_libraries/linux/mpi/intel64/lib/debug_mt
> -Wl,-rpath,/opt/bwhpc/common/compiler/intel/2018.3.222/compilers_and_libraries/linux/mpi/intel64/lib
> -L/opt/bwhpc/common/compiler/intel/2018.3.222/compilers_and_libraries/linux/mpi/intel64/lib
> -Wl,-rpath,/opt/bwhpc/common/compiler/intel/2018.3.222/compilers_and_libraries/linux/lib/intel64
> -L/opt/bwhpc/common/compiler/intel/2018.3.222/compilers_and_libraries/linux/lib/intel64
> -Wl,-rpath,/opt/bwhpc/common/compiler/intel/2018.3.222/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin
> -L/opt/bwhpc/common/compiler/intel/2018.3.222/compilers_and_libraries_2018.3.222/linux/compiler/lib/intel64_lin
> -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.8.5
> -L/usr/lib/gcc/x86_64-redhat-linux/4.8.5
> -Wl,-rpath,/opt/intel/mpi-rt/2017.0.0/intel64/lib/debug_mt
> -Wl,-rpath,/opt/intel/mpi-rt/2017.0.0/intel64/lib -lcmumps -ldmumps
> -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -llapack -lblas -lX11
> -lstdc++ -ldl -lmpifort -lmpi -lmpigi -lrt -lpthread -lifport
> -lifcoremt_pic -limf -lsvml -lm -lipgo -lirc -lgcc_s -lirc_s -lstdc++ -ldl
> -----------------------------------------
>
>
>
>       ##########################################################
>       #                                                        #
>       #                          WARNING!!!                    #
>       #                                                        #
>       #   This code was compiled with a debugging option,      #
>       #   To get timing results run ./configure                #
>       #   using --with-debugging=no, the performance will      #
>       #   be generally two or three times faster.              #
>       #                                                        #
>       ##########################################################
>
>
> On Wed, Oct 24, 2018 at 6:07 PM Jan Grießer <griesser.jan at googlemail.com> wrote:
>
>> For some reason I get only this error message, even when I use
>> -eps_view_pre. I started the program with nev=1; the output is (with
>> -bv_type vecs -bv_type mat -eps_view_pre):
>> EPS Object: 20 MPI processes
>>   type: krylovschur
>>     50% of basis vectors kept after restart
>>     using the locking variant
>>   problem type: symmetric eigenvalue problem
>>   selected portion of the spectrum: smallest real parts
>>   number of eigenvalues (nev): 1
>>   number of column vectors (ncv): 3
>>   maximum dimension of projected problem (mpd): 2
>>   maximum number of iterations: 1000
>>   tolerance: 1e-08
>>   convergence test: relative to the eigenvalue
>> BV Object: 20 MPI processes
>>   type: mat
>>   4 columns of global length 1500000
>>   vector orthogonalization method: classical Gram-Schmidt
>>   orthogonalization refinement: if needed (eta: 0.7071)
>>   block orthogonalization method: GS
>>   doing matmult as a single matrix-matrix product
>> DS Object: 20 MPI processes
>>   type: hep
>>   parallel operation mode: REDUNDANT
>>   solving the problem with: Implicit QR method (_steqr)
>> ST Object: 20 MPI processes
>>   type: shift
>>   shift: 0.
>>   number of matrices: 1
>>
>>
>>
>>
>> On Wed, Oct 24, 2018 at 4:14 PM Matthew Knepley <knepley at gmail.com> wrote:
>>
>>> On Wed, Oct 24, 2018 at 10:03 AM Jan Grießer <
>>> griesser.jan at googlemail.com> wrote:
>>>
>>>> This is the error message I get from my program:
>>>> Shape of the dynamical matrix:  (1500000, 1500000)
>>>>
>>>
>>> PETSc installs a signal handler, so there should be a PETSc-specific
>>> message before this one. Is something eating
>>> your output?
>>>
>>>   Thanks,
>>>
>>>      Matt
>>>
>>>
>>>>
>>>> ===================================================================================
>>>> =   BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
>>>> =   PID 122676 RUNNING AT n3512.nemo.privat
>>>> =   EXIT CODE: 9
>>>> =   CLEANING UP REMAINING PROCESSES
>>>> =   YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
>>>>
>>>> ===================================================================================
>>>>    Intel(R) MPI Library troubleshooting guide:
>>>>       https://software.intel.com/node/561764
>>>>
>>>> ===================================================================================
>>>>
>>>>
>>>> On Wed, Oct 24, 2018 at 4:01 PM Matthew Knepley <knepley at gmail.com> wrote:
>>>>
>>>>> On Wed, Oct 24, 2018 at 9:38 AM Jan Grießer <
>>>>> griesser.jan at googlemail.com> wrote:
>>>>>
>>>>>> Hey,
>>>>>> I tried to run my program as you said with -bv_type vecs and/or
>>>>>> -bv_type mat, but instead of the PetscInt overflow I now get an MPI error 9
>>>>>>
>>>>>
>>>>> Send the actual error.
>>>>>
>>>>>   Thanks,
>>>>>
>>>>>     Matt
>>>>>
>>>>>
>>>>>> message, which I assume (after googling around a bit) indicates a memory
>>>>>> problem. I also tried to run it on slightly bigger compute nodes on our
>>>>>> cluster, with 20 cores and 12 GB or 24 GB each, but the problem still
>>>>>> remains. The actual limit appears to be a 1.5 million x 1.5 million
>>>>>> matrix, for which I searched for NEV = 1500 and MPD = 500/200 eigenvalues.
>>>>>> Do you maybe have an idea what the error could be? I have also appended
>>>>>> the Python method I used to solve the problem. I also tried to solve the
>>>>>> problem with spectrum slicing, but the error message remains the same.
>>>>>>
>>>>>> from petsc4py import PETSc
>>>>>> from slepc4py import SLEPc
>>>>>> # NOTE: slepc4py.init(sys.argv) is assumed to have been called at
>>>>>> # program startup
>>>>>>
>>>>>> def solve_eigensystem(DynMatrix_nn, NEV, MPD, Dimension):
>>>>>>     # Create the solver
>>>>>>     # E is used as an acronym for the EPS solver (EPS = Eigenvalue
>>>>>>     # Problem Solver)
>>>>>>     E = SLEPc.EPS().create()
>>>>>>
>>>>>>     # Set the preconditioner
>>>>>>     pc = PETSc.PC().create()
>>>>>>     pc.setType(pc.Type.BJACOBI)
>>>>>>
>>>>>>     # Set the linear solver: create the KSP object, in this case GMRES,
>>>>>>     # and link it to the PC
>>>>>>     ksp = PETSc.KSP().create()
>>>>>>     ksp.setType(ksp.Type.GMRES)
>>>>>>     ksp.setPC(pc)
>>>>>>
>>>>>>     # Set up the spectral transformation and attach it to the solver
>>>>>>     st = SLEPc.ST().create()
>>>>>>     st.setType("shift")
>>>>>>     st.setKSP(ksp)
>>>>>>     E.setST(st)
>>>>>>
>>>>>>     # MPD stands for "maximum projected dimension". It has to do with
>>>>>>     # computational cost; please read Chap. 2.6.5 of the SLEPc
>>>>>>     # documentation for an explanation. At the moment mpd is only a guess.
>>>>>>     E.setDimensions(nev=NEV, mpd=MPD)
>>>>>>     # Eigenvalues should be real, therefore we order them starting from
>>>>>>     # the smallest real value |l.real|
>>>>>>     E.setWhichEigenpairs(E.Which.SMALLEST_REAL)
>>>>>>     # Since the dynamical matrix is symmetric and real, it is Hermitian
>>>>>>     E.setProblemType(SLEPc.EPS.ProblemType.HEP)
>>>>>>     # Use the Krylov-Schur method to solve the eigenvalue problem
>>>>>>     E.setType(E.Type.KRYLOVSCHUR)
>>>>>>     # Set the convergence criterion (relative to the eigenvalue) and
>>>>>>     # the maximal number of iterations
>>>>>>     E.setConvergenceTest(E.Conv.REL)
>>>>>>     E.setTolerances(tol=1e-8, max_it=5000)
>>>>>>     # Set the matrix whose eigensystem is to be solved
>>>>>>     E.setOperators(DynMatrix_nn, None)
>>>>>>     # Set EPS options from the options database. This routine must be
>>>>>>     # called before setUp() if the user is to be allowed to set the
>>>>>>     # solver type.
>>>>>>     E.setFromOptions()
>>>>>>     # Set up all the internal data structures necessary for the
>>>>>>     # execution of the eigensolver.
>>>>>>     E.setUp()
>>>>>>
>>>>>>     # Solve the eigenvalue problem
>>>>>>     E.solve()
>>>>>>
>>>>>>     Print = PETSc.Sys.Print
>>>>>>
>>>>>>     Print()
>>>>>>     Print("****************************")
>>>>>>     Print("***SLEPc Solution Results***")
>>>>>>     Print("****************************")
>>>>>>
>>>>>>     its = E.getIterationNumber()
>>>>>>     Print("Number of iterations of the method: ", its)
>>>>>>     eps_type = E.getType()
>>>>>>     Print("Solution method: ", eps_type)
>>>>>>     nev, ncv, mpd = E.getDimensions()
>>>>>>     Print("Number of requested eigenvalues: ", nev)
>>>>>>     Print("Number of computed eigenvectors: ", ncv)
>>>>>>     tol, maxit = E.getTolerances()
>>>>>>     Print("Stopping condition: (tol, maxit)", (tol, maxit))
>>>>>>     # Get the type of convergence test
>>>>>>     conv_test = E.getConvergenceTest()
>>>>>>     Print("Selected convergence test: ", conv_test)
>>>>>>     # Get the spectral transformation that was used
>>>>>>     get_st = E.getST()
>>>>>>     Print("Selected spectral transformation: ", get_st)
>>>>>>     # Get the applied direct (dense) solver
>>>>>>     get_ds = E.getDS()
>>>>>>     Print("Selected direct solver: ", get_ds)
>>>>>>     nconv = E.getConverged()
>>>>>>     Print("Number of converged eigenpairs: ", nconv)
>>>>>>     .....
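>>>>>>
>>>>>> A call to this method for the case above might look like the following
>>>>>> (a hypothetical driver line; DynMatrix_nn is assumed to be an assembled
>>>>>> parallel PETSc matrix):
>>>>>>
>>>>>> solve_eigensystem(DynMatrix_nn, NEV=1500, MPD=500, Dimension=1500000)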
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Oct 19, 2018 at 9:00 PM Smith, Barry F. <bsmith at mcs.anl.gov> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> > On Oct 19, 2018, at 7:56 AM, Zhang, Junchao <jczhang at mcs.anl.gov>
>>>>>>> wrote:
>>>>>>> >
>>>>>>> >
>>>>>>> > On Fri, Oct 19, 2018 at 4:02 AM Jan Grießer <
>>>>>>> griesser.jan at googlemail.com> wrote:
>>>>>>> > With more than 1 MPI process, you mean I should use spectrum
>>>>>>> slicing to divide the full problem into smaller subproblems?
>>>>>>> > The --with-64-bit-indices option is not a possibility for me, since I
>>>>>>> configured PETSc with MUMPS, which does not allow the 64-bit version
>>>>>>> (at least this was the error message when I tried to configure PETSc).
>>>>>>> >
>>>>>>> > MUMPS 5.1.2 manual chapter 2.4.2 says it supports "Selective
>>>>>>> 64-bit integer feature" and "full 64-bit integer version" as well.
>>>>>>>
>>>>>>>     They used to achieve this by compiling with special Fortran flags
>>>>>>> to promote integers to 64 bit; this is too fragile for our taste, so we
>>>>>>> never hooked PETSc up with it. If they have a dependable way of using
>>>>>>> 64-bit integers, we should add that to our mumps.py and test it.
>>>>>>>
>>>>>>>    Barry
>>>>>>>
>>>>>>> >
>>>>>>> > On Wed, Oct 17, 2018 at 6:24 PM Jose E. Roman <jroman at dsic.upv.es> wrote:
>>>>>>> > To use BVVECS, just add the command-line option -bv_type vecs.
>>>>>>> > This causes SLEPc to use a separate Vec for each column, instead of a
>>>>>>> single long Vec of size n*m. But it is considerably slower than the default.
>>>>>>> >
>>>>>>> > Anyway, for such large problems you should consider using more
>>>>>>> than 1 MPI process. In that case the error may disappear because the local
>>>>>>> size is smaller than 768000.
>>>>>>> >
>>>>>>> > Jose
>>>>>>> >
>>>>>>> >
>>>>>>> > > On Oct 17, 2018, at 5:58 PM, Matthew Knepley <knepley at gmail.com> wrote:
>>>>>>> > >
>>>>>>> > > On Wed, Oct 17, 2018 at 11:54 AM Jan Grießer <
>>>>>>> griesser.jan at googlemail.com> wrote:
>>>>>>> > > Hi all,
>>>>>>> > > I am using slepc4py and petsc4py to solve for the smallest real
>>>>>>> eigenvalues and eigenvectors. For my test cases with a matrix A of size
>>>>>>> 30k x 30k, solving for the smallest solutions works quite well, but when I
>>>>>>> increase the dimension of my system to around A = 768000 x 768000 or 3
>>>>>>> million x 3 million and ask for the 3000 smallest real eigenvalues and
>>>>>>> eigenvectors (the number increases with increasing system size), I get
>>>>>>> the output (for the 768000):
>>>>>>> > >  The product 4001 times 768000 overflows the size of PetscInt;
>>>>>>> consider reducing the number of columns, or use BVVECS instead
>>>>>>> > > I understand that the requested number of eigenvectors and
>>>>>>> eigenvalues is causing an overflow, but I do not understand the solution
>>>>>>> proposed in the error message. Can someone tell me what exactly BVVECS is
>>>>>>> and how I can use it? Or is there any other solution to my problem?
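>>>>>>> > > As a quick check of the arithmetic (assuming the default 32-bit
>>>>>>> > > PetscInt; the log above shows sizeof(PetscInt) 4):
>>>>>>> > >
>>>>>>> > >     4001 * 768000   # = 3072768000
>>>>>>> > >     2**31 - 1       # = 2147483647, the largest 32-bit PetscInt
>>>>>>> > >
>>>>>>> > > so the product indeed exceeds what a 32-bit PetscInt can hold.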
>>>>>>> > >
>>>>>>> > > You can also reconfigure with 64-bit integers:
>>>>>>> --with-64-bit-indices
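>>>>>>> > > For instance (a sketch; the remaining options mirror your current
>>>>>>> > > build, minus MUMPS, whose 64-bit limitation is discussed above):
>>>>>>> > >
>>>>>>> > >     ./configure --with-cc=mpicc --with-cxx=mpicxx --with-fc=mpif90 \
>>>>>>> > >         --with-shared-libraries=True --with-64-bit-indices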
>>>>>>> > >
>>>>>>> > >   Thanks,
>>>>>>> > >
>>>>>>> > >     Matt
>>>>>>> > >
>>>>>>> > > Thank you very much in advance,
>>>>>>> > > Jan
>>>>>>> > >
>>>>>>> > >
>>>>>>> > >
>>>>>>> > > --
>>>>>>> > > What most experimenters take for granted before they begin their
>>>>>>> experiments is infinitely more interesting than any results to which their
>>>>>>> experiments lead.
>>>>>>> > > -- Norbert Wiener
>>>>>>> > >
>>>>>>> > > https://www.cse.buffalo.edu/~knepley/
>>>>>>> >
>>>>>>>
>>>>>>>
>>>>>
>>>>> --
>>>>> What most experimenters take for granted before they begin their
>>>>> experiments is infinitely more interesting than any results to which their
>>>>> experiments lead.
>>>>> -- Norbert Wiener
>>>>>
>>>>> https://www.cse.buffalo.edu/~knepley/
>>>>>
>>>>
>>>
>>> --
>>> What most experimenters take for granted before they begin their
>>> experiments is infinitely more interesting than any results to which their
>>> experiments lead.
>>> -- Norbert Wiener
>>>
>>> https://www.cse.buffalo.edu/~knepley/
>>>
>>

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener

https://www.cse.buffalo.edu/~knepley/