[petsc-users] MUMPS error

venkatesh g venkateshgk.j at gmail.com
Mon May 18 08:29:08 CDT 2015


Hi, I have attached the performance logs for two jobs run on different numbers
of processors. I had to increase the MUMPS workspace parameter ICNTL(14) when
submitting on more cores, since the factorization fails there with a smaller
value of ICNTL(14). (A programmatic way of setting these options is sketched
below the list.)

1. performance_log1.txt: run on 8 cores (with -mat_mumps_icntl_14 200)
2. performance_log2.txt: run on 2 cores (with -mat_mumps_icntl_14 85)
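
As a side note, the same options can also be pushed into the PETSc options
database programmatically instead of being passed on the command line. Below is
a minimal sketch, assuming a SLEPc driver along the lines of ex7 and the
two-argument PetscOptionsSetValue() of the PETSc 3.5 series (later releases
take a PetscOptions object as the first argument):

#include <slepceps.h>

int main(int argc, char **argv)
{
  EPS            eps;
  PetscErrorCode ierr;

  SlepcInitialize(&argc, &argv, NULL, NULL);

  /* Set the spectral-transform / solver options before EPSSetFromOptions()
     so they are picked up exactly as if they came from the command line. */
  ierr = PetscOptionsSetValue("-st_type", "sinvert");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue("-st_ksp_type", "preonly");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue("-st_pc_type", "lu");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue("-st_pc_factor_mat_solver_package", "mumps");CHKERRQ(ierr);
  ierr = PetscOptionsSetValue("-mat_mumps_icntl_14", "200");CHKERRQ(ierr);

  ierr = EPSCreate(PETSC_COMM_WORLD, &eps);CHKERRQ(ierr);
  /* ... load A and B from the binary files and call
     EPSSetOperators(eps, A, B), as ex7 does ... */
  ierr = EPSSetFromOptions(eps);CHKERRQ(ierr);  /* reads the options set above */
  /* EPSSolve(eps) would follow once the operators are set */

  ierr = EPSDestroy(&eps);CHKERRQ(ierr);
  SlepcFinalize();
  return 0;
}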

Venkatesh

On Sun, May 17, 2015 at 6:13 PM, Matthew Knepley <knepley at gmail.com> wrote:

> On Sun, May 17, 2015 at 1:38 AM, venkatesh g <venkateshgk.j at gmail.com>
> wrote:
>
>> Hi, thanks for the information. I have now increased the workspace by
>> adding '-mat_mumps_icntl_14 100'.
>>
>> It works. However, the problem is that on 1 core I get the answer in
>> 200 seconds, whereas on 4 cores with '-mat_mumps_icntl_14 100' it takes
>> 3500 seconds.
>>
>
> Send the output of -log_summary for all performance queries. Otherwise we
> are just guessing.
>
>     Matt
>
>> My command line is: 'mpiexec -np 4 ./ex7 -f1 a2 -f2 b2 -eps_nev 1 -st_type
>> sinvert -eps_max_it 5000 -st_ksp_type preonly -st_pc_type lu
>> -st_pc_factor_mat_solver_package mumps -mat_mumps_icntl_14 100'
>>
>> Kindly let me know.
>>
>> Venkatesh
>>
>>
>>
>> On Sat, May 16, 2015 at 7:10 PM, David Knezevic <david.knezevic at akselos.com> wrote:
>>
>>> On Sat, May 16, 2015 at 8:08 AM, venkatesh g <venkateshgk.j at gmail.com>
>>> wrote:
>>>
>>>> Hi,
>>>> I am trying to solve the generalized eigenvalue problem AX = lambda BX.
>>>>
>>>> A and B are both of size 3600x3600.
>>>>
>>>> I run with this command :
>>>>
>>>> 'mpiexec -np 4 ./ex7 -f1 a2 -f2 b2 -eps_nev 1 -st_type sinvert
>>>> -eps_max_it 5000 -st_ksp_type preonly -st_pc_type lu
>>>> -st_pc_factor_mat_solver_package mumps'
>>>>
>>>> I get this error (I get a result only when I use 1 or 2 processors):
>>>> Reading COMPLEX matrices from binary files...
>>>> [0]PETSC ERROR: --------------------- Error Message
>>>> ------------------------------------
>>>> [0]PETSC ERROR: Error in external library!
>>>> [0]PETSC ERROR: Error reported by MUMPS in numerical factorization
>>>> phase: INFO(1)=-9, INFO(2)=2024
>>>>
>>>
>>>
>>> The MUMPS error types are described in Chapter 7 of the MUMPS manual. In
>>> this case you have INFO(1)=-9, which is explained in the manual as:
>>>
>>> "–9 Main internal real/complex workarray S too small. If INFO(2) is
>>> positive, then the number of entries that are missing in S at the moment
>>> when the error is raised is available in INFO(2). If INFO(2) is negative,
>>> then its absolute value should be multiplied by 1 million. If an error –9
>>> occurs, the user should increase the value of ICNTL(14) before calling the
>>> factorization (JOB=2) again, except if ICNTL(23) is provided, in which case
>>> ICNTL(23) should be increased."
>>>
>>> This says that you should use ICNTL(14) to increase the working space
>>> size:
>>>
>>> "ICNTL(14) is accessed by the host both during the analysis and the
>>> factorization phases. It corresponds to the percentage increase in the
>>> estimated working space. When significant extra fill-in is caused by
>>> numerical pivoting, increasing ICNTL(14) may help. Except in special cases,
>>> the default value is 20 (which corresponds to a 20 % increase)."
>>>
>>> So, for example, you can avoid this error via the following command line
>>> argument to PETSc: "-mat_mumps_icntl_14 30", where 30 indicates that we
>>> allow a 30% increase in the workspace instead of the default 20%.
>>>
>>> David
>>>
>>>
>>>
>>
>
>
> --
> What most experimenters take for granted before they begin their
> experiments is infinitely more interesting than any results to which their
> experiments lead.
> -- Norbert Wiener
>
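
Following up on David's ICNTL(14) explanation above: besides the
-mat_mumps_icntl_14 option, the same control can be set on the MUMPS factor
object through the PETSc API. Below is a minimal, self-contained sketch for a
plain direct solve (a toy 1-D Laplacian rather than the matrices from this
thread), assuming the PETSc 3.5 routine names PCFactorSetMatSolverPackage /
PCFactorSetUpMatSolverPackage (newer releases spell these ...MatSolverType):

#include <petscksp.h>

int main(int argc, char **argv)
{
  Mat            A, F;                 /* system matrix and MUMPS factor */
  Vec            x, b;
  KSP            ksp;
  PC             pc;
  PetscInt       i, rstart, rend, ncols, col[3], n = 100;
  PetscScalar    v[3];
  PetscErrorCode ierr;

  PetscInitialize(&argc, &argv, NULL, NULL);

  /* Assemble a 1-D Laplacian just to have something to factor */
  ierr = MatCreateAIJ(PETSC_COMM_WORLD, PETSC_DECIDE, PETSC_DECIDE, n, n, 3, NULL, 2, NULL, &A);CHKERRQ(ierr);
  ierr = MatGetOwnershipRange(A, &rstart, &rend);CHKERRQ(ierr);
  for (i = rstart; i < rend; i++) {
    ncols = 0;
    if (i > 0)     { col[ncols] = i - 1; v[ncols] = -1.0; ncols++; }
    col[ncols] = i; v[ncols] = 2.0; ncols++;
    if (i < n - 1) { col[ncols] = i + 1; v[ncols] = -1.0; ncols++; }
    ierr = MatSetValues(A, 1, &i, ncols, col, v, INSERT_VALUES);CHKERRQ(ierr);
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = VecCreate(PETSC_COMM_WORLD, &b);CHKERRQ(ierr);
  ierr = VecSetSizes(b, PETSC_DECIDE, n);CHKERRQ(ierr);
  ierr = VecSetFromOptions(b);CHKERRQ(ierr);
  ierr = VecDuplicate(b, &x);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);

  /* LU with MUMPS; grab the factor object and raise ICNTL(14) before the
     factorization happens inside KSPSolve() */
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPPREONLY);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCLU);CHKERRQ(ierr);
  ierr = PCFactorSetMatSolverPackage(pc, MATSOLVERMUMPS);CHKERRQ(ierr);
  ierr = PCFactorSetUpMatSolverPackage(pc);CHKERRQ(ierr);   /* creates F */
  ierr = PCFactorGetMatrix(pc, &F);CHKERRQ(ierr);
  ierr = MatMumpsSetIcntl(F, 14, 100);CHKERRQ(ierr);        /* 100% extra workspace */
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  PetscFinalize();
  return 0;
}

For the shift-and-invert SLEPc case in this thread, the command-line option is
the simpler route, since the KSP and PC live inside the ST object.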
-------------- next part --------------
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./ex7 on a linux-gnu named compute-0-0.local with 8 processors, by venkatesh Mon May 18 23:46:29 2015
Using Petsc Release Version 3.5.3, Jan, 31, 2015 

                         Max       Max/Min        Avg      Total 
Time (sec):           6.437e+02      1.00000   6.437e+02
Objects:              6.200e+01      1.03333   6.100e+01
Flops:                2.592e+09      1.17536   2.284e+09  1.827e+10
Flops/sec:            4.027e+06      1.17536   3.548e+06  2.838e+07
MPI Messages:         6.246e+04      6.48624   2.163e+04  1.730e+05
MPI Message Lengths:  3.670e+08      5.23102   5.572e+03  9.643e+08
MPI Reductions:       2.732e+04      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 6.4373e+02 100.0%  1.8271e+10 100.0%  1.730e+05 100.0%  5.572e+03      100.0%  2.731e+04 100.0% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult             9604 1.0 1.8260e+0019.8 3.88e+08512.9 1.9e+04 6.4e+03 0.0e+00  0  3 11 13  0   0  3 11 13  0   346
MatSolve            9600 1.0 6.3221e+02 1.0 0.00e+00 0.0 1.5e+05 5.3e+03 9.6e+03 98  0 89 85 35  98  0 89 85 35     0
MatLUFactorSym         1 1.0 9.9645e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         1 1.0 4.3477e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   1  0  0  0  0     0
MatAssemblyBegin       2 1.0 8.7760e-02 8.1 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         2 1.0 2.3864e-02 1.7 0.00e+00 0.0 8.8e+01 5.8e+02 1.6e+01  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            1 1.0 3.0994e-06 3.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 5.9128e-05 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLoad                2 1.0 2.1068e-01 1.0 0.00e+00 0.0 1.3e+02 1.7e+05 2.6e+01  0  0  0  2  0   0  0  0  2  0     0
VecNorm                4 1.0 2.6133e-03 2.3 1.44e+04 1.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  0   0  0  0  0  0    44
VecCopy             1202 1.0 2.3665e-03 1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet              9605 1.0 2.6972e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                2 1.0 9.0599e-06 1.8 7.20e+03 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  6358
VecScatterBegin    28804 1.0 5.1131e+0020.6 0.00e+00 0.0 1.7e+05 5.5e+03 9.6e+03  0  0100 98 35   0  0100 98 35     0
VecScatterEnd      19204 1.0 3.1801e+0019.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
EPSSetUp               1 1.0 5.3490e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.1e+01  1  0  0  0  0   1  0  0  0  0     0
EPSSolve               1 1.0 6.4351e+02 1.0 2.59e+09 1.2 1.7e+05 5.5e+03 2.7e+04100100100 98100 100100100 98100    28
STSetUp                1 1.0 5.3477e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.3e+01  1  0  0  0  0   1  0  0  0  0     0
STApply             9600 1.0 6.3276e+02 1.0 3.84e+08 0.0 1.7e+05 5.5e+03 9.6e+03 98  3100 98 35  98  3100 98 35     1
STMatSolve          9600 1.0 6.3261e+02 1.0 0.00e+00 0.0 1.5e+05 5.3e+03 9.6e+03 98  0 89 85 35  98  0 89 85 35     0
BVCopy              1201 1.0 5.5201e-03 1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
BVMult             18857 1.0 6.8241e-01 1.1 1.34e+09 1.0 0.0e+00 0.0e+00 0.0e+00  0 59  0  0  0   0 59  0  0  0 15686
BVDot              17657 1.0 7.0668e+00 1.7 8.49e+08 1.0 0.0e+00 0.0e+00 1.8e+04  1 37  0  0 65   1 37  0  0 65   962
BVOrthogonalize     9601 1.0 7.5371e+00 1.6 1.64e+09 1.0 0.0e+00 0.0e+00 1.8e+04  1 72  0  0 65   1 72  0  0 65  1736
BVScale             9601 1.0 2.8411e-02 1.7 1.73e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  4866
BVSetRandom            1 1.0 1.6403e-04 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DSSolve             1199 1.0 2.6696e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DSVectors           1202 1.0 9.8028e-03 1.5 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DSOther             1199 1.0 1.0596e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSetUp               1 1.0 1.1921e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve            9600 1.0 6.3237e+02 1.0 0.00e+00 0.0 1.5e+05 5.3e+03 9.6e+03 98  0 89 85 35  98  0 89 85 35     0
PCSetUp                1 1.0 5.3474e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.1e+01  1  0  0  0  0   1  0  0  0  0     0
PCApply             9600 1.0 6.3223e+02 1.0 0.00e+00 0.0 1.5e+05 5.3e+03 9.6e+03 98  0 89 85 35  98  0 89 85 35     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Viewer     3              2         1504     0
              Matrix    11             11      1063160     0
              Vector    23             23       336320     0
      Vector Scatter     4              4         4352     0
           Index Set    11             11        27628     0
Eigenvalue Problem Solver     1              1         2052     0
         PetscRandom     1              1          648     0
  Spectral Transform     1              1          840     0
       Basis Vectors     1              1         9656     0
              Region     1              1          648     0
       Direct solver     1              1        20600     0
       Krylov Solver     1              1         1160     0
      Preconditioner     1              1         1096     0
========================================================================================================================
Average time to get PetscTime(): 1.19209e-07
Average time for MPI_Barrier(): 0.000147009
Average time for zero size MPI_Send(): 8.14795e-05
#PETSc Option Table entries:
-eps_max_it 2000
-eps_nev 1
-f1 a2
-f2 b2
-log_summary
-mat_mumps_icntl_14 200
-st_ksp_type preonly
-st_pc_factor_mat_solver_package mumps
-st_pc_type lu
-st_type sinvert
#End of PETSc Option Table entries
Compiled with FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 16 sizeof(PetscInt) 4
Configure options: PETSC_ARCH=linux-gnu --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack=/cluster/share/apps/fblaslapack-3.4.2.tar.gz --download-mpich=/cluster/share/apps/mpich-3.1.tar.gz --download-mumps=/cluster/share/apps/MUMPS_4.10.0-p3.tar.gz --download-scalapack=/cluster/share/apps/scalapack-2.0.2.tgz --download-blacs=/cluster/share/apps/blacs-dev.tar.gz --download-parmetis=/cluster/share/apps/parmetis-4.0.2-p5.tar.gz --download-metis=/cluster/share/apps/metis-5.0.2-p3.tar.gz --download-cmake=/cluster/share/apps/cmake-2.8.12.2.tar.gz --with-scalar-type=complex --with-fortran-kernels=1 --with-large-file-io=1 --with-debugging=no
-----------------------------------------
Libraries compiled on Mon May 18 15:07:10 2015 on earth.ceas.iisc.ernet.in 
Machine characteristics: Linux-2.6.32-279.14.1.el6.x86_64-x86_64-with-centos-6.3-Final
Using PETSc directory: /cluster/share/venkatesh/petsc-3.5.3
Using PETSc arch: linux-gnu
-----------------------------------------

Using C compiler: /cluster/share/venkatesh/petsc-3.5.3/linux-gnu/bin/mpicc  -fPIC -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /cluster/share/venkatesh/petsc-3.5.3/linux-gnu/bin/mpif90  -fPIC  -Wall -Wno-unused-variable -ffree-line-length-0 -O  ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/include -I/cluster/share/venkatesh/petsc-3.5.3/include -I/cluster/share/venkatesh/petsc-3.5.3/include -I/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/include
-----------------------------------------

Using C linker: /cluster/share/venkatesh/petsc-3.5.3/linux-gnu/bin/mpicc
Using Fortran linker: /cluster/share/venkatesh/petsc-3.5.3/linux-gnu/bin/mpif90
Using libraries: -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -L/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -lpetsc -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -L/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lflapack -lfblas -lparmetis -lmetis -lX11 -lpthread -lssl -lcrypto -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -lmpichf90 -lgfortran -lm -lmpichcxx -lstdc++ -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -L/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -lmpichcxx -lstdc++ -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -L/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -ldl -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl  
-------------- next part --------------
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./ex7 on a linux-gnu named compute-0-0.local with 2 processors, by venkatesh Mon May 18 23:59:48 2015
Using Petsc Release Version 3.5.3, Jan, 31, 2015 

                         Max       Max/Min        Avg      Total 
Time (sec):           2.291e+02      1.00000   2.291e+02
Objects:              6.200e+01      1.03333   6.100e+01
Flops:                7.555e+09      1.06786   7.315e+09  1.463e+10
Flops/sec:            3.298e+07      1.06786   3.193e+07  6.386e+07
MPI Messages:         7.633e+03      1.00000   7.633e+03  1.527e+04
MPI Message Lengths:  2.313e+08      1.00000   3.030e+04  4.625e+08
MPI Reductions:       2.195e+04      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total 
 0:      Main Stage: 2.2909e+02 100.0%  1.4629e+10 100.0%  1.527e+04 100.0%  3.030e+04      100.0%  2.195e+04 100.0% 

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length (bytes)
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------
Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

MatMult             7628 1.0 7.5923e-01 3.1 4.87e+0871.0 4.0e+00 2.4e+04 0.0e+00  0  3  0  0  0   0  3  0  0  0   650
MatSolve            7624 1.0 2.1264e+02 1.0 0.00e+00 0.0 1.5e+04 3.0e+04 7.6e+03 93  0100 98 35  93  0100 98 35     0
MatLUFactorSym         1 1.0 9.4827e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 5.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLUFactorNum         1 1.0 9.8847e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  4  0  0  0  0   4  0  0  0  0     0
MatAssemblyBegin       2 1.0 1.5272e-0268.8 0.00e+00 0.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatAssemblyEnd         2 1.0 1.6832e-02 1.0 0.00e+00 0.0 4.0e+00 3.0e+03 1.6e+01  0  0  0  0  0   0  0  0  0  0     0
MatGetRowIJ            1 1.0 2.8610e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatGetOrdering         1 1.0 1.1611e-04 2.6 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
MatLoad                2 1.0 1.1695e-01 1.0 0.00e+00 0.0 1.0e+01 8.6e+05 2.6e+01  0  0  0  2  0   0  0  0  2  0     0
VecNorm                4 1.0 8.0910e-0341.9 5.76e+04 1.0 0.0e+00 0.0e+00 4.0e+00  0  0  0  0  0   0  0  0  0  0    14
VecCopy              955 1.0 4.1749e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecSet              7629 1.0 6.4457e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
VecAXPY                2 1.0 1.8835e-05 1.1 2.88e+04 1.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0  3058
VecScatterBegin    22876 1.0 5.9344e-01 1.4 0.00e+00 0.0 1.5e+04 3.0e+04 7.6e+03  0  0100 98 35   0  0100 98 35     0
VecScatterEnd      15252 1.0 2.2672e-01 2.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
EPSSetUp               1 1.0 1.0835e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.1e+01  5  0  0  0  0   5  0  0  0  0     0
EPSSolve               1 1.0 2.2895e+02 1.0 7.54e+09 1.1 1.5e+04 3.0e+04 2.2e+04100100100 98100 100100100 98100    64
STSetUp                1 1.0 1.0834e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.3e+01  5  0  0  0  0   5  0  0  0  0     0
STApply             7624 1.0 2.1335e+02 1.0 4.76e+08 0.0 1.5e+04 3.0e+04 7.6e+03 93  3100 98 35  93  3100 98 35     2
STMatSolve          7624 1.0 2.1307e+02 1.0 0.00e+00 0.0 1.5e+04 3.0e+04 7.6e+03 93  0100 98 35  93  0100 98 35     0
BVCopy               954 1.0 7.0782e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
BVMult             15224 1.0 1.9998e+00 1.0 4.28e+09 1.0 0.0e+00 0.0e+00 0.0e+00  1 59  0  0  0   1 59  0  0  0  4281
BVDot              14271 1.0 2.2638e+00 1.0 2.73e+09 1.0 0.0e+00 0.0e+00 1.4e+04  1 37  0  0 65   1 37  0  0 65  2414
BVOrthogonalize     7625 1.0 3.4605e+00 1.0 5.26e+09 1.0 0.0e+00 0.0e+00 1.4e+04  2 72  0  0 65   2 72  0  0 65  3040
BVScale             7625 1.0 5.1506e-02 1.2 5.49e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0  1  0  0  0   0  1  0  0  0  2132
BVSetRandom            1 1.0 1.8215e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DSSolve              952 1.0 2.1509e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DSVectors            955 1.0 8.1818e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
DSOther              952 1.0 8.5477e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSetUp               1 1.0 1.1921e-06 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0  0  0  0  0   0  0  0  0  0     0
KSPSolve            7624 1.0 2.1285e+02 1.0 0.00e+00 0.0 1.5e+04 3.0e+04 7.6e+03 93  0100 98 35  93  0100 98 35     0
PCSetUp                1 1.0 1.0834e+01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 1.1e+01  5  0  0  0  0   5  0  0  0  0     0
PCApply             7624 1.0 2.1266e+02 1.0 0.00e+00 0.0 1.5e+04 3.0e+04 7.6e+03 93  0100 98 35  93  0100 98 35     0
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.
Reports information only for process 0.

--- Event Stage 0: Main Stage

              Viewer     3              2         1504     0
              Matrix    11             11     14200392     0
              Vector    23             23      1015008     0
      Vector Scatter     4              4        15536     0
           Index Set    13             13        48956     0
Eigenvalue Problem Solver     1              1         2052     0
         PetscRandom     1              1          648     0
  Spectral Transform     1              1          840     0
       Basis Vectors     1              1         9656     0
              Region     1              1          648     0
       Direct solver     1              1        20600     0
       Krylov Solver     1              1         1160     0
      Preconditioner     1              1         1096     0
========================================================================================================================
Average time to get PetscTime(): 9.53674e-08
Average time for MPI_Barrier(): 2.24113e-05
Average time for zero size MPI_Send(): 1.14441e-05
#PETSc Option Table entries:
-eps_max_it 2000
-eps_nev 1
-f1 a2
-f2 b2
-log_summary
-mat_mumps_icntl_14 85
-st_ksp_type preonly
-st_pc_factor_mat_solver_package mumps
-st_pc_type lu
-st_type sinvert
#End of PETSc Option Table entries
Compiled with FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 16 sizeof(PetscInt) 4
Configure options: PETSC_ARCH=linux-gnu --with-cc=gcc --with-cxx=g++ --with-fc=gfortran --download-fblaslapack=/cluster/share/apps/fblaslapack-3.4.2.tar.gz --download-mpich=/cluster/share/apps/mpich-3.1.tar.gz --download-mumps=/cluster/share/apps/MUMPS_4.10.0-p3.tar.gz --download-scalapack=/cluster/share/apps/scalapack-2.0.2.tgz --download-blacs=/cluster/share/apps/blacs-dev.tar.gz --download-parmetis=/cluster/share/apps/parmetis-4.0.2-p5.tar.gz --download-metis=/cluster/share/apps/metis-5.0.2-p3.tar.gz --download-cmake=/cluster/share/apps/cmake-2.8.12.2.tar.gz --with-scalar-type=complex --with-fortran-kernels=1 --with-large-file-io=1 --with-debugging=no
-----------------------------------------
Libraries compiled on Mon May 18 15:07:10 2015 on earth.ceas.iisc.ernet.in 
Machine characteristics: Linux-2.6.32-279.14.1.el6.x86_64-x86_64-with-centos-6.3-Final
Using PETSc directory: /cluster/share/venkatesh/petsc-3.5.3
Using PETSc arch: linux-gnu
-----------------------------------------

Using C compiler: /cluster/share/venkatesh/petsc-3.5.3/linux-gnu/bin/mpicc  -fPIC -D_LARGEFILE64_SOURCE -D_FILE_OFFSET_BITS=64 -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O  ${COPTFLAGS} ${CFLAGS}
Using Fortran compiler: /cluster/share/venkatesh/petsc-3.5.3/linux-gnu/bin/mpif90  -fPIC  -Wall -Wno-unused-variable -ffree-line-length-0 -O  ${FOPTFLAGS} ${FFLAGS} 
-----------------------------------------

Using include paths: -I/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/include -I/cluster/share/venkatesh/petsc-3.5.3/include -I/cluster/share/venkatesh/petsc-3.5.3/include -I/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/include
-----------------------------------------

Using C linker: /cluster/share/venkatesh/petsc-3.5.3/linux-gnu/bin/mpicc
Using Fortran linker: /cluster/share/venkatesh/petsc-3.5.3/linux-gnu/bin/mpif90
Using libraries: -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -L/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -lpetsc -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -L/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lflapack -lfblas -lparmetis -lmetis -lX11 -lpthread -lssl -lcrypto -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -lmpichf90 -lgfortran -lm -lmpichcxx -lstdc++ -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -L/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -lmpichcxx -lstdc++ -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -L/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -Wl,-rpath,/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -L/usr/lib/gcc/x86_64-redhat-linux/4.4.6 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/compiler/lib/intel64 -Wl,-rpath,/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -L/opt/intel/composer_xe_2013.5.192/mkl/lib/intel64 -ldl -Wl,-rpath,/cluster/share/venkatesh/petsc-3.5.3/linux-gnu/lib -lmpich -lopa -lmpl -lrt -lpthread -lgcc_s -ldl  
-----------------------------------------

