[petsc-users] How to use multigrid?
Jed Brown
jedbrown at mcs.anl.gov
Sat Nov 3 12:08:26 CDT 2012
1. What kind of equation are you solving? AMG is not working well if it
takes that many iterations.
2.
 ##########################################################
 #                                                        #
 #                       WARNING!!!                       #
 #                                                        #
 #   This code was compiled with a debugging option,      #
 #   To get timing results run ./configure                #
 #   using --with-debugging=no, the performance will      #
 #   be generally two or three times faster.              #
 #                                                        #
 ##########################################################
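For example, rerunning configure with the options you already use plus the
optimization flag, then rebuilding PETSc and ex4f, gives a build suitable for
timing (by default it goes into a separate PETSC_ARCH, typically
arch-linux2-c-opt, so the debug build is kept):

  ./configure --with-mpi-dir=/home/ubu/soft/mpich2/ --download-f-blas-lapack=1 --with-debugging=no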
On Sat, Nov 3, 2012 at 12:05 PM, w_ang_temp <w_ang_temp at 163.com> wrote:
> (1) using AMG
> [0]PCSetData_AGG bs=1 MM=7601
> Linear solve converged due to CONVERGED_RTOL iterations 445
> Norm of error 0.2591E+04 iterations 445
> 0.000000000000000E+000 0.000000000000000E+000 0.000000000000000E+000
> -2.105776715959587E-017 0.000000000000000E+000 0.000000000000000E+000
> 26.4211453778391 -3.262172452839194E-017 -2.114490133288630E-017
>
> ************************************************************************************************************************
> *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r
> -fCourier9' to print this document ***
> ************************************************************************************************************************
> ---------------------------------------------- PETSc Performance Summary:
> ----------------------------------------------
> ./ex4f on a arch-linux2-c-debug named ubuntu with 4 processors, by ubu Sat
> Nov 3 09:29:28 2012
> Using Petsc Release Version 3.3.0, Patch 4, Fri Oct 26 10:46:51 CDT 2012
> Max Max/Min Avg Total
> Time (sec): 3.198e+02 1.00002 3.198e+02
> Objects: 4.480e+02 1.00000 4.480e+02
> Flops: 2.296e+09 1.08346 2.172e+09 8.689e+09
> Flops/sec: 7.181e+06 1.08344 6.792e+06 2.717e+07
> Memory: 2.374e+07 1.04179 9.297e+07
> MPI Messages: 6.843e+03 1.87582 5.472e+03 2.189e+04
> MPI Message Lengths: 2.660e+07 2.08884 3.446e+03 7.542e+07
> MPI Reductions: 6.002e+04 1.00000
> Flop counting convention: 1 flop = 1 real number operation of type
> (multiply/divide/add/subtract)
> e.g., VecAXPY() for real vectors of length N
> --> 2N flops
> and VecAXPY() for complex vectors of length N
> --> 8N flops
> Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages
> --- -- Message Lengths -- -- Reductions --
> Avg %Total Avg %Total counts
> %Total Avg %Total counts %Total
> 0: Main Stage: 3.1981e+02 100.0% 8.6886e+09 100.0% 2.189e+04
> 100.0% 3.446e+03 100.0% 6.001e+04 100.0%
>
> ------------------------------------------------------------------------------------------------------------------------
> See the 'Profiling' chapter of the users' manual for details on
> interpreting output.
> Phase summary info:
> Count: number of times phase was executed
> Time and Flops: Max - maximum over all processors
> Ratio - ratio of maximum to minimum over all processors
> Mess: number of messages sent
> Avg. len: average message length
> Reduct: number of global reductions
> Global: entire computation
> Stage: stages of a computation. Set stages with PetscLogStagePush() and
> PetscLogStagePop().
> %T - percent time in this phase %f - percent flops in this
> phase
> %M - percent messages in this phase %L - percent message lengths
> in this phase
> %R - percent reductions in this phase
> Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time
> over all processors)
>
> ------------------------------------------------------------------------------------------------------------------------
>
> ##########################################################
> # #
> # WARNING!!! #
> # #
> # This code was compiled with a debugging option, #
> # To get timing results run ./configure #
> # using --with-debugging=no, the performance will #
> # be generally two or three times faster. #
> # #
> ##########################################################
>
> Event Count Time (sec)
> Flops --- Global --- --- Stage --- Total
> Max Ratio Max Ratio Max Ratio Mess Avg len
> Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s
>
> ------------------------------------------------------------------------------------------------------------------------
> --- Event Stage 0: Main Stage
> MatMult 6291 1.0 2.1431e+01 4.2 1.01e+09 1.2 1.9e+04 3.4e+03
> 0.0e+00 4 42 86 85 0 4 42 86 85 0 170
> MatMultAdd 896 1.0 4.8204e-01 1.1 2.79e+07 1.2 1.3e+03 3.4e+03
> 0.0e+00 0 1 6 6 0 0 1 6 6 0 208
> MatMultTranspose 896 1.0 2.2052e+00 1.3 2.79e+07 1.2 1.3e+03 3.4e+03
> 1.8e+03 1 1 6 6 3 1 1 6 6 3 45
> MatSolve 896 0.0 1.4953e-02 0.0 2.44e+06 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 163
> MatLUFactorSym 1 1.0 1.7595e-04 4.7 0.00e+00 0.0 0.0e+00 0.0e+00
> 5.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatLUFactorNum 1 1.0 1.3995e-0423.5 1.85e+04 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 132
> MatConvert 2 1.0 3.1026e-02 1.7 0.00e+00 0.0 0.0e+00 0.0e+00
> 1.2e+01 0 0 0 0 0 0 0 0 0 0 0
> MatScale 6 1.0 8.9679e-03 5.7 3.98e+05 1.2 6.0e+00 3.4e+03
> 4.0e+00 0 0 0 0 0 0 0 0 0 0 158
> MatAssemblyBegin 37 1.0 2.1544e-01 1.7 0.00e+00 0.0 5.4e+01 6.4e+03
> 4.2e+01 0 0 0 0 0 0 0 0 0 0 0
> MatAssemblyEnd 37 1.0 2.5336e-01 1.4 0.00e+00 0.0 9.0e+01 6.8e+02
> 3.1e+02 0 0 0 0 1 0 0 0 0 1 0
> MatGetRow 26874 1.2 9.8243e-02 1.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatGetRowIJ 1 0.0 2.5988e-05 0.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatGetOrdering 1 0.0 1.5616e-04 0.0 0.00e+00 0.0 0.0e+00 0.0e+00
> 5.0e-01 0 0 0 0 0 0 0 0 0 0 0
> MatCoarsen 2 1.0 2.9671e-02 1.0 0.00e+00 0.0 2.4e+01 7.0e+03
> 3.8e+01 0 0 0 0 0 0 0 0 0 0 0
> MatAXPY 2 1.0 6.1393e-04 2.1 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatTranspose 2 1.0 1.8161e-01 1.1 0.00e+00 0.0 3.0e+01 7.1e+03
> 9.4e+01 0 0 0 0 0 0 0 0 0 0 0
> MatMatMult 2 1.0 1.1968e-01 1.0 3.31e+05 1.2 3.6e+01 1.7e+03
> 1.1e+02 0 0 0 0 0 0 0 0 0 0 10
> MatMatMultSym 2 1.0 9.8982e-02 1.0 0.00e+00 0.0 3.0e+01 1.4e+03
> 9.6e+01 0 0 0 0 0 0 0 0 0 0 0
> MatMatMultNum 2 1.0 2.1248e-02 1.1 3.31e+05 1.2 6.0e+00 3.4e+03
> 1.2e+01 0 0 0 0 0 0 0 0 0 0 56
> MatPtAP 2 1.0 1.7070e-01 1.1 2.36e+06 1.2 5.4e+01 3.3e+03
> 1.1e+02 0 0 0 0 0 0 0 0 0 0 50
> MatPtAPSymbolic 2 1.0 1.3786e-01 1.1 0.00e+00 0.0 4.8e+01 3.1e+03
> 1.0e+02 0 0 0 0 0 0 0 0 0 0 0
> MatPtAPNumeric 2 1.0 4.7638e-02 2.2 2.36e+06 1.2 6.0e+00 4.8e+03
> 1.2e+01 0 0 0 0 0 0 0 0 0 0 180
> MatTrnMatMult 2 1.0 7.9914e-01 1.0 1.14e+07 1.3 3.6e+01 2.1e+04
> 1.2e+02 0 0 0 1 0 0 0 0 1 0 48
> MatGetLocalMat 10 1.0 6.7852e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00
> 2.4e+01 0 0 0 0 0 0 0 0 0 0 0
> MatGetBrAoCol 6 1.0 3.6962e-02 2.4 0.00e+00 0.0 4.2e+01 4.5e+03
> 1.6e+01 0 0 0 0 0 0 0 0 0 0 0
> MatGetSymTrans 4 1.0 4.4394e-04 1.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecMDot 913 1.0 2.8472e+00 3.0 5.28e+08 1.0 0.0e+00 0.0e+00
> 9.1e+02 1 24 0 0 2 1 24 0 0 2 741
> VecNorm 1367 1.0 2.7202e+00 1.6 7.12e+06 1.0 0.0e+00 0.0e+00
> 1.4e+03 1 0 0 0 2 1 0 0 0 2 10
> VecScale 4950 1.0 6.0693e-02 1.2 1.96e+07 1.1 0.0e+00 0.0e+00
> 0.0e+00 0 1 0 0 0 0 1 0 0 0 1169
> VecCopy 1349 1.0 7.8685e-03 1.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecSet 4972 1.0 1.2852e-02 1.3 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecAXPY 7624 1.0 1.3610e-01 1.1 6.44e+07 1.2 0.0e+00 0.0e+00
> 0.0e+00 0 3 0 0 0 0 3 0 0 0 1676
> VecAYPX 7168 1.0 1.5877e-01 1.2 4.01e+07 1.2 0.0e+00 0.0e+00
> 0.0e+00 0 2 0 0 0 0 2 0 0 0 896
> VecMAXPY 1366 1.0 6.3739e-01 1.3 5.35e+08 1.0 0.0e+00 0.0e+00
> 0.0e+00 0 25 0 0 0 0 25 0 0 0 3353
> VecAssemblyBegin 21 1.0 4.7891e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 6.0e+01 0 0 0 0 0 0 0 0 0 0 0
> VecAssemblyEnd 21 1.0 4.5776e-05 1.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecPointwiseMult 5398 1.0 1.2278e-01 1.3 2.42e+07 1.2 0.0e+00 0.0e+00
> 0.0e+00 0 1 0 0 0 0 1 0 0 0 698
> VecScatterBegin 8107 1.0 3.9436e-02 1.3 0.00e+00 0.0 2.2e+04 3.4e+03
> 0.0e+00 0 0 99 98 0 0 0 99 98 0 0
> VecScatterEnd 8107 1.0 1.5414e+01344.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 2 0 0 0 0 2 0 0 0 0 0
> VecSetRandom 2 1.0 1.1868e-03 1.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecNormalize 1366 1.0 2.7266e+00 1.6 1.07e+07 1.0 0.0e+00 0.0e+00
> 1.4e+03 1 0 0 0 2 1 0 0 0 2 15
> KSPGMRESOrthog 913 1.0 8.5743e+00 1.1 1.06e+09 1.0 0.0e+00 0.0e+00
> 3.6e+04 3 49 0 0 60 3 49 0 0 60 492
> KSPSetUp 7 1.0 2.4805e-02 1.1 0.00e+00 0.0 0.0e+00 0.0e+00
> 6.0e+00 0 0 0 0 0 0 0 0 0 0 0
> KSPSolve 1 1.0 5.7656e+01 1.0 2.30e+09 1.1 2.2e+04 3.4e+03
> 6.0e+04 18100100100100 18100100100100 151
> PCSetUp 2 1.0 2.1524e+00 1.0 2.03e+07 1.3 3.8e+02 6.0e+03
> 1.1e+03 1 1 2 3 2 1 1 2 3 2 33
> PCSetUpOnBlocks 448 1.0 1.5607e-03 1.8 1.85e+04 0.0 0.0e+00 0.0e+00
> 8.0e+00 0 0 0 0 0 0 0 0 0 0 12
> PCApply 448 1.0 4.4331e+01 1.1 1.08e+09 1.2 1.9e+04 3.4e+03
> 2.3e+04 14 44 86 85 38 14 44 86 85 38 87
> PCGAMGgraph_AGG 2 1.0 6.1282e-01 1.0 3.36e+05 1.2 6.6e+01 5.7e+03
> 1.9e+02 0 0 0 0 0 0 0 0 0 0 2
> PCGAMGcoarse_AGG 2 1.0 8.8854e-01 1.0 1.14e+07 1.3 9.0e+01 1.2e+04
> 2.1e+02 0 0 0 1 0 0 0 0 1 0 44
> PCGAMGProl_AGG 2 1.0 6.3711e-02 1.1 0.00e+00 0.0 4.2e+01 2.3e+03
> 1.0e+02 0 0 0 0 0 0 0 0 0 0 0
> PCGAMGPOpt_AGG 2 1.0 3.4247e-01 1.0 6.22e+06 1.2 9.6e+01 2.8e+03
> 3.3e+02 0 0 0 0 1 0 0 0 0 1 65
>
> ------------------------------------------------------------------------------------------------------------------------
> Memory usage is given in bytes:
> Object Type Creations Destructions Memory Descendants' Mem.
> Reports information only for process 0.
> --- Event Stage 0: Main Stage
> Matrix 68 68 27143956 0
> Matrix Coarsen 2 2 704 0
> Vector 296 296 13385864 0
> Vector Scatter 18 18 11304 0
> Index Set 47 47 34816 0
> Krylov Solver 7 7 554688 0
> Preconditioner 7 7 3896 0
> PetscRandom 2 2 704 0
> Viewer 1 0 0 0
>
> ========================================================================================================================
> Average time to get PetscTime(): 1.3113e-06
> Average time for MPI_Barrier(): 9.62257e-05
> Average time for zero size MPI_Send(): 0.00019449
> #PETSc Option Table entries:
> -ksp_converged_reason
> -ksp_gmres_restart 170
> -ksp_rtol 1.0e-15
> -ksp_type gmres
> -log_summary
> -pc_gamg_agg_nsmooths 1
> -pc_type gamg
> #End of PETSc Option Table entries
> Compiled without FORTRAN kernels
> Compiled with full precision matrices (default)
> sizeof(short) 2 sizeof(int) 4 sizeof(long) 4 sizeof(void*) 4
> sizeof(PetscScalar) 8 sizeof(PetscInt) 4
> Configure run at: Thu Nov 1 05:54:48 2012
> Configure options: --with-mpi-dir=/home/ubu/soft/mpich2/
> --download-f-blas-lapack=1
> -----------------------------------------
> Libraries compiled on Thu Nov 1 05:54:48 2012 on ubuntu
> Machine characteristics:
> Linux-2.6.32-38-generic-i686-with-Ubuntu-10.04-lucid
> Using PETSc directory: /home/ubu/soft/petsc/petsc-3.3-p4
> Using PETSc arch: arch-linux2-c-debug
> -----------------------------------------
> Using C compiler: /home/ubu/soft/mpich2/bin/mpicc -wd1572 -g
> ${COPTFLAGS} ${CFLAGS}
> Using Fortran compiler: /home/ubu/soft/mpich2/bin/mpif90 -g
> ${FOPTFLAGS} ${FFLAGS}
> -----------------------------------------
> Using include paths:
> -I/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/include
> -I/home/ubu/soft/petsc/petsc-3.3-p4/include
> -I/home/ubu/soft/petsc/petsc-3.3-p4/include
> -I/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/include
> -I/home/ubu/soft/mpich2/include
> -----------------------------------------
> Using C linker: /home/ubu/soft/mpich2/bin/mpicc
> Using Fortran linker: /home/ubu/soft/mpich2/bin/mpif90
> Using libraries:
> -Wl,-rpath,/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib
> -L/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -lpetsc
> -lpthread
> -Wl,-rpath,/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib
> -L/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -lflapack
> -lfblas -L/home/ubu/soft/mpich2/lib
> -L/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/ia32
> -L/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/ia32
> -L/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/ia32
> -L/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/ia32/cc4.1.0_libc2.4_kernel2.6.16.21
> -L/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32
> -L/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32
> -L/usr/lib/gcc/i486-linux-gnu/4.4.3 -L/usr/lib/i486-linux-gnu -lmpichf90
> -lifport -lifcore -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lpthread -limf
> -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -ldl
> -----------------------------------------
> time 14.4289010000000
> time 14.4289020000000
> time 14.4449030000000
> time 14.4809050000000
>
>
> (2) using asm
> Linear solve converged due to CONVERGED_RTOL iterations 483
> Norm of error 0.2591E+04 iterations 483
> 0.000000000000000E+000 0.000000000000000E+000 0.000000000000000E+000
> 4.866092420969481E-018 0.000000000000000E+000 0.000000000000000E+000
> 26.4211453778395 -4.861214483821431E-017 5.379151535696287E-018
>
> ************************************************************************************************************************
> *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r
> -fCourier9' to print this document ***
>
> ************************************************************************************************************************
> ---------------------------------------------- PETSc Performance Summary:
> ----------------------------------------------
> ./ex4f on a arch-linux2-c-debug named ubuntu with 4 processors, by ubu Sat
> Nov 3 10:00:43 2012
> Using Petsc Release Version 3.3.0, Patch 4, Fri Oct 26 10:46:51 CDT 2012
> Max Max/Min Avg Total
> Time (sec): 2.952e+02 1.00006 2.952e+02
> Objects: 2.040e+02 1.00000 2.040e+02
> Flops: 1.502e+09 1.00731 1.496e+09 5.983e+09
> Flops/sec: 5.088e+06 1.00734 5.067e+06 2.027e+07
> Memory: 2.036e+07 1.01697 8.073e+07
> MPI Messages: 1.960e+03 2.00000 1.470e+03 5.880e+03
> MPI Message Lengths: 7.738e+06 3.12820 3.474e+03 2.042e+07
> MPI Reductions: 4.236e+04 1.00000
> Flop counting convention: 1 flop = 1 real number operation of type
> (multiply/divide/add/subtract)
> e.g., VecAXPY() for real vectors of length N
> --> 2N flops
> and VecAXPY() for complex vectors of length N
> --> 8N flops
> Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages
> --- -- Message Lengths -- -- Reductions --
> Avg %Total Avg %Total counts
> %Total Avg %Total counts %Total
> 0: Main Stage: 2.9517e+02 100.0% 5.9826e+09 100.0% 5.880e+03
> 100.0% 3.474e+03 100.0% 4.236e+04 100.0%
>
> ------------------------------------------------------------------------------------------------------------------------
> See the 'Profiling' chapter of the users' manual for details on
> interpreting output.
> Phase summary info:
> Count: number of times phase was executed
> Time and Flops: Max - maximum over all processors
> Ratio - ratio of maximum to minimum over all processors
> Mess: number of messages sent
> Avg. len: average message length
> Reduct: number of global reductions
> Global: entire computation
> Stage: stages of a computation. Set stages with PetscLogStagePush() and
> PetscLogStagePop().
> %T - percent time in this phase %f - percent flops in this
> phase
> %M - percent messages in this phase %L - percent message
> lengths in this phase
> %R - percent reductions in this phase
> Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time
> over all processors)
>
> ------------------------------------------------------------------------------------------------------------------------
>
> ##########################################################
> # #
> # WARNING!!! #
> # #
> # This code was compiled with a debugging option, #
> # To get timing results run ./configure #
> # using --with-debugging=no, the performance will #
> # be generally two or three times faster. #
> # #
> ##########################################################
>
> Event Count Time (sec)
> Flops --- Global --- --- Stage --- Total
> Max Ratio Max Ratio Max Ratio Mess Avg len
> Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s
>
> ------------------------------------------------------------------------------------------------------------------------
> --- Event Stage 0: Main Stage
> MatMult 485 1.0 3.1170e+00 5.9 1.36e+08 1.0 2.9e+03 3.4e+03
> 0.0e+00 1 9 49 48 0 1 9 49 48 0 173
> MatSolve 486 1.0 6.8313e-01 1.3 1.49e+08 1.1 0.0e+00 0.0e+00
> 0.0e+00 0 10 0 0 0 0 10 0 0 0 842
> MatLUFactorNum 1 1.0 6.0117e-02 1.2 1.54e+06 1.1 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 99
> MatILUFactorSym 1 1.0 1.5973e-01 2.5 0.00e+00 0.0 0.0e+00 0.0e+00
> 1.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatAssemblyBegin 2 1.0 6.0572e-02 9.5 0.00e+00 0.0 0.0e+00 0.0e+00
> 2.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatAssemblyEnd 2 1.0 1.7764e-02 1.6 0.00e+00 0.0 1.2e+01 8.5e+02
> 1.9e+01 0 0 0 0 0 0 0 0 0 0 0
> MatGetRowIJ 1 1.0 3.8147e-06 1.3 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatGetSubMatrice 1 1.0 1.9829e-01 2.1 0.00e+00 0.0 3.0e+01 2.1e+04
> 1.0e+01 0 0 1 3 0 0 0 1 3 0 0
> MatGetOrdering 1 1.0 1.2739e-02 5.8 0.00e+00 0.0 0.0e+00 0.0e+00
> 4.0e+00 0 0 0 0 0 0 0 0 0 0 0
> MatIncreaseOvrlp 1 1.0 1.8877e-02 1.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 2.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecMDot 483 1.0 4.5957e+00 2.5 5.98e+08 1.0 0.0e+00 0.0e+00
> 4.8e+02 1 40 0 0 1 1 40 0 0 1 521
> VecNorm 487 1.0 2.0843e+00 1.2 7.40e+06 1.0 0.0e+00 0.0e+00
> 4.9e+02 1 0 0 0 1 1 0 0 0 1 14
> VecScale 486 1.0 1.2140e-02 1.1 3.69e+06 1.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 1217
> VecCopy 3 1.0 4.2915e-05 1.2 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecSet 979 1.0 8.1432e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecAXPY 6 1.0 2.3413e-04 1.3 9.12e+04 1.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 1558
> VecMAXPY 486 1.0 7.0027e-01 1.2 6.06e+08 1.0 0.0e+00 0.0e+00
> 0.0e+00 0 41 0 0 0 0 41 0 0 0 3460
> VecAssemblyBegin 1 1.0 5.7101e-03 2.5 0.00e+00 0.0 0.0e+00 0.0e+00
> 3.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecAssemblyEnd 1 1.0 3.0994e-06 1.6 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecScatterBegin 1457 1.0 9.1357e-02 2.6 0.00e+00 0.0 5.8e+03 3.4e+03
> 0.0e+00 0 0 99 97 0 0 0 99 97 0 0
> VecScatterEnd 1457 1.0 2.5327e+00323.7 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> VecNormalize 486 1.0 2.0858e+00 1.2 1.11e+07 1.0 0.0e+00 0.0e+00
> 4.9e+02 1 1 0 0 1 1 1 0 0 1 21
> KSPGMRESOrthog 483 1.0 1.1152e+01 1.2 1.20e+09 1.0 0.0e+00 0.0e+00
> 4.0e+04 3 80 0 0 94 3 80 0 0 94 429
> KSPSetUp 2 1.0 1.1989e-02 1.4 0.00e+00 0.0 0.0e+00 0.0e+00
> 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> KSPSolve 1 1.0 2.0990e+01 1.0 1.50e+09 1.0 5.9e+03 3.5e+03
> 4.2e+04 7100100100100 7100100100100 285
> PCSetUp 2 1.0 3.5928e-01 1.0 1.54e+06 1.1 4.2e+01
> 1.5e+04 3.5e+01 0 0 1 3 0 0 0 1 3 0 17
> PCSetUpOnBlocks 1 1.0 2.2166e-01 1.7 1.54e+06 1.1 0.0e+00 0.0e+00
> 6.0e+00 0 0 0 0 0 0 0 0 0 0 27
> PCApply 486 1.0 5.9831e+00 2.8 1.49e+08 1.1 2.9e+03 3.4e+03
> 1.5e+03 1 10 50 48 3 1 10 50 48 3 96
>
> ------------------------------------------------------------------------------------------------------------------------
> Memory usage is given in bytes:
> Object Type Creations Destructions Memory Descendants' Mem.
> Reports information only for process 0.
> --- Event Stage 0: Main Stage
> Matrix 5 5 8051336 0
> Vector 182 182 11109304 0
> Vector Scatter 2 2 1256 0
> Index Set 10 10 121444 0
> Krylov Solver 2 2 476844 0
> Preconditioner 2 2 1088 0
> Viewer 1 0 0 0
>
> ========================================================================================================================
> Average time to get PetscTime(): 9.05991e-07
> Average time for MPI_Barrier(): 0.000297785
> Average time for zero size MPI_Send(): 0.000174284
> #PETSc Option Table entries:
> -ksp_converged_reason
> -ksp_gmres_restart 170
> -ksp_rtol 1.0e-15
> -ksp_type gmres
> -log_summary
> -pc_type asm
> #End of PETSc Option Table entries
> Compiled without FORTRAN kernels
> Compiled with full precision matrices (default)
> sizeof(short) 2 sizeof(int) 4 sizeof(long) 4 sizeof(void*) 4
> sizeof(PetscScalar) 8 sizeof(PetscInt) 4
> Configure run at: Thu Nov 1 05:54:48 2012
> Configure options: --with-mpi-dir=/home/ubu/soft/mpich2/
> --download-f-blas-lapack=1
> -----------------------------------------
> Libraries compiled on Thu Nov 1 05:54:48 2012 on ubuntu
> Machine characteristics:
> Linux-2.6.32-38-generic-i686-with-Ubuntu-10.04-lucid
> Using PETSc directory: /home/ubu/soft/petsc/petsc-3.3-p4
> Using PETSc arch: arch-linux2-c-debug
> -----------------------------------------
> Using C compiler: /home/ubu/soft/mpich2/bin/mpicc -wd1572 -g
> ${COPTFLAGS} ${CFLAGS}
> Using Fortran compiler: /home/ubu/soft/mpich2/bin/mpif90 -g
> ${FOPTFLAGS} ${FFLAGS}
> -----------------------------------------
> Using include paths:
> -I/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/include
> -I/home/ubu/soft/petsc/petsc-3.3-p4/include
> -I/home/ubu/soft/petsc/petsc-3.3-p4/include
> -I/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/include
> -I/home/ubu/soft/mpich2/include
> -----------------------------------------
> Using C linker: /home/ubu/soft/mpich2/bin/mpicc
> Using Fortran linker: /home/ubu/soft/mpich2/bin/mpif90
> Using libraries:
> -Wl,-rpath,/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib
> -L/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -lpetsc
> -lpthread -Wl,-rpath,/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib
> -L/home/ubu/soft/petsc/petsc-3.3-p4/arch-linux2-c-debug/lib -lflapack
> -lfblas -L/home/ubu/soft/mpich2/lib
> -L/opt/intel/composer_xe_2011_sp1.10.319/compiler/lib/ia32
> -L/opt/intel/composer_xe_2011_sp1.10.319/ipp/lib/ia32
> -L/opt/intel/composer_xe_2011_sp1.10.319/mkl/lib/ia32
> -L/opt/intel/composer_xe_2011_sp1.10.319/tbb/lib/ia32/cc4.1.0_libc2.4_kernel2.6.16.21
> -L/opt/intel/composer_xe_2011_sp1.9.293/compiler/lib/ia32
> -L/opt/intel/composer_xe_2011_sp1.9.293/mkl/lib/ia32
> -L/usr/lib/gcc/i486-linux-gnu/4.4.3 -L/usr/lib/i486-linux-gnu -lmpichf90
> -lifport -lifcore -lm -lm -ldl -lmpich -lopa -lmpl -lrt -lpthread -limf
> -lsvml -lipgo -ldecimal -lcilkrts -lstdc++ -lgcc_s -lirc -lirc_s -ldl
> -----------------------------------------
> time 5.72435800000000
> time 5.40433800000000
> time 5.39233700000000
> time 5.51634499999999
>
>
>
>
> >At 2012-11-03 23:53:42,"Jed Brown" <jedbrown at mcs.anl.gov> wrote:
>
> >Just pass it as a command line option. It gives profiling output in
> PetscFinalize().
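> >For example, appended to the run you are already using (one command line):
> >  mpiexec -n 4 ./ex4f -ksp_type gmres -pc_type gamg -pc_gamg_agg_nsmooths 1 \
> >    -ksp_gmres_restart 170 -ksp_rtol 1.0e-15 -ksp_converged_reason -log_summary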
>
>
> >On Sat, Nov 3, 2012 at 10:52 AM, w_ang_temp <w_ang_temp at 163.com> wrote:
>
>> >Is there something that needs attention when setting up PETSc? The
>> >-log_summary option has no effect on my system.
>>
>>
>> >At 2012-11-03 23:31:52,"Jed Brown" <jedbrown at mcs.anl.gov> wrote:
>>
>> >1. *Always* send -log_summary when asking about performance.
>> >2. AMG setup costs more, the solve should be faster, especially for
>> large problems.
>> >3. 30k degrees of freedom is not large.
>>
>>
>> >>On Sat, Nov 3, 2012 at 10:27 AM, w_ang_temp <w_ang_temp at 163.com> wrote:
>>
>>> >>Hello,
>>> >> I have tried AMG, but there are some problems. I use the command:
>>> >> mpiexec -n 4 ./ex4f -ksp_type gmres -pc_type gamg -pc_gamg_agg_nsmooths 1
>>> >> -ksp_gmres_restart 170 -ksp_rtol 1.0e-15 -ksp_converged_reason.
>>> >> The matrix has a size of 30000. However, compared with -pc_type asm,
>>> >> AMG needs more time: asm needs 4.9 s, AMG needs 13.7 s. I did several
>>> >> tests and got the same conclusion. When it begins, the screen shows the
>>> >> message [0]PCSetData_AGG bs=1 MM=7601, whose meaning I do not know. Are
>>> >> there parameters that affect the performance of AMG?
>>> >> Besides, I want to confirm a concept. In my view, AMG can itself be a
>>> >> solver, like GMRES; it can also be used as a preconditioner, like Jacobi,
>>> >> in combination with another solver. Is that right? If so, how do I use
>>> >> AMG as a solver?
>>> >> My codes are attached.
>>> >> Thanks.
>>> >> Jim
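>>>
>>> A minimal sketch of the two usages mentioned above, assuming the matrix and
>>> vectors are already assembled as in ex4f (PETSc 3.3-era C API; the Fortran
>>> interface mirrors these calls, and the helper name is only illustrative):
>>>
>>> #include <petscksp.h>
>>>
>>> /* Solve A x = b with algebraic multigrid (GAMG), either as a
>>>    preconditioner for GMRES or as the "solver" itself, i.e. one
>>>    multigrid cycle per Richardson iteration. */
>>> PetscErrorCode SolveWithGAMG(Mat A, Vec b, Vec x, PetscBool mg_as_solver)
>>> {
>>>   KSP            ksp;
>>>   PC             pc;
>>>   PetscErrorCode ierr;
>>>
>>>   PetscFunctionBegin;
>>>   ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
>>>   ierr = KSPSetOperators(ksp, A, A, DIFFERENT_NONZERO_PATTERN);CHKERRQ(ierr);
>>>   ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
>>>   ierr = PCSetType(pc, PCGAMG);CHKERRQ(ierr);   /* algebraic multigrid */
>>>   ierr = KSPSetType(ksp, mg_as_solver ? KSPRICHARDSON : KSPGMRES);CHKERRQ(ierr);
>>>   ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* -ksp_type/-pc_type still override */
>>>   ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);
>>>   ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
>>>   PetscFunctionReturn(0);
>>> }
>>>
>>> On the command line the same two setups are -ksp_type gmres -pc_type gamg
>>> and -ksp_type richardson -pc_type gamg, respectively.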
>>>
>>>
>>>
>>> >At 2012-11-01 22:00:28,"Jed Brown" <jedbrown at mcs.anl.gov> wrote:
>>>
>>> >Yes, it's faster to understand this error message than to have
>>> "mysteriously slow performance".
>>>
>>> > Preallocation routines now automatically set
>>> MAT_NEW_NONZERO_ALLOCATION_ERR, if you intentionally preallocate less than
>>> necessary then use
>>> MatSetOption(mat,MAT_NEW_NONZERO_ALLOCATION_ERR,PETSC_FALSE) to disable the
>>> error generation.
>>> >http://www.mcs.anl.gov/petsc/documentation/changes/33.html
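>>>
>>> For example (a sketch; A is whatever matrix your code already creates and
>>> preallocates, and ierr/CHKERRQ follow the usual PETSc error handling):
>>>
>>>   ierr = MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE);CHKERRQ(ierr);
>>>   /* MatSetValues() entries beyond the preallocation now fall back to slow
>>>      dynamic allocation instead of raising an error */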
>>>
>>> >On Thu, Nov 1, 2012 at 8:57 AM, w_ang_temp <w_ang_temp at 163.com> wrote:
>>>
>>>> >Do you mean that the two versions differ on this point? If I use the new
>>>> version, do I have to make some modifications to my code?
>>>>
>>>
>>>
>>>
>>>
>>
>>
>>
>
>
>