[petsc-dev] [petsc-maint #87339] Re: ex19 on GPU
Shiyuan
gshy2014 at gmail.com
Sun Sep 18 10:29:17 CDT 2011
On Sat, Sep 17, 2011 at 10:48 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> Run the first one with -da_vec_type seqcusp and -da_mat_type seqaijcusp
>
> > VecScatterBegin    2097 1.0 1.0270e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  5  0  0  0  0   7  0  0  0  0     0
> > VecCUSPCopyTo      2140 1.0 2.4991e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1  0  0  0  0   2  0  0  0  0     0
> > VecCUSPCopyFrom    2135 1.0 1.0437e+00 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  5  0  0  0  0   7  0  0  0  0     0
>
> Why is it doing all these vector copy ups and downs? When it is run on one
> process, it shouldn't be doing more than a handful in total.
>
> Barry
>
./ex19 -da_vec_type seqcusp -da_mat_type seqaijcusp -pc_type none \
  -dmmg_nlevels 1 -da_grid_x 100 -da_grid_y 100 -log_summary -mat_no_inode \
  -preload off -cusp_synchronize -cuda_set_device 0 | tee ex19p2.txt
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 4.2393e+00 24.4% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
1: SetUp: 4.9079e-02 0.3% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
2: Solve: 1.3071e+01 75.3% 8.8712e+09 100.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
------------------------------------------------------------------------------------------------------------------------
VecScatterBegin 5 1.0 1.5609e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecReduceArith 2 1.0 3.8650e-03 1.0 1.60e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 41
VecReduceComm 1 1.0 0.0000e+00 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecCUSPCopyTo 49 1.0 3.0950e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecCUSPCopyFrom 44 1.0 2.0876e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
The complete log is attached. Thanks.
-------------- next part --------------
lid velocity = 0.0001, prandtl # = 1, grashof # = 1
Number of SNES iterations = 2
************************************************************************************************************************
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document ***
************************************************************************************************************************
---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
./ex19 on a gpu00CCT- named gpu00.cct.lsu.edu with 1 processor, by sgu Sun Sep 18 10:13:16 2011
Using Petsc Development HG revision: 94fea4d40b1fcca2e886a14e7fdb916b8f6fecf3 HG Date: Sat Sep 17 00:48:29 2011 -0500
Max Max/Min Avg Total
Time (sec): 1.736e+01 1.00000 1.736e+01
Objects: 1.250e+02 1.00000 1.250e+02
Flops: 8.871e+09 1.00000 8.871e+09 8.871e+09
Flops/sec: 5.110e+08 1.00000 5.110e+08 5.110e+08
MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00
MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00
MPI Reductions: 0.000e+00 0.00000
Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
e.g., VecAXPY() for real vectors of length N --> 2N flops
and VecAXPY() for complex vectors of length N --> 8N flops
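[Editor's note: the flop-counting convention above can be sketched with a small check. An AXPY (y <- alpha*x + y) on real vectors of length N does one multiply and one add per entry, i.e. 2N flops; for complex scalars each entry costs 8 real flops (a complex multiply is 4 multiplies + 2 adds, plus 2 adds for the vector addition). The helper name below is illustrative, not a PETSc function.]

```python
def axpy_flops(n, complex_scalars=False):
    # PETSc's convention: 1 flop per real multiply/divide/add/subtract.
    # Real AXPY: N multiplies + N adds = 2N flops.
    # Complex AXPY: 8 real flops per entry = 8N flops.
    return 8 * n if complex_scalars else 2 * n

print(axpy_flops(100))                        # 200
print(axpy_flops(100, complex_scalars=True))  # 800
```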
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
Avg %Total Avg %Total counts %Total Avg %Total counts %Total
0: Main Stage: 4.2393e+00 24.4% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
1: SetUp: 4.9079e-02 0.3% 0.0000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
2: Solve: 1.3071e+01 75.3% 8.8712e+09 100.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
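[Editor's note: each %Total in the stage table is the stage's time divided by the total wall time (1.736e+01 s from the header above). A quick sketch reproducing those percentages from the logged values:]

```python
# Stage percentages = 100 * (stage time) / (total wall time), values from this log.
total_time = 1.736e+01
stages = {
    "Main Stage": 4.2393e+00,
    "SetUp":      4.9079e-02,
    "Solve":      1.3071e+01,
}
percents = {name: round(100.0 * t / total_time, 1) for name, t in stages.items()}
print(percents)  # {'Main Stage': 24.4, 'SetUp': 0.3, 'Solve': 75.3}
```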
------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
Count: number of times phase was executed
Time and Flops: Max - maximum over all processors
Ratio - ratio of maximum to minimum over all processors
Mess: number of messages sent
Avg. len: average message length
Reduct: number of global reductions
Global: entire computation
Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
%T - percent time in this phase %F - percent flops in this phase
%M - percent messages in this phase %L - percent message lengths in this phase
%R - percent reductions in this phase
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
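[Editor's note: the Mflop/s column follows directly from that formula; the legend's "10e-6" corresponds to a factor of 1e-6 (10^-6), as the reported values confirm. A sketch checking two rows of the Solve stage against the logged flop and time values:]

```python
def mflops(flops, time_sec):
    # Mflop/s = 1e-6 * (sum of flops over processors) / (max time over processors).
    return 1e-6 * flops / time_sec

print(round(mflops(8.87e9, 1.3044e+01)))  # SNESSolve: 680, matching the table
print(round(mflops(3.32e9, 1.1950e+00)))  # MatMult: 2778 (table shows 2779; it uses unrounded flop counts)
```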
------------------------------------------------------------------------------------------------------------------------
Event Count Time (sec) Flops --- Global --- --- Stage --- Total
Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------
--- Event Stage 0: Main Stage
PetscBarrier 1 1.0 9.5367e-07 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
--- Event Stage 1: SetUp
MatAssemblyBegin 1 1.0 1.1921e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 1 1.0 1.4529e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 3 0 0 0 0 0
MatFDColorCreate 1 1.0 1.7005e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 35 0 0 0 0 0
--- Event Stage 2: Solve
VecDot 2 1.0 1.7049e-03 1.0 1.60e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 94
VecMDot 2024 1.0 8.6273e+00 1.0 2.54e+09 1.0 0.0e+00 0.0e+00 0.0e+00 50 29 0 0 0 66 29 0 0 0 295
VecNorm 2096 1.0 1.5544e+00 1.0 1.68e+08 1.0 0.0e+00 0.0e+00 0.0e+00 9 2 0 0 0 12 2 0 0 0 108
VecScale 2092 1.0 3.7774e-01 1.0 8.37e+07 1.0 0.0e+00 0.0e+00 0.0e+00 2 1 0 0 0 3 1 0 0 0 222
VecCopy 2072 1.0 3.8258e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 3 0 0 0 0 0
VecSet 70 1.0 1.3119e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecAXPY 108 1.0 4.7407e-02 1.0 8.64e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 182
VecWAXPY 68 1.0 1.2545e-02 1.0 2.72e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 217
VecMAXPY 2092 1.0 6.4464e-01 1.0 2.71e+09 1.0 0.0e+00 0.0e+00 0.0e+00 4 31 0 0 0 5 31 0 0 0 4198
VecScatterBegin 5 1.0 1.5609e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecReduceArith 2 1.0 3.8650e-03 1.0 1.60e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 41
VecReduceComm 1 1.0 0.0000e+00 0.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecCUSPCopyTo 49 1.0 3.0950e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
VecCUSPCopyFrom 44 1.0 2.0876e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
SNESSolve 1 1.0 1.3044e+01 1.0 8.87e+09 1.0 0.0e+00 0.0e+00 0.0e+00 75100 0 0 0 100100 0 0 0 680
SNESLineSearch 2 1.0 1.1921e-02 1.0 5.49e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 461
SNESFunctionEval 3 1.0 2.7192e-03 1.0 2.52e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 927
SNESJacobianEval 2 1.0 2.0424e-01 1.0 3.85e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 2 0 0 0 0 188
KSPGMRESOrthog 2024 1.0 9.2522e+00 1.0 5.09e+09 1.0 0.0e+00 0.0e+00 0.0e+00 53 57 0 0 0 71 57 0 0 0 550
KSPSetup 2 1.0 5.1975e-05 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
KSPSolve 2 1.0 1.2819e+01 1.0 8.83e+09 1.0 0.0e+00 0.0e+00 0.0e+00 74 99 0 0 0 98 99 0 0 0 689
PCSetUp 2 1.0 9.5367e-07 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
PCApply 2024 1.0 3.8054e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 2 0 0 0 0 3 0 0 0 0 0
MatMult 2092 1.0 1.1950e+00 1.0 3.32e+09 1.0 0.0e+00 0.0e+00 0.0e+00 7 37 0 0 0 9 37 0 0 0 2779
MatAssemblyBegin 2 1.0 1.9073e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatAssemblyEnd 2 1.0 2.6691e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatZeroEntries 2 1.0 1.9610e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
MatFDColorApply 2 1.0 2.0417e-01 1.0 3.85e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 2 0 0 0 0 188
MatFDColorFunc 42 1.0 1.2692e-02 1.0 3.53e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2780
MatCUSPCopyTo 2 1.0 1.5537e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
------------------------------------------------------------------------------------------------------------------------
Memory usage is given in bytes:
Object Type Creations Destructions Memory Descendants' Mem.
Reports information only for process 0.
--- Event Stage 0: Main Stage
--- Event Stage 1: SetUp
Distributed Mesh 1 0 0 0
Vector 9 2 2928 0
Vector Scatter 3 0 0 0
Index Set 27 7 45136 0
IS L to G Mapping 3 0 0 0
SNES 1 0 0 0
Krylov Solver 2 1 1064 0
Preconditioner 2 1 752 0
Matrix 1 0 0 0
Matrix FD Coloring 1 0 0 0
--- Event Stage 2: Solve
Distributed Mesh 0 1 204840 0
Vector 74 81 3636056 0
Vector Scatter 0 3 1836 0
Index Set 0 20 174720 0
IS L to G Mapping 0 3 161668 0
SNES 0 1 1288 0
Krylov Solver 0 1 18864 0
Preconditioner 0 1 952 0
Matrix 0 1 10165692 0
Matrix FD Coloring 0 1 708 0
Viewer 1 0 0 0
========================================================================================================================
Average time to get PetscTime(): 0
#PETSc Option Table entries:
-cuda_set_device 0
-cusp_synchronize
-da_grid_x 100
-da_grid_y 100
-da_mat_type seqaijcusp
-da_vec_type seqcusp
-dmmg_nlevels 1
-log_summary
-mat_no_inode
-pc_type none
-preload off
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8
Configure run at: Sat Sep 17 11:25:49 2011
Configure options: PETSC_DIR=/home/sgu/softwares/petsc-dev PETSC_ARCH=gpu00CCT-cxx-nompi-release -with-clanguage=cxx --with-mpi=0 --download-f2cblaslapack=1 --download-f-blas-lapack=1 --with-debugging=0 --with-c2html=0 --with-valgrind-dir=~/softwares/valgrind --with-cuda=1 --with-cusp=1 --with-thrust=1 --with-cuda-arch=sm_20
-----------------------------------------
Libraries compiled on Sat Sep 17 11:25:49 2011 on gpu00.cct.lsu.edu
Machine characteristics: Linux-2.6.32-131.6.1.el6.x86_64-x86_64-with-redhat-6.1-Santiago
Using PETSc directory: /home/sgu/softwares/petsc-dev
Using PETSc arch: gpu00CCT-cxx-nompi-release
-----------------------------------------
Using C compiler: g++ -Wall -Wwrite-strings -Wno-strict-aliasing -Wno-unknown-pragmas -O ${COPTFLAGS} ${CFLAGS}
-----------------------------------------
Using include paths: -I/home/sgu/softwares/petsc-dev/gpu00CCT-cxx-nompi-release/include -I/home/sgu/softwares/petsc-dev/include -I/home/sgu/softwares/petsc-dev/include -I/home/sgu/softwares/petsc-dev/gpu00CCT-cxx-nompi-release/include -I/home/sgu/softwares/valgrind/include -I/usr/local/cuda/include -I/home/sgu/softwares/petsc-dev/include/mpiuni
-----------------------------------------
Using C linker: g++
Using libraries: -Wl,-rpath,/home/sgu/softwares/petsc-dev/gpu00CCT-cxx-nompi-release/lib -L/home/sgu/softwares/petsc-dev/gpu00CCT-cxx-nompi-release/lib -lpetsc -lX11 -lpthread -Wl,-rpath,/usr/local/cuda/lib64 -L/usr/local/cuda/lib64 -lcufft -lcublas -lcudart -Wl,-rpath,/home/sgu/softwares/petsc-dev/gpu00CCT-cxx-nompi-release/lib -L/home/sgu/softwares/petsc-dev/gpu00CCT-cxx-nompi-release/lib -lf2clapack -lf2cblas -lm -lm -lstdc++ -ldl
-----------------------------------------