<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
Hi,<br>
<br>
I've emailed my school's supercomputing staff, and they told me that the
queue I was using is meant for testing, so its handling of the workload
is poor. I've submitted my job to another queue, and it is now running
on 4 processors. I'm running my own code because there seems to be
something wrong with the server displaying the summary when -log_summary
is used with ex2f.F; I'm trying that again. <br>
<br>
Anyway, comparing just KSPSolve between the two runs, the speedup is
about 2.7. However, I noticed that for the 4-processor run,
MatAssemblyBegin takes 1.5158e+02 s, which is more than KSPSolve's
4.7041e+00 s. Is MatAssemblyBegin's time included in KSPSolve? If not,
does it mean that there is something wrong with my MatAssemblyBegin? (A
rough sketch of the call sequence I'm referring to is included below.) <br>
<br>
Thank you<br>
<br>
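For reference, here is a rough sketch of the assemble-then-solve
sequence that the MatAssemblyBegin/MatAssemblyEnd and KSPSolve events in
the logs below correspond to. This is not my actual code (mine is in
Fortran); it uses PETSc 2.3.x-style C calls, the matrix is just a
placeholder diagonal, and error checking and preallocation are omitted:<br>
<pre>
/* Sketch only: placeholder assembly followed by a solve.            */
/* Sizes, values, and options are illustrative, not my real setup.   */
#include "petscksp.h"

int main(int argc,char **argv)
{
  Mat         A;
  Vec         x,b;
  KSP         ksp;
  PetscInt    i,n = 100,Istart,Iend;
  PetscScalar one = 1.0;

  PetscInitialize(&argc,&argv,PETSC_NULL,PETSC_NULL);

  MatCreate(PETSC_COMM_WORLD,&A);
  MatSetSizes(A,PETSC_DECIDE,PETSC_DECIDE,n,n);
  MatSetFromOptions(A);
  MatGetOwnershipRange(A,&Istart,&Iend);
  for (i=Istart; i < Iend; i++) {
    /* inserting only locally owned entries keeps assembly cheap */
    MatSetValues(A,1,&i,1,&i,&one,INSERT_VALUES);
  }
  /* these two calls show up as the MatAssemblyBegin and  */
  /* MatAssemblyEnd events in the -log_summary tables     */
  MatAssemblyBegin(A,MAT_FINAL_ASSEMBLY);
  MatAssemblyEnd(A,MAT_FINAL_ASSEMBLY);

  VecCreate(PETSC_COMM_WORLD,&b);
  VecSetSizes(b,PETSC_DECIDE,n);
  VecSetFromOptions(b);
  VecDuplicate(b,&x);
  VecSet(b,one);

  KSPCreate(PETSC_COMM_WORLD,&ksp);
  KSPSetOperators(ksp,A,A,DIFFERENT_NONZERO_PATTERN);
  KSPSetFromOptions(ksp);
  KSPSolve(ksp,b,x);     /* the KSPSolve event in the log */

  KSPDestroy(ksp);
  VecDestroy(x);
  VecDestroy(b);
  MatDestroy(A);
  PetscFinalize();
  return 0;
}
</pre>
<br>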
<b>For 1 processor:</b><br>
<br>
************************************************************************************************************************<br>
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r
-fCourier9' to print this document ***<br>
************************************************************************************************************************<br>
<br>
---------------------------------------------- PETSc Performance
Summary: ----------------------------------------------<br>
<br>
./a.out on a atlas3 named atlas3-c28 with 1 processor, by g0306332 Fri
Apr 18 08:46:11 2008<br>
Using Petsc Release Version 2.3.3, Patch 8, Fri Nov 16 17:03:40 CST
2007 HG revision: 414581156e67e55c761739b0deb119f7590d0f4b<br>
<br>
Max Max/Min Avg Total<br>
Time (sec): 1.322e+02 1.00000 1.322e+02<br>
Objects: 2.200e+01 1.00000 2.200e+01<br>
Flops: 2.242e+08 1.00000 2.242e+08 2.242e+08<br>
Flops/sec: 1.696e+06 1.00000 1.696e+06 1.696e+06<br>
MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00<br>
MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00<br>
MPI Reductions: 2.100e+01 1.00000<br>
<br>
Flop counting convention: 1 flop = 1 real number operation of type
(multiply/divide/add/subtract)<br>
e.g., VecAXPY() for real vectors of length
N --> 2N flops<br>
and VecAXPY() for complex vectors of length
N --> 8N flops<br>
<br>
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages
--- -- Message Lengths -- -- Reductions --<br>
Avg %Total Avg %Total counts
%Total Avg %Total counts %Total<br>
0: Main Stage: 1.3217e+02 100.0% 2.2415e+08 100.0% 0.000e+00
0.0% 0.000e+00 0.0% 2.100e+01 100.0%<br>
<br>
------------------------------------------------------------------------------------------------------------------------<br>
See the 'Profiling' chapter of the users' manual for details on
interpreting output.<br>
Phase summary info: <br>
Count: number of times phase was executed<br>
Time and Flops/sec: Max - maximum over all processors<br>
Ratio - ratio of maximum to minimum over all
processors<br>
Mess: number of messages sent<br>
Avg. len: average message length<br>
Reduct: number of global reductions<br>
Global: entire computation<br>
Stage: stages of a computation. Set stages with PetscLogStagePush()
and PetscLogStagePop().<br>
%T - percent time in this phase %F - percent flops in
this phase<br>
%M - percent messages in this phase %L - percent message
lengths in this phase<br>
%R - percent reductions in this phase<br>
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time
over all processors)<br>
------------------------------------------------------------------------------------------------------------------------<br>
<br>
<br>
##########################################################<br>
# #<br>
# WARNING!!! #<br>
# #<br>
# This code was run without the PreLoadBegin() #<br>
# macros. To get timing results we always recommend #<br>
# preloading. otherwise timing numbers may be #<br>
# meaningless. #<br>
##########################################################<br>
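[Side note on the warning above: if I read the manual correctly,
preloading is done from the C interface by wrapping the timed phase in
the PreLoadBegin()/PreLoadEnd() macros, so the phase runs twice and the
second, "warm" pass gives the meaningful timings. A rough sketch,
assuming those 2.3.x macros (the stage name is illustrative, and I am
not sure they are usable from Fortran):]<br>
<pre>
/* sketch only: run the solve phase twice so the timed pass is warm */
PreLoadBegin(PETSC_TRUE,"Solve");
  /* ... assemble the matrix and right-hand side, set up the KSP ... */
  KSPSolve(ksp,b,x);
PreLoadEnd();
</pre>
<br>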
<br>
<br>
<br>
Event Count Time (sec)
Flops/sec --- Global --- --- Stage --- Total<br>
Max Ratio Max Ratio Max Ratio Mess Avg
len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s<br>
------------------------------------------------------------------------------------------------------------------------<br>
<br>
--- Event Stage 0: Main Stage<br>
<br>
MatMult 6 1.0 1.8572e-01 1.0 3.77e+08 1.0 0.0e+00
0.0e+00 0.0e+00 0 31 0 0 0 0 31 0 0 0 377<br>
MatConvert 1 1.0 1.1636e+00 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0<br>
MatAssemblyBegin 1 1.0 2.1458e-06 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
MatAssemblyEnd 1 1.0 8.8531e-02 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
MatGetRow 1296000 1.0 2.6576e-01 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
MatGetRowIJ 1 1.0 4.0531e-06 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
MatZeroEntries 1 1.0 4.4700e-02 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
KSPGMRESOrthog 6 1.0 2.1104e-01 1.0 5.16e+08 1.0 0.0e+00
0.0e+00 6.0e+00 0 49 0 0 29 0 49 0 0 29 516<br>
KSPSetup 1 1.0 6.5601e-02 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
<b>KSPSolve 1 1.0 1.2883e+01 1.0 1.74e+07 1.0 0.0e+00
0.0e+00 1.5e+01 10100 0 0 71 10100 0 0 71 17</b><br>
PCSetUp 1 1.0 4.4342e+00 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 2.0e+00 3 0 0 0 10 3 0 0 0 10 0<br>
PCApply 7 1.0 7.7337e+00 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 6 0 0 0 0 6 0 0 0 0 0<br>
VecMDot 6 1.0 9.8586e-02 1.0 5.52e+08 1.0 0.0e+00
0.0e+00 6.0e+00 0 24 0 0 29 0 24 0 0 29 552<br>
VecNorm 7 1.0 6.9757e-02 1.0 2.60e+08 1.0 0.0e+00
0.0e+00 7.0e+00 0 8 0 0 33 0 8 0 0 33 260<br>
VecScale 7 1.0 2.9803e-02 1.0 3.04e+08 1.0 0.0e+00
0.0e+00 0.0e+00 0 4 0 0 0 0 4 0 0 0 304<br>
VecCopy 1 1.0 6.1009e-03 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
VecSet 9 1.0 3.1438e-02 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
VecAXPY 1 1.0 7.5161e-03 1.0 3.45e+08 1.0 0.0e+00
0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 345<br>
VecMAXPY 7 1.0 1.4444e-01 1.0 4.85e+08 1.0 0.0e+00
0.0e+00 0.0e+00 0 31 0 0 0 0 31 0 0 0 485<br>
VecAssemblyBegin 2 1.0 4.2915e-05 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 6.0e+00 0 0 0 0 29 0 0 0 0 29 0<br>
VecAssemblyEnd 2 1.0 6.9141e-06 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
VecNormalize 7 1.0 9.9603e-02 1.0 2.73e+08 1.0 0.0e+00
0.0e+00 7.0e+00 0 12 0 0 33 0 12 0 0 33 273<br>
------------------------------------------------------------------------------------------------------------------------<br>
<br>
Memory usage is given in bytes:<br>
<br>
Object Type Creations Destructions Memory Descendants'
Mem.<br>
<br>
--- Event Stage 0: Main Stage<br>
<br>
Matrix 1 1 98496004 0<br>
Krylov Solver 1 1 17216 0<br>
Preconditioner 1 1 272 0<br>
Vec 19 19 186638392 0<br>
========================================================================================================================<br>
Average time to get PetscTime(): 9.53674e-08<br>
OptionTable: -log_summary<br>
Compiled without FORTRAN kernels<br>
Compiled with full precision matrices (default)<br>
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8
sizeof(PetscScalar) 8<br>
Configure run at: Wed Jan 9 14:33:02 2008<br>
Configure options: --with-cc=icc --with-fc=ifort --with-x=0
--with-blas-lapack-dir=/opt/intel/cmkl/8.1.1/lib/em64t --with-shared
--with-mpi-dir=/lsftmp/g0306332/mpich2/ --with-debugging=0
--with-hypre-dir=/home/enduser/g0306332/lib/hypre_shared<br>
-----------------------------------------<br>
Libraries compiled on Wed Jan 9 14:33:36 SGT 2008 on atlas3-c01<br>
Machine characteristics: Linux atlas3-c01 2.6.9-42.ELsmp #1 SMP Wed Jul
12 23:32:02 EDT 2006 x86_64 x86_64 x86_64 GNU/Linux<br>
Using PETSc directory: /home/enduser/g0306332/petsc-2.3.3-p8<br>
Using PETSc arch: atlas3<br>
-----------------------------------------<br>
Using C compiler: icc -fPIC -O<br>
<br>
<b>For 4 processors:</b><br>
<br>
************************************************************************************************************************<br>
*** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r
-fCourier9' to print this document ***<br>
************************************************************************************************************************<br>
<br>
---------------------------------------------- PETSc Performance
Summary: ----------------------------------------------<br>
<br>
./a.out on a atlas3-mp named atlas3-c23 with 4 processors, by g0306332
Fri Apr 18 08:22:11 2008<br>
Using Petsc Release Version 2.3.3, Patch 8, Fri Nov 16 17:03:40 CST
2007 HG revision: 414581156e67e55c761739b0deb119f7590d0f4b<br>
<br>
Max Max/Min Avg Total<br>
0.000000000000000E+000 58.1071298622710<br>
0.000000000000000E+000 58.1071298622710<br>
0.000000000000000E+000 58.1071298622710<br>
0.000000000000000E+000 58.1071298622710 <br>
Time (sec): 3.308e+02 1.00177 3.305e+02<br>
Objects: 2.900e+01 1.00000 2.900e+01 <br>
Flops: 5.605e+07 1.00026 5.604e+07 2.242e+08<br>
Flops/sec: 1.697e+05 1.00201 1.695e+05 6.782e+05<br>
MPI Messages: 1.400e+01 2.00000 1.050e+01 4.200e+01<br>
MPI Message Lengths: 1.248e+05 2.00000 8.914e+03 3.744e+05<br>
MPI Reductions: 7.500e+00 1.00000<br>
<br>
Flop counting convention: 1 flop = 1 real number operation of type
(multiply/divide/add/subtract)<br>
e.g., VecAXPY() for real vectors of length
N --> 2N flops<br>
and VecAXPY() for complex vectors of length
N --> 8N flops<br>
<br>
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages
--- -- Message Lengths -- -- Reductions --<br>
Avg %Total Avg %Total counts
%Total Avg %Total counts %Total<br>
0: Main Stage: 3.3051e+02 100.0% 2.2415e+08 100.0% 4.200e+01
100.0% 8.914e+03 100.0% 3.000e+01 100.0%<br>
<br>
------------------------------------------------------------------------------------------------------------------------<br>
See the 'Profiling' chapter of the users' manual for details on
interpreting output.<br>
Phase summary info: <br>
Count: number of times phase was executed<br>
Time and Flops/sec: Max - maximum over all processors<br>
Ratio - ratio of maximum to minimum over all
processors<br>
Mess: number of messages sent<br>
Avg. len: average message length<br>
Reduct: number of global reductions<br>
Global: entire computation<br>
Stage: stages of a computation. Set stages with PetscLogStagePush()
and PetscLogStagePop().<br>
%T - percent time in this phase %F - percent flops in
this phase<br>
%M - percent messages in this phase %L - percent message
lengths in this phase<br>
%R - percent reductions in this phase<br>
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time
over all processors)<br>
------------------------------------------------------------------------------------------------------------------------<br>
<br>
<br>
##########################################################<br>
# #<br>
# WARNING!!! #<br>
# #<br>
# This code was run without the PreLoadBegin() #<br>
# macros. To get timing results we always recommend #<br>
# preloading. otherwise timing numbers may be #<br>
# meaningless. #<br>
##########################################################<br>
<br>
<br>
Event Count Time (sec)
Flops/sec --- Global --- --- Stage --- Total<br>
Max Ratio Max Ratio Max Ratio Mess Avg
len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s<br>
------------------------------------------------------------------------------------------------------------------------<br>
<br>
--- Event Stage 0: Main Stage<br>
<br>
MatMult 6 1.0 8.2640e-02 1.6 3.37e+08 1.6 3.6e+01
9.6e+03 0.0e+00 0 31 86 92 0 0 31 86 92 0 846<br>
MatConvert 1 1.0 2.1472e-01 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
<b>MatAssemblyBegin 1 1.0 1.5158e+02 2254.7 0.00e+00 0.0 0.0e+00
0.0e+00 2.0e+00 22 0 0 0 7 22 0 0 0 7 0</b><br>
MatAssemblyEnd 1 1.0 1.5766e-01 1.1 0.00e+00 0.0 6.0e+00
4.8e+03 7.0e+00 0 0 14 8 23 0 0 14 8 23 0<br>
MatGetRow 324000 1.0 8.9608e-02 1.1 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
MatGetRowIJ 2 1.0 5.9605e-06 2.8 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
MatZeroEntries 1 1.0 5.8902e-02 2.3 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
KSPGMRESOrthog 6 1.0 1.1247e-01 1.7 4.11e+08 1.7 0.0e+00
0.0e+00 6.0e+00 0 49 0 0 20 0 49 0 0 20 968<br>
KSPSetup 1 1.0 1.5483e-02 1.2 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
<b>KSPSolve 1 1.0 4.7041e+00 1.0 1.19e+07 1.0 3.6e+01
9.6e+03 1.5e+01 1100 86 92 50 1100 86 92 50 48</b><br>
PCSetUp 1 1.0 1.5953e+00 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 2.0e+00 0 0 0 0 7 0 0 0 0 7 0<br>
PCApply 7 1.0 2.6580e+00 1.0 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0<br>
VecMDot 6 1.0 7.3443e-02 2.2 4.13e+08 2.2 0.0e+00
0.0e+00 6.0e+00 0 24 0 0 20 0 24 0 0 20 741<br>
VecNorm 7 1.0 2.5193e-01 1.1 1.94e+07 1.1 0.0e+00
0.0e+00 7.0e+00 0 8 0 0 23 0 8 0 0 23 72<br>
VecScale 7 1.0 6.6319e-03 2.8 9.64e+08 2.8 0.0e+00
0.0e+00 0.0e+00 0 4 0 0 0 0 4 0 0 0 1368<br>
VecCopy 1 1.0 2.3100e-03 1.3 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
VecSet 9 1.0 1.4173e-02 1.5 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
VecAXPY 1 1.0 2.9502e-03 1.7 3.72e+08 1.7 0.0e+00
0.0e+00 0.0e+00 0 1 0 0 0 0 1 0 0 0 879<br>
VecMAXPY 7 1.0 4.9046e-02 1.4 5.09e+08 1.4 0.0e+00
0.0e+00 0.0e+00 0 31 0 0 0 0 31 0 0 0 1427<br>
VecAssemblyBegin 2 1.0 4.3297e-04 3.1 0.00e+00 0.0 0.0e+00
0.0e+00 6.0e+00 0 0 0 0 20 0 0 0 0 20 0<br>
VecAssemblyEnd 2 1.0 5.2452e-06 1.4 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
VecScatterBegin 6 1.0 6.9666e-04 6.3 0.00e+00 0.0 3.6e+01
9.6e+03 0.0e+00 0 0 86 92 0 0 0 86 92 0 0<br>
VecScatterEnd 6 1.0 1.4806e-02 102.6 0.00e+00 0.0 0.0e+00
0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
VecNormalize 7 1.0 2.5431e-01 1.1 2.86e+07 1.1 0.0e+00
0.0e+00 7.0e+00 0 12 0 0 23 0 12 0 0 23 107<br>
------------------------------------------------------------------------------------------------------------------------<br>
<br>
Memory usage is given in bytes:<br>
<br>
Object Type Creations Destructions Memory Descendants'
Mem.<br>
<br>
--- Event Stage 0: Main Stage<br>
<br>
Matrix 3 3 49252812 0<br>
Krylov Solver 1 1 17216 0<br>
Preconditioner 1 1 272 0<br>
Index Set 2 2 5488 0<br>
Vec 21 21 49273624 0<br>
Vec Scatter 1 1 0 0<br>
========================================================================================================================<br>
Average time to get PetscTime(): 1.90735e-07<br>
Average time for MPI_Barrier(): 5.62668e-06<br>
Average time for zero size MPI_Send(): 6.73532e-06<br>
OptionTable: -log_summary<br>
Compiled without FORTRAN kernels<br>
Compiled with full precision matrices (default)<br>
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8
sizeof(PetscScalar) 8<br>
Configure run at: Tue Jan 8 22:22:08 2008<br>
<br>
<br>
</body>
</html>