proc:0 (i,j,k)=(0,0,0) : (0,1,2,3,4,5,6,7)=(0,1,45,46,2025,2026,2070,2071)
proc:1 (i,j,k)=(0,0,44) : (0,1,2,3,4,5,6,7)=(89100,89101,89145,89146,0,1,45,46)
proc:0 (i,j,k)=(0,44,0) : (0,1,2,3,4,5,6,7)=(1980,1981,0,1,4005,4006,2025,2026)
proc:1 (i,j,k)=(0,44,44) : (0,1,2,3,4,5,6,7)=(91080,91081,89100,89101,1980,1981,0,1)
proc:1 (i,j,k)=(44,0,44) : (0,1,2,3,4,5,6,7)=(89144,89100,89189,89145,44,0,89,45)
proc:1 (i,j,k)=(44,44,44) : (0,1,2,3,4,5,6,7)=(91124,91080,89144,89100,2024,1980,44,0)
proc:0 (i,j,k)=(44,0,0) : (0,1,2,3,4,5,6,7)=(44,0,89,45,2069,2025,2114,2070)
proc:0 (i,j,k)=(44,44,0) : (0,1,2,3,4,5,6,7)=(2024,1980,44,0,4049,4005,2069,2025)

Iteration 4: ASM-JACOBI
RAMmonitor: KSP_Converged(): Linear solver has converged. Residual norm 6.219287e-06 is less than absolute tolerance 1.000000e-05 at Iteration 729
KSP Object:
  type: cgs
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object:
  type: asm
    Additive Schwarz: total subdomain blocks = 2, amount of overlap = 1
    Additive Schwarz: restriction/interpolation type - RESTRICT
    [0] number of local blocks = 1
    [1] number of local blocks = 1
    Local solve info for each block is in the following KSP and PC objects:
    - - - - - - - - - - - - - - - - - -
    [0] local block number 0, size = 50625
    KSP Object:(sub_)
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object:(sub_)
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object:
        type=seqaij, rows=50625, cols=50625
        total: nonzeros=1330425, allocated nonzeros=1330425
          not using I-node routines
    - - - - - - - - - - - - - - - - - -
    [1] local block number 0, size = 48600
    KSP Object:(sub_)
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object:(sub_)
      type: jacobi
      linear system matrix = precond matrix:
      Matrix Object:
        type=seqaij, rows=48600, cols=48600
        total: nonzeros=1275750, allocated nonzeros=1275750
          not using I-node routines
    - - - - - - - - - - - - - - - - - -
  linear system matrix = precond matrix:
  Matrix Object:
    type=mpiaij, rows=91125, cols=91125
    total: nonzeros=2460375, allocated nonzeros=2460375
      not using I-node (on process 0) routines
Residual Norm: 0.000006219287442364

Iteration 5: ASM-SOR
RAMmonitor: KSP_Converged(): Linear solver has converged. Residual norm 2.700407e-06 is less than absolute tolerance 1.000000e-05 at Iteration 468
KSP Object:
  type: cgs
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object:
  type: asm
    Additive Schwarz: total subdomain blocks = 2, amount of overlap = 1
    Additive Schwarz: restriction/interpolation type - RESTRICT
    [0] number of local blocks = 1
    [1] number of local blocks = 1
    Local solve info for each block is in the following KSP and PC objects:
    - - - - - - - - - - - - - - - - - -
    [0] local block number 0, size = 50625
    KSP Object:(sub_)
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object:(sub_)
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:
        type=seqaij, rows=50625, cols=50625
        total: nonzeros=1330425, allocated nonzeros=1330425
          not using I-node routines
    - - - - - - - - - - - - - - - - - -
    [1] local block number 0, size = 48600
    KSP Object:(sub_)
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object:(sub_)
      type: sor
        SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
      linear system matrix = precond matrix:
      Matrix Object:
        type=seqaij, rows=48600, cols=48600
        total: nonzeros=1275750, allocated nonzeros=1275750
          not using I-node routines
    - - - - - - - - - - - - - - - - - -
  linear system matrix = precond matrix:
  Matrix Object:
    type=mpiaij, rows=91125, cols=91125
    total: nonzeros=2460375, allocated nonzeros=2460375
      not using I-node (on process 0) routines
Residual Norm: 0.000002700406533837

Iteration 6: ASM-ILU
RAMmonitor: KSP_Converged(): Linear solver has converged. Residual norm 9.457644e-06 is less than absolute tolerance 1.000000e-05 at Iteration 158
KSP Object:
  type: cgs
  maximum iterations=10000, initial guess is zero
  tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
  left preconditioning
  using PRECONDITIONED norm type for convergence test
PC Object:
  type: asm
    Additive Schwarz: total subdomain blocks = 2, amount of overlap = 1
    Additive Schwarz: restriction/interpolation type - RESTRICT
    [0] number of local blocks = 1
    [1] number of local blocks = 1
    Local solve info for each block is in the following KSP and PC objects:
    - - - - - - - - - - - - - - - - - -
    [0] local block number 0, size = 50625
    KSP Object:(sub_)
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object:(sub_)
      type: ilu
        ILU: out-of-place factorization
          0 levels of fill
          tolerance for zero pivot 1e-12
          using diagonal shift to prevent zero pivot
          matrix ordering: natural
          factor fill ratio given 1, needed 1
            Factored matrix follows:
              Matrix Object:
                type=seqaij, rows=50625, cols=50625
                package used to perform factorization: petsc
                total: nonzeros=1330425, allocated nonzeros=1330425
                  not using I-node routines
      linear system matrix = precond matrix:
      Matrix Object:
        type=seqaij, rows=50625, cols=50625
        total: nonzeros=1330425, allocated nonzeros=1330425
          not using I-node routines
    - - - - - - - - - - - - - - - - - -
    [1] local block number 0, size = 48600
    KSP Object:(sub_)
      type: preonly
      maximum iterations=10000, initial guess is zero
      tolerances:  relative=1e-05, absolute=1e-50, divergence=10000
      left preconditioning
      using PRECONDITIONED norm type for convergence test
    PC Object:(sub_)
      type: ilu
        ILU: out-of-place factorization
          0 levels of fill
          tolerance for zero pivot 1e-12
          using diagonal shift to prevent zero pivot
          matrix ordering: natural
          factor fill ratio given 1, needed 1
            Factored matrix follows:
              Matrix Object:
                type=seqaij, rows=48600, cols=48600
                package used to perform factorization: petsc
                total: nonzeros=1275750, allocated nonzeros=1275750
                  not using I-node routines
      linear system matrix = precond matrix:
      Matrix Object:
        type=seqaij, rows=48600, cols=48600
        total: nonzeros=1275750, allocated nonzeros=1275750
          not using I-node routines
    - - - - - - - - - - - - - - - - - -
  linear system matrix = precond matrix:
  Matrix Object:
    type=mpiaij, rows=91125, cols=91125
    total: nonzeros=2460375, allocated nonzeros=2460375
      not using I-node (on process 0) routines
Residual Norm: 0.000009457643960567

************************************************************************************************************************
***        WIDEN YOUR WINDOW TO 120 CHARACTERS.  Use 'enscript -r -fCourier9' to print this document                 ***
************************************************************************************************************************

---------------------------------------------- PETSc Performance Summary: ----------------------------------------------

./RAM_Main on a linux-gnu named swetaketo-pc with 2 processors, by swetaketo Mon Aug 22 15:07:33 2011
Using Petsc Release Version 3.1.0, Patch 8, Thu Mar 17 13:37:48 CDT 2011

                         Max       Max/Min        Avg      Total
Time (sec):           1.022e+02      1.00004   1.022e+02
Objects:              2.773e+03      1.00000   2.773e+03
Flops:                1.485e+10      1.04490   1.453e+10  2.907e+10
Flops/sec:            1.453e+08      1.04486   1.422e+08  2.843e+08
Memory:               5.696e+08      1.04499              1.115e+09
MPI Messages:         6.866e+03      1.00000   6.866e+03  1.373e+04
MPI Message Lengths:  2.225e+08      1.00000   3.241e+04  4.450e+08
MPI Reductions:       1.367e+04      1.00000

Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
                            e.g., VecAXPY() for real vectors of length N --> 2N flops
                            and VecAXPY() for complex vectors of length N --> 8N flops

Summary of Stages:   ----- Time ------  ----- Flops -----  --- Messages ---  -- Message Lengths --  -- Reductions --
                        Avg     %Total     Avg     %Total   counts   %Total     Avg         %Total   counts   %Total
 0:      Main Stage: 9.1545e-01   0.9%  2.0139e+07   0.1%  3.700e+01   0.3%  3.730e+02        1.2%  5.800e+01   0.4%
 1:     Iteration 4: 3.4930e+01  34.2%  1.2174e+10  41.9%  7.330e+03  53.4%  1.722e+04       53.1%  6.572e+03  48.1%
 2:     Iteration 5: 5.0111e+01  49.0%  1.2608e+10  43.4%  4.722e+03  34.4%  1.106e+04       34.1%  4.220e+03  30.9%
 3:     Iteration 6: 1.6267e+01  15.9%  4.2649e+09  14.7%  1.644e+03  12.0%  3.754e+03       11.6%  1.435e+03  10.5%

------------------------------------------------------------------------------------------------------------------------
See the 'Profiling' chapter of the users' manual for details on interpreting output.
Phase summary info:
   Count: number of times phase was executed
   Time and Flops: Max - maximum over all processors
                   Ratio - ratio of maximum to minimum over all processors
   Mess: number of messages sent
   Avg. len: average message length
   Reduct: number of global reductions
   Global: entire computation
   Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
      %T - percent time in this phase         %F - percent flops in this phase
      %M - percent messages in this phase     %L - percent message lengths in this phase
      %R - percent reductions in this phase
   Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
------------------------------------------------------------------------------------------------------------------------


      ##########################################################
      #                                                        #
      #                      WARNING!!!                        #
      #                                                        #
      #   This code was compiled with a debugging option,      #
      #   To get timing results run config/configure.py        #
      #   using --with-debugging=no, the performance will      #
      #   be generally two or three times faster.              #
      #                                                        #
      ##########################################################

Event                Count      Time (sec)     Flops                             --- Global ---  --- Stage ---   Total
                   Max Ratio  Max     Ratio   Max  Ratio  Mess   Avg len Reduct  %T %F %M %L %R  %T %F %M %L %R Mflop/s
------------------------------------------------------------------------------------------------------------------------

--- Event Stage 0: Main Stage

VecNorm                3 1.0 2.1772e-02 11.4 2.79e+05 1.0 0.0e+00 0.0e+00 3.0e+00  0 0 0 0 0   1 3 0 0 5     25
VecCopy                4 1.0 3.7718e-04  1.4 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0   0 0 0 0 0      0
VecSet                 3 1.0 2.1839e-04  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0   0 0 0 0 0      0
VecWAXPY               3 1.0 6.9022e-04  1.1 1.40e+05 1.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0   0 1 0 0 0    396
VecScatterBegin        4 1.0 2.2364e-04  1.1 0.00e+00 0.0 8.0e+00 3.2e+04 0.0e+00  0 0 0 0 0   0 0 22 5 0     0
VecScatterEnd          4 1.0 1.8768e-03  6.9 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0   0 0 0 0 0      0
VecSetRandom           1 1.0 2.3961e-03  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0   0 0 0 0 0      0
MatMult                4 1.0 6.0907e-02  1.5 9.87e+06 1.0 8.0e+00 3.2e+04 0.0e+00  0 0 0 0 0   6 96 22 5 0   317
MatAssemblyBegin       3 1.0 9.5207e-02 10.4 0.00e+00 0.0 6.0e+00 3.5e+05 4.0e+00  0 0 0 0 0   6 0 16 40 7    0
MatAssemblyEnd         3 1.0 4.2151e-02  1.3 0.00e+00 0.0 4.0e+00 8.1e+03 1.1e+01  0 0 0 0 0   4 0 11 1 19    0
MatGetSubMatrice       1 1.0 1.1178e-01  1.1 0.00e+00 0.0 1.0e+01 2.7e+05 5.0e+00  0 0 0 1 0  12 0 27 53 9    0
MatIncreaseOvrlp       1 1.0 1.6910e-02  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0 0 0 0 0   2 0 0 0 3      0
KSPSetup               1 1.0 3.2051e-03  1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0   0 0 0 0 0      0
PCSetUp                1 1.0 1.4235e-01  1.1 0.00e+00 0.0 1.4e+01 1.9e+05 1.5e+01  0 0 0 1 0  15 0 38 53 26   0

--- Event Stage 1: Iteration 4

VecDot              1459 1.0 1.6259e+00  1.1 1.36e+08 1.0 0.0e+00 0.0e+00 1.5e+03  2 1 0 0 11   4 2 0 0 22   164
VecNorm             1460 1.0 2.4882e+00  1.1 1.36e+08 1.0 0.0e+00 0.0e+00 1.5e+03  2 1 0 0 11   7 2 0 0 22   107
VecCopy              734 1.0 1.8566e-01  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    1 0 0 0 0      0
VecSet              2920 1.0 3.7168e-01  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    1 0 0 0 0      0
VecAXPY             2186 1.0 4.9815e-01  1.1 2.04e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0 1 0 0 0    1 3 0 0 0    800
VecAYPX              730 1.0 1.7240e-01  1.0 3.40e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 1 0 0 0    386
VecWAXPY            2914 1.0 8.6392e-01  1.0 2.37e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1 2 0 0 0    2 4 0 0 0    538
VecPointwiseMult    1459 1.0 4.2773e-01  1.0 7.39e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    1 1 0 0 0    338
VecScatterBegin     5106 1.0 9.0816e-01  1.0 0.00e+00 0.0 7.3e+03 3.2e+04 0.0e+00  1 0 53 53 0   3 0 100 100 0  0
VecScatterEnd       5106 1.0 8.4286e-01  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1 0 0 0 0    2 0 0 0 0      0
MatMult             2188 1.0 2.3075e+01  1.0 5.40e+09 1.0 4.4e+03 3.2e+04 0.0e+00 22 36 32 32 0  65 87 60 60 0 458
MatView                2 1.0 1.0586e-04  1.7 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0 0 0 0 0    0 0 0 0 0      0
KSPSetup               1 1.0 1.9073e-06  2.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 0 0 0 0      0
KSPSolve               1 1.0 3.4926e+01  1.0 6.22e+09 1.0 7.3e+03 3.2e+04 6.6e+03 34 42 53 53 48 100 100 100 100 100 349
PCSetUp                1 1.0 2.1458e-06  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 0 0 0 0      0
PCSetUpOnBlocks        1 1.0 5.9605e-06  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 0 0 0 0      0
PCApply             1459 1.0 5.3824e+00  1.1 7.39e+07 1.0 2.9e+03 3.2e+04 2.9e+03  5 0 21 21 21  15 1 40 40 44  27

--- Event Stage 2: Iteration 5

VecDot               937 1.0 2.8160e+00  1.1 8.73e+07 1.0 0.0e+00 0.0e+00 9.4e+02  3 1 0 0 7    5 1 0 0 22    61
VecNorm              938 1.0 1.6711e+00  1.0 8.74e+07 1.0 0.0e+00 0.0e+00 9.4e+02  2 1 0 0 7    3 1 0 0 22   102
VecCopy              473 1.0 9.2594e-02  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 0 0 0 0      0
VecSet              1875 1.0 2.7216e-01  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    1 0 0 0 0      0
VecAXPY             1403 1.0 3.2561e-01  1.2 1.31e+08 1.0 0.0e+00 0.0e+00 0.0e+00  0 1 0 0 0    1 2 0 0 0    785
VecAYPX              469 1.0 1.2547e-01  1.1 2.18e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 0 0 0 0    341
VecWAXPY            1870 1.0 5.4331e-01  1.1 1.52e+08 1.0 0.0e+00 0.0e+00 0.0e+00  1 1 0 0 0    1 2 0 0 0    549
VecScatterBegin     3279 1.0 6.9980e-01  1.0 0.00e+00 0.0 4.7e+03 3.2e+04 0.0e+00  1 0 34 34 0   1 0 99 100 0   0
VecScatterEnd       3279 1.0 5.5488e-01  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  1 0 0 0 0    1 0 0 0 0      0
MatMult             1405 1.0 1.8448e+01  1.0 3.47e+09 1.0 2.8e+03 3.2e+04 0.0e+00 18 23 20 20 0  36 54 60 60 0 368
MatSOR               937 1.0 1.8491e+01  1.0 2.49e+09 1.0 0.0e+00 0.0e+00 0.0e+00 18 17 0 0 0   37 39 0 0 0   264
MatView                2 1.0 9.2030e-05  1.8 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0 0 0 0 0    0 0 0 0 0      0
KSPSolve               1 1.0 5.0108e+01  1.0 6.44e+09 1.0 4.7e+03 3.2e+04 4.2e+03 49 43 34 34 31 100 100 99 100 100 252
PCSetUp                1 1.0 9.5367e-07  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 0 0 0 0      0
PCSetUpOnBlocks        1 1.0 1.1921e-06  1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 0 0 0 0      0
PCApply              937 1.0 2.5203e+01  1.0 2.49e+09 1.0 1.9e+03 3.2e+04 1.9e+03 25 17 14 14 14  50 39 40 40 44 194

--- Event Stage 3: Iteration 6

VecDot               317 1.0 7.3261e-01  1.2 2.95e+07 1.0 0.0e+00 0.0e+00 3.2e+02  1 0 0 0 2    4 1 0 0 22    79
VecNorm              318 1.0 6.4263e-01  1.3 2.96e+07 1.0 0.0e+00 0.0e+00 3.2e+02  1 0 0 0 2    3 1 0 0 22    90
VecCopy              163 1.0 3.3268e-02  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 0 0 0 0      0
VecSet               635 1.0 1.0968e-01  1.3 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    1 0 0 0 0      0
VecAXPY              473 1.0 1.0985e-01  1.1 4.41e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    1 2 0 0 0    785
VecAYPX              159 1.0 3.5006e-02  1.0 7.41e+06 1.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 0 0 0 0    414
VecWAXPY             630 1.0 1.7812e-01  1.0 5.13e+07 1.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    1 2 0 0 0    564
VecScatterBegin     1109 1.0 2.5543e-01  1.0 0.00e+00 0.0 1.6e+03 3.2e+04 0.0e+00  0 0 12 12 0   2 0 96 100 0   0
VecScatterEnd       1109 1.0 1.2567e-01  1.1 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    1 0 0 0 0      0
MatMult              475 1.0 7.0416e+00  1.0 1.17e+09 1.0 9.5e+02 3.2e+04 0.0e+00  7 8 7 7 0   43 54 58 60 0  326
MatSolve             317 1.0 3.0929e+00  1.1 8.27e+08 1.0 0.0e+00 0.0e+00 0.0e+00  3 6 0 0 0   18 38 0 0 0   524
MatLUFactorNum         1 1.0 1.3480e-01  1.5 1.72e+07 1.1 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    1 1 0 0 0    246
MatILUFactorSym        1 1.0 4.3064e-02  1.4 0.00e+00 0.0 0.0e+00 0.0e+00 3.0e+00  0 0 0 0 0    0 0 0 0 0      0
MatGetRowIJ            1 1.0 1.1921e-06  1.2 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 0 0 0 0      0
MatGetOrdering         1 1.0 2.0192e-02  1.3 0.00e+00 0.0 0.0e+00 0.0e+00 2.0e+00  0 0 0 0 0    0 0 0 0 0      0
MatView                3 1.0 2.7862e-03  2.2 0.00e+00 0.0 0.0e+00 0.0e+00 1.0e+00  0 0 0 0 0    0 0 0 0 0      0
KSPSolve               1 1.0 1.6252e+01  1.0 2.18e+09 1.0 1.6e+03 3.2e+04 1.4e+03 16 15 12 12 10 100 100 96 100 100 262
PCSetUp                1 1.0 1.9785e-01  1.3 1.72e+07 1.1 0.0e+00 0.0e+00 5.0e+00  0 0 0 0 0    1 1 0 0 0    168
PCSetUpOnBlocks        1 1.0 9.5367e-07  1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00  0 0 0 0 0    0 0 0 0 0      0
PCApply              317 1.0 7.2715e+00  1.0 8.45e+08 1.0 6.3e+02 3.2e+04 6.4e+02  7 6 5 5 5   44 39 39 40 45 227
------------------------------------------------------------------------------------------------------------------------

Memory usage is given in bytes:

Object Type          Creations   Destructions     Memory  Descendants' Mem.

Reports information only for process 0.
--- Event Stage 0: Main Stage

   Application Order     1              0            0     0
   Distributed array     1              0            0     0
                 Vec    20             20      6117760     0
         Vec Scatter     4              2         1736     0
           Index Set    11             14      1426956     0
   IS L to G Mapping     1              0            0     0
              Matrix     4              5     51373132     0
       Krylov Solver     2              2         1664     0
      Preconditioner     2              2         1760     0
         PetscRandom     1              1          448     0
              Viewer     0              1          544     0

--- Event Stage 1: Iteration 4

                 Vec  1462            730    272979120     0
              Viewer     2              1          544     0

--- Event Stage 2: Iteration 5

                 Vec   938            469    175379736     0
              Viewer     1              1          544     0

--- Event Stage 3: Iteration 6

                 Vec   318            159     59457096     0
           Index Set     3              0            0     0
              Matrix     1              0            0     0
              Viewer     1              1          544     0
========================================================================================================================
Average time to get PetscTime(): 5.00679e-07
Average time for MPI_Barrier(): 0.000738001
Average time for zero size MPI_Send(): 4.79221e-05
#PETSc Option Table entries:
-ksp_type cgs
-ksp_view
-log_summary
#End of PETSc Option Table entries
Compiled without FORTRAN kernels
Compiled with full precision matrices (default)
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8 sizeof(PetscScalar) 8
Configure run at: Sun Aug 14 11:19:34 2011
Configure options: --download-mpich --download-c-blas-lapack=1
-----------------------------------------
Libraries compiled on
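The three runs in this log differ only in the subdomain solver of the additive Schwarz preconditioner (Jacobi: 729 iterations, SOR: 468, ILU(0): 158), so the interesting numbers are the per-configuration convergence lines. Below is a minimal stdlib-Python sketch for pulling those pairs out of a log in this format; the helper name `summarize` and both regular expressions are our own, inferred from the "Iteration N: <label>" and "RAMmonitor: KSP_Converged(): ..." lines above, not part of PETSc or RAM_Main.

```python
import re

# Matches the convergence line emitted by the monitor, e.g.
# "... Residual norm 6.219287e-06 is less than absolute tolerance
#  1.000000e-05 at Iteration 729"
CONV_RE = re.compile(
    r"Residual norm (?P<norm>[\d.eE+-]+) is less than absolute tolerance "
    r"[\d.eE+-]+ at Iteration (?P<its>\d+)"
)
# Matches the configuration label line, e.g. "Iteration 4: ASM-JACOBI"
LABEL_RE = re.compile(r"Iteration \d+: (?P<label>[A-Z-]+)")

def summarize(log_text):
    """Return [(preconditioner label, Krylov iterations, residual norm), ...]."""
    results, label = [], None
    for line in log_text.splitlines():
        m = LABEL_RE.search(line)
        if m:
            label = m.group("label")
        m = CONV_RE.search(line)
        if m:
            results.append((label, int(m.group("its")), float(m.group("norm"))))
    return results

# Two of the convergence lines from the log above, as a smoke test.
sample = """Iteration 4: ASM-JACOBI
RAMmonitor: KSP_Converged(): Linear solver has converged. Residual norm 6.219287e-06 is less than absolute tolerance 1.000000e-05 at Iteration 729
Iteration 6: ASM-ILU
RAMmonitor: KSP_Converged(): Linear solver has converged. Residual norm 9.457644e-06 is less than absolute tolerance 1.000000e-05 at Iteration 158"""

print(summarize(sample))
```

Fed the full log, this yields one tuple per configuration, making the Jacobi/SOR/ILU comparison a one-liner instead of a scroll through the `-ksp_view` output.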