<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Mon, Sep 29, 2014 at 8:42 AM, Filippo Leonardi <span dir="ltr"><<a href="mailto:filippo.leonardi@sam.math.ethz.ch" target="_blank">filippo.leonardi@sam.math.ethz.ch</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hi,<br>
<br>
I am trying to solve a standard second-order central-differenced Poisson<br>
equation in parallel, in 3D, using a 3D structured DMDA (an extremely standard<br>
Laplacian matrix).<br>
<br>
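For concreteness, a minimal sketch of the kind of setup I mean is below. It is illustrative<br>
only (not my production code): it assembles the 7-point stencil on a DMDA with the boundary<br>
nodes eliminated, so the operator stays SPD, and it is written against a recent PETSc API<br>
(a few signatures differ slightly in 3.3). Error checking and the 1/h^2 scaling are omitted.<br>
<br>
#include "petscksp.h"<br>
#include "petscdmda.h"<br>
<br>
int main(int argc, char **argv)<br>
{<br>
  const PetscInt N = 32;                     /* grid points per direction (the 32^3 case) */<br>
  DM             da;<br>
  Mat            A;<br>
  Vec            x, b;<br>
  KSP            ksp;<br>
  PetscInt       i, j, k, xs, ys, zs, xm, ym, zm;<br>
<br>
  PetscInitialize(&argc, &argv, NULL, NULL);<br>
  DMDACreate3d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE,<br>
               DMDA_STENCIL_STAR, N, N, N, PETSC_DECIDE, PETSC_DECIDE, PETSC_DECIDE,<br>
               1, 1, NULL, NULL, NULL, &da);<br>
  DMSetFromOptions(da);<br>
  DMSetUp(da);<br>
<br>
  /* Assemble the 7-point Laplacian; neighbours dropped at the faces of the index box<br>
     correspond to eliminated homogeneous Dirichlet boundary nodes, so A stays SPD. */<br>
  DMCreateMatrix(da, &A);<br>
  DMDAGetCorners(da, &xs, &ys, &zs, &xm, &ym, &zm);<br>
  for (k = zs; k < zs + zm; k++)<br>
    for (j = ys; j < ys + ym; j++)<br>
      for (i = xs; i < xs + xm; i++) {<br>
        MatStencil  row = {0}, col[7];<br>
        PetscScalar v[7];<br>
        PetscInt    n = 0;<br>
        row.i = i; row.j = j; row.k = k;<br>
        col[n] = row; v[n] = 6.0; n++;       /* diagonal */<br>
        if (i > 0)     { col[n] = row; col[n].i = i - 1; v[n] = -1.0; n++; }<br>
        if (i < N - 1) { col[n] = row; col[n].i = i + 1; v[n] = -1.0; n++; }<br>
        if (j > 0)     { col[n] = row; col[n].j = j - 1; v[n] = -1.0; n++; }<br>
        if (j < N - 1) { col[n] = row; col[n].j = j + 1; v[n] = -1.0; n++; }<br>
        if (k > 0)     { col[n] = row; col[n].k = k - 1; v[n] = -1.0; n++; }<br>
        if (k < N - 1) { col[n] = row; col[n].k = k + 1; v[n] = -1.0; n++; }<br>
        MatSetValuesStencil(A, 1, &row, n, col, v, INSERT_VALUES);<br>
      }<br>
  MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);<br>
  MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);<br>
<br>
  DMCreateGlobalVector(da, &b);<br>
  VecDuplicate(b, &x);<br>
  VecSet(b, 1.0);                            /* placeholder right-hand side */<br>
<br>
  KSPCreate(PETSC_COMM_WORLD, &ksp);<br>
  KSPSetOperators(ksp, A, A);<br>
  KSPSetFromOptions(ksp);                    /* -ksp_type cg -pc_type bjacobi, etc. */<br>
  KSPSolve(ksp, b, x);<br>
<br>
  KSPDestroy(&ksp);<br>
  VecDestroy(&x); VecDestroy(&b);<br>
  MatDestroy(&A); DMDestroy(&da);<br>
  PetscFinalize();<br>
  return 0;<br>
}<br>
<br>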
I want to get some nice scaling (especially weak scaling), but my results show that<br>
the Krylov method is not performing as expected. The problem (at least for CG +<br>
BJacobi) seems to lie in the number of iterations.<br>
<br>
In particular, with CG (the matrix is SPD) + BJacobi the number of iterations grows<br>
both as the mesh is refined (probably due to the condition number increasing) and<br>
as the number of processors is increased (probably due to the BJacobi<br>
preconditioner). For instance, I tried the following setup:<br>
1 proc to solve a 32^3 domain => 20 iterations<br>
8 procs to solve a 64^3 domain => 60 iterations<br>
64 procs to solve a 128^3 domain => 101 iterations<br>
<br>
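(A rough back-of-the-envelope check, assuming the textbook estimates apply to my setup: for<br>
the second-order FD Laplacian<br>
<br>
    \kappa(A) = O(h^{-2}),   so   CG iterations \sim \sqrt{\kappa(A)} = O(h^{-1}) = O(N),<br>
<br>
so halving h should roughly double the iteration count from the conditioning alone, and the<br>
extra growth I observe presumably comes from BJacobi getting weaker as the number of blocks<br>
increases.)<br>
<br>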
Is there something pathological with my runs (maybe I am missing something)?<br>
Is there somebody who can provide me with weak-scaling benchmarks for equivalent<br>
problems? (Maybe there is some better preconditioner for this problem).<br></blockquote><div><br></div><div><div>Bjacobi is not a scalable preconditioner. As you note, the number of iterates grows</div><div>with the system size. You should always use MG here.</div></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
I am also aware that multigrid is even better for these problems, but the<br>
**scalability** of my runs seems to be as bad as with CG.<br></blockquote><div><br></div><div>MG will weak scale almost perfectly. Send -log_summary for each run if this</div><div>does not happen.</div><div><br></div><div> Thanks,</div><div><br></div><div> Matt</div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
-pc_mg_galerkin<br>
-pc_type mg<br>
(used both directly with Richardson and as a preconditioner for CG)<br>
<br>
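For reference, the fuller option set I have been experimenting with looks roughly like the<br>
following (illustrative values only; the level count and smoother choices are just examples,<br>
and this assumes the DMDA is attached to the KSP with KSPSetDM() so that PCMG can build the<br>
grid hierarchy):<br>
<br>
-ksp_type cg<br>
-pc_type mg<br>
-pc_mg_galerkin<br>
-pc_mg_levels 4<br>
-mg_levels_ksp_type richardson<br>
-mg_levels_pc_type sor<br>
-ksp_monitor<br>
-log_summary<br>
<br>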
The following is the "-log_summary" of a 128^3 run using CG +<br>
BJacobi; note that I solve the system multiple times (hence the<br>
KSPSolve count of 128).<br>
<br>
Let me know if I missed any details, and sorry for the length of the post.<br>
<br>
Thanks,<br>
Filippo<br>
<br>
Using Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012<br>
<br>
Max Max/Min Avg Total<br>
Time (sec): 9.095e+01 1.00001 9.095e+01<br>
Objects: 1.875e+03 1.00000 1.875e+03<br>
Flops: 1.733e+10 1.00000 1.733e+10 1.109e+12<br>
Flops/sec: 1.905e+08 1.00001 1.905e+08 1.219e+10<br>
MPI Messages: 1.050e+05 1.00594 1.044e+05 6.679e+06<br>
MPI Message Lengths: 1.184e+09 1.37826 8.283e+03 5.532e+10<br>
MPI Reductions: 4.136e+04 1.00000<br>
<br>
Flop counting convention: 1 flop = 1 real number operation of type<br>
(multiply/divide/add/subtract)<br>
e.g., VecAXPY() for real vectors of length N --><br>
2N flops<br>
and VecAXPY() for complex vectors of length N --><br>
8N flops<br>
<br>
Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages ---<br>
-- Message Lengths -- -- Reductions --<br>
Avg %Total Avg %Total counts %Total<br>
Avg %Total counts %Total<br>
0: Main Stage: 1.1468e-01 0.1% 0.0000e+00 0.0% 0.000e+00 0.0%<br>
0.000e+00 0.0% 0.000e+00 0.0%<br>
1: StepStage: 4.4170e-01 0.5% 7.2478e+09 0.7% 0.000e+00 0.0%<br>
0.000e+00 0.0% 0.000e+00 0.0%<br>
2: ConvStage: 8.8333e+00 9.7% 3.7044e+10 3.3% 1.475e+06 22.1%<br>
1.809e+03 21.8% 0.000e+00 0.0%<br>
3: ProjStage: 7.7169e+01 84.8% 1.0556e+12 95.2% 5.151e+06 77.1%<br>
6.317e+03 76.3% 4.024e+04 97.3%<br>
4: IoStage: 2.4789e+00 2.7% 0.0000e+00 0.0% 3.564e+03 0.1%<br>
1.017e+02 1.2% 5.000e+01 0.1%<br>
5: SolvAlloc: 7.0947e-01 0.8% 0.0000e+00 0.0% 5.632e+03 0.1%<br>
9.587e-01 0.0% 3.330e+02 0.8%<br>
6: SolvSolve: 1.2044e+00 1.3% 9.1679e+09 0.8% 4.454e+04 0.7%<br>
5.464e+01 0.7% 7.320e+02 1.8%<br>
7: SolvDeall: 7.5711e-04 0.0% 0.0000e+00 0.0% 0.000e+00 0.0%<br>
0.000e+00 0.0% 0.000e+00 0.0%<br>
<br>
------------------------------------------------------------------------------------------------------------------------<br>
See the 'Profiling' chapter of the users' manual for details on interpreting<br>
output.<br>
Phase summary info:<br>
Count: number of times phase was executed<br>
Time and Flops: Max - maximum over all processors<br>
Ratio - ratio of maximum to minimum over all processors<br>
Mess: number of messages sent<br>
Avg. len: average message length<br>
Reduct: number of global reductions<br>
Global: entire computation<br>
Stage: stages of a computation. Set stages with PetscLogStagePush() and<br>
PetscLogStagePop().<br>
%T - percent time in this phase %f - percent flops in this phase<br>
%M - percent messages in this phase %L - percent message lengths in<br>
this phase<br>
%R - percent reductions in this phase<br>
Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all<br>
processors)<br>
------------------------------------------------------------------------------------------------------------------------<br>
Event Count Time (sec) Flops<br>
--- Global --- --- Stage --- Total<br>
Max Ratio Max Ratio Max Ratio Mess Avg len<br>
Reduct %T %f %M %L %R %T %f %M %L %R Mflop/s<br>
------------------------------------------------------------------------------------------------------------------------<br>
<br>
--- Event Stage 0: Main Stage<br>
<br>
<br>
--- Event Stage 1: StepStage<br>
<br>
VecAXPY 1536 1.0 4.6436e-01 1.1 1.13e+08 1.0 0.0e+00 0.0e+00<br>
0.0e+00 0 1 0 0 0 99100 0 0 0 15608<br>
<br>
--- Event Stage 2: ConvStage<br>
<br>
VecCopy 2304 1.0 8.1658e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 1 0 0 0 0 9 0 0 0 0 0<br>
VecAXPY 2304 1.0 6.1324e-01 1.2 1.51e+08 1.0 0.0e+00 0.0e+00<br>
0.0e+00 1 1 0 0 0 6 26 0 0 0 15758<br>
VecAXPBYCZ 2688 1.0 1.3029e+00 1.1 3.52e+08 1.0 0.0e+00 0.0e+00<br>
0.0e+00 1 2 0 0 0 14 61 0 0 0 17306<br>
VecPointwiseMult 2304 1.0 7.2368e-01 1.0 7.55e+07 1.0 0.0e+00 0.0e+00<br>
0.0e+00 1 0 0 0 0 8 13 0 0 0 6677<br>
VecScatterBegin 3840 1.0 1.8182e+00 1.3 0.00e+00 0.0 1.5e+06 8.2e+03<br>
0.0e+00 2 0 22 22 0 18 0100100 0 0<br>
VecScatterEnd 3840 1.0 1.1972e+00 2.2 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 1 0 0 0 0 10 0 0 0 0 0<br>
<br>
--- Event Stage 3: ProjStage<br>
<br>
VecTDot 25802 1.0 4.2552e+00 1.3 1.69e+09 1.0 0.0e+00 0.0e+00<br>
2.6e+04 4 10 0 0 62 5 10 0 0 64 25433<br>
VecNorm 13029 1.0 3.0772e+00 3.3 8.54e+08 1.0 0.0e+00 0.0e+00<br>
1.3e+04 2 5 0 0 32 2 5 0 0 32 17759<br>
VecCopy 640 1.0 2.4339e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
VecSet 13157 1.0 7.0903e-01 1.1 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 1 0 0 0 0 1 0 0 0 0 0<br>
VecAXPY 26186 1.0 4.1462e+00 1.1 1.72e+09 1.0 0.0e+00 0.0e+00<br>
0.0e+00 4 10 0 0 0 5 10 0 0 0 26490<br>
VecAYPX 12773 1.0 1.9135e+00 1.1 8.37e+08 1.0 0.0e+00 0.0e+00<br>
0.0e+00 2 5 0 0 0 2 5 0 0 0 27997<br>
VecScatterBegin 13413 1.0 1.0689e+00 1.1 0.00e+00 0.0 5.2e+06 8.2e+03<br>
0.0e+00 1 0 77 76 0 1 0100100 0 0<br>
VecScatterEnd 13413 1.0 2.7944e+00 1.7 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 2 0 0 0 0 3 0 0 0 0 0<br>
MatMult 12901 1.0 3.2072e+01 1.0 5.92e+09 1.0 5.0e+06 8.2e+03<br>
0.0e+00 35 34 74 73 0 41 36 96 96 0 11810<br>
MatSolve 13029 1.0 3.0851e+01 1.1 5.39e+09 1.0 0.0e+00 0.0e+00<br>
0.0e+00 33 31 0 0 0 39 33 0 0 0 11182<br>
MatLUFactorNum 128 1.0 1.2922e+00 1.0 8.80e+07 1.0 0.0e+00 0.0e+00<br>
0.0e+00 1 1 0 0 0 2 1 0 0 0 4358<br>
MatILUFactorSym 128 1.0 7.5075e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00<br>
1.3e+02 1 0 0 0 0 1 0 0 0 0 0<br>
MatGetRowIJ 128 1.0 1.4782e-04 1.8 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
MatGetOrdering 128 1.0 5.7567e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00<br>
2.6e+02 0 0 0 0 1 0 0 0 0 1 0<br>
KSPSetUp 256 1.0 1.9913e-01 1.6 0.00e+00 0.0 0.0e+00 0.0e+00<br>
7.7e+02 0 0 0 0 2 0 0 0 0 2 0<br>
KSPSolve 128 1.0 7.6381e+01 1.0 1.65e+10 1.0 5.0e+06 8.2e+03<br>
4.0e+04 84 95 74 73 97 99100 96 96100 13800<br>
PCSetUp 256 1.0 2.1503e+00 1.0 8.80e+07 1.0 0.0e+00 0.0e+00<br>
6.4e+02 2 1 0 0 2 3 1 0 0 2 2619<br>
PCSetUpOnBlocks 128 1.0 2.1232e+00 1.0 8.80e+07 1.0 0.0e+00 0.0e+00<br>
3.8e+02 2 1 0 0 1 3 1 0 0 1 2652<br>
PCApply 13029 1.0 3.1812e+01 1.1 5.39e+09 1.0 0.0e+00 0.0e+00<br>
0.0e+00 34 31 0 0 0 40 33 0 0 0 10844<br>
<br>
--- Event Stage 4: IoStage<br>
<br>
VecView 10 1.0 1.7523e+00282.9 0.00e+00 0.0 0.0e+00 0.0e+00<br>
2.0e+01 1 0 0 0 0 36 0 0 0 40 0<br>
VecCopy 10 1.0 2.2449e-03 1.7 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
VecScatterBegin 6 1.0 2.3620e-03 2.4 0.00e+00 0.0 2.3e+03 8.2e+03<br>
0.0e+00 0 0 0 0 0 0 0 65 3 0 0<br>
VecScatterEnd 6 1.0 4.4194e-01663.9 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 9 0 0 0 0 0<br>
<br>
--- Event Stage 5: SolvAlloc<br>
<br>
VecSet 50 1.0 1.3170e-01 5.6 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 13 0 0 0 0 0<br>
MatAssemblyBegin 4 1.0 3.9801e-0230.0 0.00e+00 0.0 0.0e+00 0.0e+00<br>
8.0e+00 0 0 0 0 0 3 0 0 0 2 0<br>
MatAssemblyEnd 4 1.0 2.2752e-02 1.0 0.00e+00 0.0 1.5e+03 2.0e+03<br>
1.6e+01 0 0 0 0 0 3 0 27 49 5 0<br>
<br>
--- Event Stage 6: SolvSolve<br>
<br>
VecTDot 224 1.0 3.5454e-02 1.3 1.47e+07 1.0 0.0e+00 0.0e+00<br>
2.2e+02 0 0 0 0 1 3 10 0 0 31 26499<br>
VecNorm 497 1.0 1.5268e-01 1.4 7.41e+06 1.0 0.0e+00 0.0e+00<br>
5.0e+02 0 0 0 0 1 11 5 0 0 68 3104<br>
VecCopy 8 1.0 2.7523e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
VecSet 114 1.0 5.9965e-03 1.1 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
VecAXPY 230 1.0 3.7198e-02 1.1 1.51e+07 1.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 3 11 0 0 0 25934<br>
VecAYPX 111 1.0 1.7153e-02 1.1 7.27e+06 1.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 1 5 0 0 0 27142<br>
VecScatterBegin 116 1.0 1.1888e-02 1.2 0.00e+00 0.0 4.5e+04 8.2e+03<br>
0.0e+00 0 0 1 1 0 1 0100100 0 0<br>
VecScatterEnd 116 1.0 2.8105e-02 2.0 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 2 0 0 0 0 0<br>
MatMult 112 1.0 2.8080e-01 1.0 5.14e+07 1.0 4.3e+04 8.2e+03<br>
0.0e+00 0 0 1 1 0 23 36 97 97 0 11711<br>
MatSolve 113 1.0 2.6673e-01 1.1 4.67e+07 1.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 22 33 0 0 0 11217<br>
MatLUFactorNum 1 1.0 1.0332e-02 1.0 6.87e+05 1.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 1 0 0 0 0 4259<br>
MatILUFactorSym 1 1.0 3.1291e-02 4.2 0.00e+00 0.0 0.0e+00 0.0e+00<br>
1.0e+00 0 0 0 0 0 2 0 0 0 0 0<br>
MatGetRowIJ 1 1.0 4.0531e-06 4.2 0.00e+00 0.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
MatGetOrdering 1 1.0 3.4251e-03 5.4 0.00e+00 0.0 0.0e+00 0.0e+00<br>
2.0e+00 0 0 0 0 0 0 0 0 0 0 0<br>
KSPSetUp 2 1.0 3.6959e-0210.1 0.00e+00 0.0 0.0e+00 0.0e+00<br>
6.0e+00 0 0 0 0 0 1 0 0 0 1 0<br>
KSPSolve 1 1.0 6.9956e-01 1.0 1.43e+08 1.0 4.3e+04 8.2e+03<br>
3.5e+02 1 1 1 1 1 58100 97 97 48 13069<br>
PCSetUp 2 1.0 4.4161e-02 2.3 6.87e+05 1.0 0.0e+00 0.0e+00<br>
5.0e+00 0 0 0 0 0 3 0 0 0 1 996<br>
PCSetUpOnBlocks 1 1.0 4.3894e-02 2.4 6.87e+05 1.0 0.0e+00 0.0e+00<br>
3.0e+00 0 0 0 0 0 3 0 0 0 0 1002<br>
PCApply 113 1.0 2.7507e-01 1.1 4.67e+07 1.0 0.0e+00 0.0e+00<br>
0.0e+00 0 0 0 0 0 22 33 0 0 0 10877<br>
<br>
--- Event Stage 7: SolvDeall<br>
<br>
------------------------------------------------------------------------------------------------------------------------<br>
<br>
Memory usage is given in bytes:<br>
<br>
Object Type Creations Destructions Memory Descendants' Mem.<br>
Reports information only for process 0.<br>
<br>
--- Event Stage 0: Main Stage<br>
<br>
Viewer 1 0 0 0<br>
<br>
--- Event Stage 1: StepStage<br>
<br>
<br>
--- Event Stage 2: ConvStage<br>
<br>
<br>
--- Event Stage 3: ProjStage<br>
<br>
Vector 640 640 101604352 0<br>
Matrix 128 128 410327040 0<br>
Index Set 384 384 17062912 0<br>
Krylov Solver 256 256 282624 0<br>
Preconditioner 256 256 228352 0<br>
<br>
--- Event Stage 4: IoStage<br>
<br>
Vector 10 10 2636400 0<br>
Viewer 10 10 6880 0<br>
<br>
--- Event Stage 5: SolvAlloc<br>
<br>
Vector 140 6 8848 0<br>
Vector Scatter 6 0 0 0<br>
Matrix 6 0 0 0<br>
Distributed Mesh 2 0 0 0<br>
Bipartite Graph 4 0 0 0<br>
Index Set 14 14 372400 0<br>
IS L to G Mapping 3 0 0 0<br>
Krylov Solver 1 0 0 0<br>
Preconditioner 1 0 0 0<br>
<br>
--- Event Stage 6: SolvSolve<br>
<br>
Vector 5 0 0 0<br>
Matrix 1 0 0 0<br>
Index Set 3 0 0 0<br>
Krylov Solver 2 1 1136 0<br>
Preconditioner 2 1 824 0<br>
<br>
--- Event Stage 7: SolvDeall<br>
<br>
Vector 0 133 36676728 0<br>
Vector Scatter 0 1 1036 0<br>
Matrix 0 4 7038924 0<br>
Index Set 0 3 133304 0<br>
Krylov Solver 0 2 2208 0<br>
Preconditioner 0 2 1784 0<br>
========================================================================================================================<br>
Average time to get PetscTime(): 9.53674e-08<br>
Average time for MPI_Barrier(): 1.12057e-05<br>
Average time for zero size MPI_Send(): 1.3113e-06<br>
#PETSc Option Table entries:<br>
-ksp_type cg<br>
-log_summary<br>
-pc_type bjacobi<br>
#End of PETSc Option Table entries<br>
Compiled without FORTRAN kernels<br>
Compiled with full precision matrices (default)<br>
sizeof(short) 2 sizeof(int) 4 sizeof(long) 8 sizeof(void*) 8<br>
sizeof(PetscScalar) 8 sizeof(PetscInt) 4<br>
Configure run at:<br>
Configure options:<br>
Application 9457215 resources: utime ~5920s, stime ~58s<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.<br>-- Norbert Wiener
</div></div>