<div dir="ltr">Humm, I'm not seeing this...<div><br></div><div><div>15:13 1 mark/gamg-serial ~/Codes/petsc/src/ksp/ksp/examples/tutorials$ mpirun -n 1 ./ex45 -da_refine 3 -pc_type gamg -ksp_monitor -ksp_view -log_summary -pc_gamg_coarse_eq_limit 200</div><div> [0]PCSetUp_GAMG level 0 N=117649, n data rows=1, n data cols=1, nnz/row (ave)=6, np=1</div><div> [0]PCGAMGFilterGraph 91.528% nnz after filtering, with threshold 0, 6.87755 nnz ave. (N=117649)</div><div>[0]PCGAMGCoarsen_AGG square graph</div><div>[0]PCGAMGCoarsen_AGG coarsen graph</div><div> [0]maxIndSetAgg removed 572 of 117649 vertices. (572 local) 16587 selected.</div><div> [0]PCGAMGProlongator_AGG New grid 16587 nodes</div><div> PCGAMGOptprol_AGG smooth P0: max eigen=1.952686e+00 min=9.933674e-03 PC=jacobi</div><div> [0]PCSetUp_GAMG 1) N=16587, n data cols=1, nnz/row (ave)=30, 1 active pes</div><div> [0]PCGAMGFilterGraph 84.7708% nnz after filtering, with threshold 0, 30.1631 nnz ave. (N=16587)</div><div>[0]PCGAMGCoarsen_AGG square graph</div><div>[0]PCGAMGCoarsen_AGG coarsen graph</div><div> [0]maxIndSetAgg removed 0 of 16587 vertices. (0 local) 353 selected.</div><div> [0]PCGAMGProlongator_AGG New grid 353 nodes</div><div> PCGAMGOptprol_AGG smooth P0: max eigen=1.393979e+00 min=2.197135e-02 PC=jacobi</div><div> [0]PCSetUp_GAMG 2) N=353, n data cols=1, nnz/row (ave)=47, 1 active pes</div><div> [0]PCGAMGFilterGraph 99.7358% nnz after filtering, with threshold 0, 47.1756 nnz ave. (N=353)</div><div>[0]PCGAMGCoarsen_AGG square graph</div><div>[0]PCGAMGCoarsen_AGG coarsen graph</div><div> [0]maxIndSetAgg removed 0 of 353 vertices. (0 local) 3 selected.</div><div> [0]PCGAMGProlongator_AGG New grid 3 nodes</div><div> PCGAMGOptprol_AGG smooth P0: max eigen=1.983212e+00 min=2.830095e-01 PC=jacobi</div><div> [0]PCSetUp_GAMG 3) N=3, n data cols=1, nnz/row (ave)=3, 1 active pes</div><div> [0]PCSetUp_GAMG 4 levels, grid complexity = 1.63892</div><div> 0 KSP Residual norm 2.706652282076e+02 </div><div> 1 KSP Residual norm 4.940773628648e+01 </div><div> 2 KSP Residual norm 3.718259719599e+00 </div><div> 3 KSP Residual norm 2.082059791607e-01 </div><div> 4 KSP Residual norm 1.700581360081e-02 </div><div> 5 KSP Residual norm 9.430563655174e-04 </div><div><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Feb 18, 2015 at 11:06 PM, Barry Smith <span dir="ltr"><<a href="mailto:bsmith@mcs.anl.gov" target="_blank">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><br>
On Wed, Feb 18, 2015 at 11:06 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:

> Hmm, it seems GAMG is only doing 2 levels in master for all problems?

ksp/ex54 seems to do three levels (with -ne 149) with one proc?
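That is, with something like the following (only -ne 149 is stated above; the remaining flags are assumed, not from the original message):

  mpirun -n 1 ./ex54 -ne 149 -pc_type gamg -ksp_view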

I'll try

> ./ex29 -da_refine 8 -pc_type gamg -ksp_view
>
> uses only two levels. Makes no sense.
>
> Did someone break it?
>
> > On Feb 18, 2015, at 9:48 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
> >
> >
> > Mark,
> >
> > When I run ksp/ksp/examples/tutorials/ex45 I get a VERY large coarse problem. It seems to ignore the -pc_gamg_coarse_eq_limit 200 argument. Any idea what is going on?
> >
> > Thanks
> >
> > Barry
> >
> >
> > $ ./ex45 -da_refine 3 -pc_type gamg -ksp_monitor -ksp_view -log_summary -pc_gamg_coarse_eq_limit 200
> > 0 KSP Residual norm 2.790769524030e+02
> > 1 KSP Residual norm 4.484052193577e+01
> > 2 KSP Residual norm 2.409368790441e+00
> > 3 KSP Residual norm 1.553421589919e-01
> > 4 KSP Residual norm 9.821441923699e-03
> > 5 KSP Residual norm 5.610434857134e-04
> > KSP Object: 1 MPI processes
> > type: gmres
> > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> > GMRES: happy breakdown tolerance 1e-30
> > maximum iterations=10000
> > tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> > left preconditioning
> > using nonzero initial guess
> > using PRECONDITIONED norm type for convergence test
> > PC Object: 1 MPI processes
> > type: gamg
> > MG: type is MULTIPLICATIVE, levels=2 cycles=v
> > Cycles per PCApply=1
> > Using Galerkin computed coarse grid matrices
> > Coarse grid solver -- level -------------------------------
> > KSP Object: (mg_coarse_) 1 MPI processes
> > type: gmres
> > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> > GMRES: happy breakdown tolerance 1e-30
> > maximum iterations=1, initial guess is zero
> > tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> > left preconditioning
> > using NONE norm type for convergence test
> > PC Object: (mg_coarse_) 1 MPI processes
> > type: bjacobi
> > block Jacobi: number of blocks = 1
> > Local solve is same for all blocks, in the following KSP and PC objects:
> > KSP Object: (mg_coarse_sub_) 1 MPI processes
> > type: preonly
> > maximum iterations=1, initial guess is zero
> > tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> > left preconditioning
> > using NONE norm type for convergence test
> > PC Object: (mg_coarse_sub_) 1 MPI processes
> > type: lu
> > LU: out-of-place factorization
> > tolerance for zero pivot 2.22045e-14
> > using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
> > matrix ordering: nd
> > factor fill ratio given 5, needed 36.4391
> > Factored matrix follows:
> > Mat Object: 1 MPI processes
> > type: seqaij
> > rows=16587, cols=16587
> > package used to perform factorization: petsc
> > total: nonzeros=1.8231e+07, allocated nonzeros=1.8231e+07
> > total number of mallocs used during MatSetValues calls =0
> > not using I-node routines
> > linear system matrix = precond matrix:
> > Mat Object: 1 MPI processes
> > type: seqaij
> > rows=16587, cols=16587
> > total: nonzeros=500315, allocated nonzeros=500315
> > total number of mallocs used during MatSetValues calls =0
> > not using I-node routines
> > linear system matrix = precond matrix:
> > Mat Object: 1 MPI processes
> > type: seqaij
> > rows=16587, cols=16587
> > total: nonzeros=500315, allocated nonzeros=500315
> > total number of mallocs used during MatSetValues calls =0
> > not using I-node routines
> > Down solver (pre-smoother) on level 1 -------------------------------
> > KSP Object: (mg_levels_1_) 1 MPI processes
> > type: chebyshev
> > Chebyshev: eigenvalue estimates: min = 0.0976343, max = 2.05032
> > maximum iterations=2
> > tolerances: relative=1e-05, absolute=1e-50, divergence=10000
> > left preconditioning
> > using nonzero initial guess
> > using NONE norm type for convergence test
> > PC Object: (mg_levels_1_) 1 MPI processes
> > type: sor
> > SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1
> > linear system matrix = precond matrix:
> > Mat Object: 1 MPI processes
> > type: seqaij
> > rows=117649, cols=117649
> > total: nonzeros=809137, allocated nonzeros=809137
> > total number of mallocs used during MatSetValues calls =0
> > not using I-node routines
> > Up solver (post-smoother) same as down solver (pre-smoother)
> > linear system matrix = precond matrix:
> > Mat Object: 1 MPI processes
> > type: seqaij
> > rows=117649, cols=117649
> > total: nonzeros=809137, allocated nonzeros=809137
> > total number of mallocs used during MatSetValues calls =0
> > not using I-node routines
> > Residual norm 3.81135e-05
> > ************************************************************************************************************************
> > *** WIDEN YOUR WINDOW TO 120 CHARACTERS. Use 'enscript -r -fCourier9' to print this document ***
> > ************************************************************************************************************************
> >
> > ---------------------------------------------- PETSc Performance Summary: ----------------------------------------------
> >
> > ./ex45 on a arch-opt named Barrys-MacBook-Pro.local with 1 processor, by barrysmith Wed Feb 18 21:38:03 2015
> > Using Petsc Development GIT revision: v3.5.3-1998-geddef31 GIT Date: 2015-02-18 11:05:09 -0600
> >
> > Max Max/Min Avg Total
> > Time (sec): 1.103e+01 1.00000 1.103e+01
> > Objects: 9.200e+01 1.00000 9.200e+01
> > Flops: 1.756e+10 1.00000 1.756e+10 1.756e+10
> > Flops/sec: 1.592e+09 1.00000 1.592e+09 1.592e+09
> > MPI Messages: 0.000e+00 0.00000 0.000e+00 0.000e+00
> > MPI Message Lengths: 0.000e+00 0.00000 0.000e+00 0.000e+00
> > MPI Reductions: 0.000e+00 0.00000
> >
> > Flop counting convention: 1 flop = 1 real number operation of type (multiply/divide/add/subtract)
> > e.g., VecAXPY() for real vectors of length N --> 2N flops
> > and VecAXPY() for complex vectors of length N --> 8N flops
> >
> > Summary of Stages: ----- Time ------ ----- Flops ----- --- Messages --- -- Message Lengths -- -- Reductions --
> > Avg %Total Avg %Total counts %Total Avg %Total counts %Total
> > 0: Main Stage: 1.1030e+01 100.0% 1.7556e+10 100.0% 0.000e+00 0.0% 0.000e+00 0.0% 0.000e+00 0.0%
> >
> > ------------------------------------------------------------------------------------------------------------------------
> > See the 'Profiling' chapter of the users' manual for details on interpreting output.
> > Phase summary info:
> > Count: number of times phase was executed
> > Time and Flops: Max - maximum over all processors
> > Ratio - ratio of maximum to minimum over all processors
> > Mess: number of messages sent
> > Avg. len: average message length (bytes)
> > Reduct: number of global reductions
> > Global: entire computation
> > Stage: stages of a computation. Set stages with PetscLogStagePush() and PetscLogStagePop().
> > %T - percent time in this phase %F - percent flops in this phase
> > %M - percent messages in this phase %L - percent message lengths in this phase
> > %R - percent reductions in this phase
> > Total Mflop/s: 10e-6 * (sum of flops over all processors)/(max time over all processors)
> > ------------------------------------------------------------------------------------------------------------------------
> > Event Count Time (sec) Flops --- Global --- --- Stage --- Total
> > Max Ratio Max Ratio Max Ratio Mess Avg len Reduct %T %F %M %L %R %T %F %M %L %R Mflop/s
> > ------------------------------------------------------------------------------------------------------------------------
> >
> > --- Event Stage 0: Main Stage
> >
> > KSPGMRESOrthog 21 1.0 8.8868e-03 1.0 3.33e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3752
> > KSPSetUp 5 1.0 4.3986e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > KSPSolve 1 1.0 1.0995e+01 1.0 1.76e+10 1.0 0.0e+00 0.0e+00 0.0e+00100100 0 0 0 100100 0 0 0 1596
> > VecMDot 21 1.0 4.7335e-03 1.0 1.67e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3522
> > VecNorm 30 1.0 9.4804e-04 1.0 4.63e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 4887
> > VecScale 29 1.0 7.8293e-04 1.0 2.20e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2809
> > VecCopy 14 1.0 7.7058e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > VecSet 102 1.0 1.4530e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > VecAXPY 9 1.0 3.8154e-04 1.0 9.05e+05 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 2372
> > VecAYPX 48 1.0 5.6449e-03 1.0 7.06e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1251
> > VecAXPBYCZ 24 1.0 4.0700e-03 1.0 1.41e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3469
> > VecMAXPY 29 1.0 5.1512e-03 1.0 2.04e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3960
> > VecAssemblyBegin 1 1.0 6.7055e-08 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > VecAssemblyEnd 1 1.0 8.1025e-08 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > VecPointwiseMult 11 1.0 1.8083e-03 1.0 1.29e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 716
> > VecSetRandom 1 1.0 1.7628e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > VecNormalize 29 1.0 1.7100e-03 1.0 6.60e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 3858
> > MatMult 58 1.0 5.0949e-02 1.0 8.39e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1647
> > MatMultAdd 6 1.0 5.2584e-03 1.0 5.01e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 952
> > MatMultTranspose 6 1.0 6.1330e-03 1.0 5.01e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 816
> > MatSolve 12 1.0 2.0657e-01 1.0 4.37e+08 1.0 0.0e+00 0.0e+00 0.0e+00 2 2 0 0 0 2 2 0 0 0 2117
> > MatSOR 36 1.0 7.1355e-02 1.0 5.84e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 818
> > MatLUFactorSym 1 1.0 3.4310e-01 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 3 0 0 0 0 3 0 0 0 0 0
> > MatLUFactorNum 1 1.0 9.8038e+00 1.0 1.69e+10 1.0 0.0e+00 0.0e+00 0.0e+00 89 96 0 0 0 89 96 0 0 0 1721
> > MatConvert 1 1.0 5.6955e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > MatScale 3 1.0 2.7223e-03 1.0 2.45e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 901
> > MatResidual 6 1.0 6.2142e-03 1.0 9.71e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 1562
> > MatAssemblyBegin 12 1.0 2.7413e-06 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > MatAssemblyEnd 12 1.0 2.4857e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > MatGetRow 470596 1.0 2.4337e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > MatGetRowIJ 1 1.0 2.3254e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > MatGetOrdering 1 1.0 1.7668e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > MatCoarsen 1 1.0 8.5790e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > MatView 5 1.0 2.2273e-04 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > MatAXPY 1 1.0 1.8864e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > MatMatMult 1 1.0 2.4513e-02 1.0 2.03e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 83
> > MatMatMultSym 1 1.0 1.7885e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > MatMatMultNum 1 1.0 6.6144e-03 1.0 2.03e+06 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 307
> > MatPtAP 1 1.0 1.1460e-01 1.0 1.30e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 114
> > MatPtAPSymbolic 1 1.0 4.6803e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > MatPtAPNumeric 1 1.0 6.7781e-02 1.0 1.30e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 192
> > MatTrnMatMult 1 1.0 9.1702e-02 1.0 1.02e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 111
> > MatTrnMatMultSym 1 1.0 6.0173e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 0
> > MatTrnMatMultNum 1 1.0 3.1526e-02 1.0 1.02e+07 1.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 324
> > MatGetSymTrans 2 1.0 4.2753e-03 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > PCGAMGgraph_AGG 1 1.0 6.9175e-02 1.0 1.62e+06 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 23
> > PCGAMGcoarse_AGG 1 1.0 1.1130e-01 1.0 1.02e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 92
> > PCGAMGProl_AGG 1 1.0 2.9380e-02 1.0 0.00e+00 0.0 0.0e+00 0.0e+00 0.0e+00 0 0 0 0 0 0 0 0 0 0 0
> > PCGAMGPOpt_AGG 1 1.0 9.1377e-02 1.0 5.15e+07 1.0 0.0e+00 0.0e+00 0.0e+00 1 0 0 0 0 1 0 0 0 0 564
> > PCSetUp 2 1.0 1.0587e+01 1.0 1.69e+10 1.0 0.0e+00 0.0e+00 0.0e+00 96 97 0 0 0 96 97 0 0 0 1601
> > PCSetUpOnBlocks 6 1.0 1.0165e+01 1.0 1.69e+10 1.0 0.0e+00 0.0e+00 0.0e+00 92 96 0 0 0 92 96 0 0 0 1660
> > PCApply 6 1.0 1.0503e+01 1.0 1.75e+10 1.0 0.0e+00 0.0e+00 0.0e+00 95 99 0 0 0 95 99 0 0 0 1662
> > ------------------------------------------------------------------------------------------------------------------------
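
A side note on the "how many levels" question: the level count GAMG actually built can also be checked programmatically after setup instead of reading -ksp_view. A minimal sketch (error checking omitted), assuming a KSP whose operators are already set; PCMGGetLevels() applies here because GAMG is implemented on top of PCMG:

  #include <petscksp.h>

  /* Hypothetical helper: print how many multigrid levels GAMG created. */
  static PetscErrorCode ReportGAMGLevels(KSP ksp)
  {
    PC       pc;
    PetscInt nlevels;

    KSPSetUp(ksp);               /* forces the GAMG setup (coarsening) to run */
    KSPGetPC(ksp, &pc);
    PCMGGetLevels(pc, &nlevels); /* GAMG is a PCMG underneath */
    PetscPrintf(PETSC_COMM_WORLD, "GAMG built %D levels\n", nlevels);
    return 0;
  }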