<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Apr 7, 2017 at 3:52 PM, Barry Smith <span dir="ltr"><<a target="_blank" href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>></span> wrote:<br><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote"><span class="gmail-"><br>
> On Apr 7, 2017, at 4:46 PM, Kong, Fande <<a href="mailto:fande.kong@inl.gov">fande.kong@inl.gov</a>> wrote:<br>
><br>
><br>
><br>
> On Fri, Apr 7, 2017 at 3:39 PM, Barry Smith <<a href="mailto:bsmith@mcs.anl.gov">bsmith@mcs.anl.gov</a>> wrote:<br>
><br>
> Using Petsc Release Version 3.7.5, unknown<br>
><br>
> So are you using the release or are you using master branch?<br>
><br>
> I am working on the maint branch.<br>
><br>
> I did something two months ago:<br>
><br>
</span>> git clone -b maint <a target="_blank" rel="noreferrer" href="https://urldefense.proofpoint.com/v2/url?u=https-3A__bitbucket.org_petsc_petsc&d=DwIFAg&c=54IZrppPQZKX9mLzcGdPfFD1hxrcB__aEkJFOKJFd00&r=DUUt3SRGI0_JgtNaS3udV68GRkgV4ts7XKfj2opmiCY&m=c92UNplDTVgzFrXIn_70buWa2rXPGUKN083_aJYI0FQ&s=yrulwZxJiduZc-703r7PJOUApPDehsFIkhS0BTrroXc&e=">https://urldefense.proofpoint.<wbr>com/v2/url?u=https-3A__<wbr>bitbucket.org_petsc_petsc&d=<wbr>DwIFAg&c=<wbr>54IZrppPQZKX9mLzcGdPfFD1hxrcB_<wbr>_aEkJFOKJFd00&r=DUUt3SRGI0_<wbr>JgtNaS3udV68GRkgV4ts7XKfj2opmi<wbr>CY&m=c92UNplDTVgzFrXIn_<wbr>70buWa2rXPGUKN083_aJYI0FQ&s=<wbr>yrulwZxJiduZc-<wbr>703r7PJOUApPDehsFIkhS0BTrroXc&<wbr>e=</a> petsc.<br>
<span class="gmail-">><br>
><br>
> I am interested to improve the GAMG performance.<br>
<br>
</span> Why, why not use the best solver for your problem?<br></blockquote><div><br></div><div>I am just curious. I want to understand the potential of interesting preconditioners. <br></div><div><br> </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote">
<span class="gmail-"><br>
> Is it possible? It can not beat ASM at all? The multilevel method should be better than the one-level if the number of processor cores is large.<br>
<br>
</span> The ASM is taking 30 iterations, this is fantastic, it is really going to be tough to get GAMG to be faster (set up time for GAMG is high).<br>
<br>
What happens to both with 10 times as many processes? 100 times as many?<br></blockquote><div><br><br></div><div>Did not try many processes yet.<br><br></div><div>Fande,<br></div><div><br> </div><blockquote style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex" class="gmail_quote">
<span class="gmail-HOEnZb"><font color="#888888"><br>
<br>
Barry<br>
</font></span><div class="gmail-HOEnZb"><div class="gmail-h5"><br>
><br>
> Fande,<br>
><br>
><br>
> If you use master the ASM will be even faster.<br>
><br>
> What's new in master?<br>
><br>
><br>
> Fande,<br>
><br>
><br>
><br>
> > > On Apr 7, 2017, at 4:29 PM, Kong, Fande <fande.kong@inl.gov> wrote:
> > >
> > > Thanks, Barry.
> > >
> > > It works.
> > >
> > > GAMG is three times better than ASM in terms of the number of linear iterations, but it is five times slower than ASM. Any suggestions to improve the performance of GAMG? Log files are attached.
> > >
> > > Fande,
> > >
> > > On Thu, Apr 6, 2017 at 3:39 PM, Barry Smith <bsmith@mcs.anl.gov> wrote:
> > >
> > > > On Apr 6, 2017, at 9:39 AM, Kong, Fande <fande.kong@inl.gov> wrote:
> > > >
> > > > Thanks, Mark and Barry,
> > > >
> > > > It works pretty well in terms of the number of linear iterations (using "-pc_gamg_sym_graph true"), but the compute time is horrible. I am using the two-level method via "-pc_mg_levels 2". The reason the compute time is larger than with other preconditioning options is that a matrix-free method is used on the fine level, and in my particular problem the function evaluation is expensive.
> > > >
> > > > I am using "-snes_mf_operator 1" to turn on Jacobian-free Newton, but I do not think I want to make the preconditioning part matrix-free. Do you guys know how to turn off the matrix-free method for GAMG?
> > >
> > >    -pc_use_amat false
> > >
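
A minimal sketch of the Amat/Pmat split being discussed here, assuming a serial toy 2x2 system: FormFunction, FormJacobian, and the residual itself are illustrative stand-ins, not the actual application. The assembled matrix P is passed as the Pmat, so whatever preconditioner is selected (GAMG here) is built from it, while -snes_mf_operator swaps the Amat for a finite-difference matrix-free operator at run time.

  #include <petscsnes.h>

  /* Toy 2x2 nonlinear residual; a stand-in for the real application. */
  static PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
  {
    const PetscScalar *xx;
    PetscScalar       *ff;

    VecGetArrayRead(x, &xx);
    VecGetArray(f, &ff);
    ff[0] = xx[0]*xx[0] + xx[0]*xx[1] - 3.0;
    ff[1] = xx[0]*xx[1] + xx[1]*xx[1] - 6.0;
    VecRestoreArrayRead(x, &xx);
    VecRestoreArray(f, &ff);
    return 0;
  }

  /* Assemble only the preconditioning matrix B (the Pmat); with
     -snes_mf_operator the Amat J is a matrix-free MFFD operator. */
  static PetscErrorCode FormJacobian(SNES snes, Vec x, Mat J, Mat B, void *ctx)
  {
    const PetscScalar *xx;
    PetscScalar       v[4];
    PetscInt          idx[2] = {0, 1};

    VecGetArrayRead(x, &xx);
    v[0] = 2.0*xx[0] + xx[1]; v[1] = xx[0];
    v[2] = xx[1];             v[3] = xx[0] + 2.0*xx[1];
    VecRestoreArrayRead(x, &xx);
    MatSetValues(B, 2, idx, 2, idx, v, INSERT_VALUES);
    MatAssemblyBegin(B, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(B, MAT_FINAL_ASSEMBLY);
    if (J != B) {
      MatAssemblyBegin(J, MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(J, MAT_FINAL_ASSEMBLY);
    }
    return 0;
  }

  int main(int argc, char **argv)
  {
    SNES snes;
    Vec  x, r;
    Mat  P;

    PetscInitialize(&argc, &argv, NULL, NULL);
    /* Serial toy problem: run on a single process. */
    VecCreateSeq(PETSC_COMM_SELF, 2, &x);
    VecDuplicate(x, &r);
    MatCreateSeqAIJ(PETSC_COMM_SELF, 2, 2, 2, NULL, &P);

    SNESCreate(PETSC_COMM_SELF, &snes);
    SNESSetFunction(snes, r, FormFunction, NULL);
    /* P is both Amat and Pmat here; -snes_mf_operator replaces the Amat
       with a finite-difference matrix-free operator at run time, while
       the preconditioner is built from the assembled Pmat. */
    SNESSetJacobian(snes, P, P, FormJacobian, NULL);
    SNESSetFromOptions(snes);

    VecSet(x, 0.5);
    SNESSolve(snes, NULL, x);

    VecDestroy(&x); VecDestroy(&r); MatDestroy(&P);
    SNESDestroy(&snes);
    return PetscFinalize();
  }

Running it on one process with the options from this thread, e.g. -snes_mf_operator -pc_type gamg -pc_mg_levels 2 -pc_use_amat false -snes_view, illustrates the setup; the intent of -pc_use_amat false is that the multigrid levels then work with the assembled Pmat instead of the mffd operator.
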
> > > >
> > > > Here is the detailed solver:
> > > >
> > > > SNES Object: 384 MPI processes
> > > > type: newtonls
> > > > maximum iterations=200, maximum function evaluations=10000
> > > > tolerances: relative=1e-08, absolute=1e-08, solution=1e-50
> > > > total number of linear solver iterations=20
> > > > total number of function evaluations=166
> > > > norm schedule ALWAYS
> > > > SNESLineSearch Object: 384 MPI processes
> > > > type: bt
> > > > interpolation: cubic
> > > > alpha=1.000000e-04
> > > > maxstep=1.000000e+08, minlambda=1.000000e-12
> > > > tolerances: relative=1.000000e-08, absolute=1.000000e-15, lambda=1.000000e-08
> > > > maximum iterations=40
> > > > KSP Object: 384 MPI processes
> > > > type: gmres
> > > > GMRES: restart=100, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> > > > GMRES: happy breakdown tolerance 1e-30
> > > > maximum iterations=100, initial guess is zero
> > > > tolerances: relative=0.001, absolute=1e-50, divergence=10000.
> > > > right preconditioning
> > > > using UNPRECONDITIONED norm type for convergence test
> > > > PC Object: 384 MPI processes
> > > > type: gamg
> > > > MG: type is MULTIPLICATIVE, levels=2 cycles=v
> > > > Cycles per PCApply=1
> > > > Using Galerkin computed coarse grid matrices
> > > > GAMG specific options
> > > > Threshold for dropping small values from graph 0.
> > > > AGG specific options
> > > > Symmetric graph true
> > > > Coarse grid solver -- level -------------------------------
> > > > KSP Object: (mg_coarse_) 384 MPI processes
> > > > type: preonly
> > > > maximum iterations=10000, initial guess is zero
> > > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> > > > left preconditioning
> > > > using NONE norm type for convergence test
> > > > PC Object: (mg_coarse_) 384 MPI processes
> > > > type: bjacobi
> > > > block Jacobi: number of blocks = 384
> > > > Local solve is same for all blocks, in the following KSP and PC objects:
> > > > KSP Object: (mg_coarse_sub_) 1 MPI processes
> > > > type: preonly
> > > > maximum iterations=1, initial guess is zero
> > > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> > > > left preconditioning
> > > > using NONE norm type for convergence test
> > > > PC Object: (mg_coarse_sub_) 1 MPI processes
> > > > type: lu
> > > > LU: out-of-place factorization
> > > > tolerance for zero pivot 2.22045e-14
> > > > using diagonal shift on blocks to prevent zero pivot [INBLOCKS]
> > > > matrix ordering: nd
> > > > factor fill ratio given 5., needed 1.31367
> > > > Factored matrix follows:
> > > > Mat Object: 1 MPI processes
> > > > type: seqaij
> > > > rows=37, cols=37
> > > > package used to perform factorization: petsc
> > > > total: nonzeros=913, allocated nonzeros=913
> > > > total number of mallocs used during MatSetValues calls =0
> > > > not using I-node routines
> > > > linear system matrix = precond matrix:
> > > > Mat Object: 1 MPI processes
> > > > type: seqaij
> > > > rows=37, cols=37
> > > > total: nonzeros=695, allocated nonzeros=695
> > > > total number of mallocs used during MatSetValues calls =0
> > > > not using I-node routines
> > > > linear system matrix = precond matrix:
> > > > Mat Object: 384 MPI processes
> > > > type: mpiaij
> > > > rows=18145, cols=18145
> > > > total: nonzeros=1709115, allocated nonzeros=1709115
> > > > total number of mallocs used during MatSetValues calls =0
> > > > not using I-node (on process 0) routines
> > > > Down solver (pre-smoother) on level 1 -------------------------------
> > > > KSP Object: (mg_levels_1_) 384 MPI processes
> > > > type: chebyshev
> > > > Chebyshev: eigenvalue estimates: min = 0.133339, max = 1.46673
> > > > Chebyshev: eigenvalues estimated using gmres with translations [0. 0.1; 0. 1.1]
> > > > KSP Object: (mg_levels_1_esteig_) 384 MPI processes
> > > > type: gmres
> > > > GMRES: restart=30, using Classical (unmodified) Gram-Schmidt Orthogonalization with no iterative refinement
> > > > GMRES: happy breakdown tolerance 1e-30
> > > > maximum iterations=10, initial guess is zero
> > > > tolerances: relative=1e-12, absolute=1e-50, divergence=10000.
> > > > left preconditioning
> > > > using PRECONDITIONED norm type for convergence test
> > > > maximum iterations=2
> > > > tolerances: relative=1e-05, absolute=1e-50, divergence=10000.
> > > > left preconditioning
> > > > using nonzero initial guess
> > > > using NONE norm type for convergence test
> > > > PC Object: (mg_levels_1_) 384 MPI processes
> > > > type: sor
> > > > SOR: type = local_symmetric, iterations = 1, local iterations = 1, omega = 1.
> > > > linear system matrix followed by preconditioner matrix:
> > > > Mat Object: 384 MPI processes
> > > > type: mffd
> > > > rows=3020875, cols=3020875
> > > > Matrix-free approximation:
> > > > err=1.49012e-08 (relative error in function evaluation)
> > > > Using wp compute h routine
> > > > Does not compute normU
> > > > Mat Object: () 384 MPI processes
> > > > type: mpiaij
> > > > rows=3020875, cols=3020875
> > > > total: nonzeros=215671710, allocated nonzeros=241731750
> > > > total number of mallocs used during MatSetValues calls =0
> > > > not using I-node (on process 0) routines
> > > > Up solver (post-smoother) same as down solver (pre-smoother)
> > > > linear system matrix followed by preconditioner matrix:
> > > > Mat Object: 384 MPI processes
> > > > type: mffd
> > > > rows=3020875, cols=3020875
> > > > Matrix-free approximation:
> > > > err=1.49012e-08 (relative error in function evaluation)
> > > > Using wp compute h routine
> > > > Does not compute normU
> > > > Mat Object: () 384 MPI processes
> > > > type: mpiaij
> > > > rows=3020875, cols=3020875
> > > > total: nonzeros=215671710, allocated nonzeros=241731750
> > > > total number of mallocs used during MatSetValues calls =0
> > > > not using I-node (on process 0) routines
> > > >
> > > > Fande,
> > > >
> > > > On Thu, Apr 6, 2017 at 8:27 AM, Mark Adams <mfadams@lbl.gov> wrote:
> > > > On Tue, Apr 4, 2017 at 10:10 AM, Barry Smith <bsmith@mcs.anl.gov> wrote:
> > > > >
> > > > >> Does this mean that GAMG works for symmetric matrices only?
> > > > >
> > > > > No, it means that for a non-symmetric nonzero structure you need the extra flag. So use the extra flag. The reason we don't always use the flag is that it adds extra cost and isn't needed if the matrix already has a symmetric nonzero structure.
> > > >
> > > > BTW, if you have a symmetric nonzero structure you can just set
> > > > '-pc_gamg_threshold -1.0', note the "or" in the message.
> > > >
> > > > If you want to mess with the threshold then you need to use the
> > > > symmetrized flag.
> > > >
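
A minimal sketch of wiring up the two alternatives above from code instead of the command line, assuming PETSc 3.7 or later (where PetscOptionsSetValue takes the options database as its first argument); everything outside the two option calls is just a skeleton, not the actual application.

  #include <petscsys.h>

  int main(int argc, char **argv)
  {
    PetscInitialize(&argc, &argv, NULL, NULL);

    /* The flag discussed above: symmetrize the graph GAMG coarsens,
       needed when the nonzero structure is not symmetric. */
    PetscOptionsSetValue(NULL, "-pc_gamg_sym_graph", "true");

    /* The alternative mentioned above: a threshold of -1.0 drops nothing
       from the graph, so the symmetrized flag is not required. */
    /* PetscOptionsSetValue(NULL, "-pc_gamg_threshold", "-1.0"); */

    /* ... create and solve with SNES/KSP as usual; SNESSetFromOptions()
       or KSPSetFromOptions() picks these values up ... */

    PetscFinalize();
    return 0;
  }

Passing the same strings on the command line is equivalent; putting them in code only matters if an application wants these settings as its defaults.
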
> > >
> > > <asm.txt><gamg.txt>
> >