[petsc-users] Scaling/Preconditioners for Poisson equation

Matthew Knepley knepley at gmail.com
Mon Sep 29 15:51:31 CDT 2014


On Mon, Sep 29, 2014 at 3:47 PM, Filippo Leonardi <
filippo.leonardi at sam.math.ethz.ch> wrote:

> @ Barry: It may be that I forgot to set the number of levels for the runs.
>
> New experiment with the following options:
>
> -da_refine 5 -pc_type mg -ksp_monitor -log_summary -pc_mg_type full
> -ksp_view -pc_mg_log -pc_mg_levels 5 -pc_mg_galerkin
> -ksp_monitor_true_residual -ksp_converged_reason
>
> on 128^3, and it looks nice:
>
>   0 KSP Residual norm 5.584601494955e+01
>   0 KSP preconditioned resid norm 5.584601494955e+01 true resid norm
> 1.370259979011e+01 ||r(i)||/||b|| 1.000000000000e+00
>   1 KSP Residual norm 9.235021247277e+00
>   1 KSP preconditioned resid norm 9.235021247277e+00 true resid norm
> 8.185195475443e-01 ||r(i)||/||b|| 5.973461679404e-02
>   2 KSP Residual norm 6.344253555076e-02
>   2 KSP preconditioned resid norm 6.344253555076e-02 true resid norm
> 1.108015805992e-01 ||r(i)||/||b|| 8.086172134956e-03
>   3 KSP Residual norm 1.084530268454e-03
>   3 KSP preconditioned resid norm 1.084530268454e-03 true resid norm
> 3.228589340041e-03 ||r(i)||/||b|| 2.356187431214e-04
>   4 KSP Residual norm 2.345341850850e-05
>   4 KSP preconditioned resid norm 2.345341850850e-05 true resid norm
> 9.362117433445e-05 ||r(i)||/||b|| 6.832365811489e-06
> Linear solve converged due to CONVERGED_RTOL iterations 4
>
> I'll try on more processors. Am I doing this correctly? If so, is there
> some tweak I am missing?
>
> Any suggestions on the optimal number of levels vs. the number of processors?
>
> Btw, thanks a lot, you are always so helpful.
>
> On Monday 29 September 2014 14:59:49 you wrote:
> > Filippo Leonardi <filippo.leonardi at sam.math.ethz.ch> writes:
> > > Thank you.
> > >
> > > Actually I had the feeling that it wasn't my problem with Bjacobi and CG.
> > >
> > > So I'll stick to MG. The problem with MG is that there are a lot of
> > > parameters to be tuned, so I leave the defaults (except that I select
> > > CG as the Krylov method). I post just the results for 64^3 and 128^3.
> > > Tell me if I'm missing some useful detail. (I get similar results with
> > > BoomerAMG.)
> > >
> > > Time for one KSP iteration (-ksp_type cg -log_summary -pc_mg_galerkin
> > > -pc_type mg):
> > > 32^3 and 1 proc: 1.01e-1
> > > 64^3 and 8 proc: 6.56e-01
> > > 128^3 and 64 proc: 1.05e+00
> > > Number of PCSetup per KSPSolve:
> > > 15
> > > 39
> > > 65
> >
> > Presumably you mean PCApply.  Something is wrong here because this
> > iteration count is way too high.  Perhaps your boundary conditions are
> > nonsymmetric or interpolation is not compatible with the discretization.
> >
> > > With BoomerAMG:
> > > a stable 8 iterations per KSP solve, but the time per iteration is
> > > greater than with PETSc MG and still increases:
> > > 64^3:  3.17e+00
> > > 128^3: 9.99e+00
> >
> > > --> For instance with 64^3 (256 iterations):
> > In the first pass with geometric multigrid, don't worry about timing and
> > get the iterations figured out.  Are you using a cell-centered or
> > vertex-centered discretization?  When you say 128^3, is that counting
> > the number of elements or the number of vertices?  Note that if you have
> > a vertex-centered discretization, you will want a 129^3 grid.
>
> Cell-centered, counting elements.
>
> > With
> > PCMG, make sure you are getting the number of levels of refinement that
> > you expect.
>
>
>
> >
> > You should see something like the following (this is 193^3).
> >
> > $ mpiexec -n 4 ./ex45 -da_refine 5 -pc_type mg -ksp_monitor -pc_mg_type full \
> >     -mg_levels_ksp_type richardson -mg_levels_pc_type sor -ksp_type richardson
> >   0 KSP Residual norm 2.653722249919e+03
> >   1 KSP Residual norm 1.019366121923e+02
> >   2 KSP Residual norm 2.364558296616e-01
> >   3 KSP Residual norm 7.438761746501e-04
> > Residual norm 1.47939e-06
>
> >
> > You can actually do better than this by using higher order FMG
> > interpolation, by going matrix-free, etc.  For example, HPGMG
> > (finite-element or finite-volume, see https://hpgmg.org) will solve more
> > than a million equations/second per core.  Is your application really
> > solving the constant-coefficient Poisson problem on a Cartesian grid, or
> > is that just a test?
>
> I actually just need a cell-centered Poisson solver on Cartesian grids
> (with various boundary conditions).
>
> By matrix-free, do you mean AMG (like -pc_mg_galerkin)? Does it reach the
> same scalability as GMG?


No, Jed means calculating the action of the operator matrix-free instead of
using an assembled sparse matrix.
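
The usual mechanism for that in PETSc is a shell matrix: you register a
routine that applies the operator, and the Krylov method only ever calls
MatMult() on it. Here is a rough serial sketch for a 1D constant-coefficient
Laplacian (the context struct, names, and sizes are made up for illustration,
error checking is omitted, and the calls assume a reasonably recent PETSc
rather than 3.3):

  #include <petscksp.h>

  /* Context for the shell matrix: problem size and mesh spacing.  This is a
     purely illustrative serial 1D example, not the 3D problem from the thread. */
  typedef struct {
    PetscInt  n;
    PetscReal h;
  } LaplaceCtx;

  /* y = A x for the 1D constant-coefficient Laplacian with homogeneous
     Dirichlet boundaries, applied straight from the stencil; A is never stored. */
  static PetscErrorCode MatMult_Laplace(Mat A, Vec x, Vec y)
  {
    LaplaceCtx        *ctx;
    const PetscScalar *xa;
    PetscScalar       *ya;
    PetscInt           i;

    MatShellGetContext(A, &ctx);
    VecGetArrayRead(x, &xa);
    VecGetArray(y, &ya);
    for (i = 0; i < ctx->n; i++) {
      PetscScalar left  = (i > 0)          ? xa[i-1] : 0.0;
      PetscScalar right = (i < ctx->n - 1) ? xa[i+1] : 0.0;
      ya[i] = (2.0*xa[i] - left - right) / (ctx->h*ctx->h);
    }
    VecRestoreArrayRead(x, &xa);
    VecRestoreArray(y, &ya);
    return 0;
  }

  int main(int argc, char **argv)
  {
    LaplaceCtx ctx;
    Mat        A;
    Vec        b, u;
    KSP        ksp;
    PC         pc;
    PetscInt   n = 128;

    PetscInitialize(&argc, &argv, NULL, NULL);
    ctx.n = n;
    ctx.h = 1.0 / (n + 1);

    /* The KSP sees an ordinary Mat, but every MatMult() calls the routine above. */
    MatCreateShell(PETSC_COMM_SELF, n, n, n, n, &ctx, &A);
    MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MatMult_Laplace);

    MatCreateVecs(A, &u, &b);
    VecSet(b, 1.0);

    KSPCreate(PETSC_COMM_SELF, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetType(ksp, KSPCG);
    KSPGetPC(ksp, &pc);
    PCSetType(pc, PCNONE);  /* only PCs that need just the action of A work with a shell */
    KSPSetFromOptions(ksp);
    KSPSolve(ksp, b, u);

    KSPDestroy(&ksp);
    MatDestroy(&A);
    VecDestroy(&b);
    VecDestroy(&u);
    PetscFinalize();
    return 0;
  }

Note that SOR smoothing and -pc_mg_galerkin both need an assembled matrix, so
a fully matrix-free multigrid typically rediscretizes the operator on each
level and uses polynomial (Chebyshev/Jacobi) smoothers instead.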


>
> >
> > > Using Petsc Release Version 3.3.0, Patch 3, Wed Aug 29 11:26:24 CDT 2012
> >
> > And a reminder to please upgrade to the current version of PETSc.
>
> Sadly this is not within my power. I actually had to roll back all the API
> calls to be able to do these test runs.
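
For reference, the structure that lets -da_refine, -pc_type mg, -pc_mg_galerkin,
and friends do all of the work is the KSPSetDM() pattern Jed's ex45 run uses:
attach the DMDA to the KSP, provide callbacks that assemble the operator and
right-hand side on whatever grid the solver hands you, and let the options
database configure the rest. A rough 2D cell-centered sketch (the coarse-grid
size, callback names, and unit-coefficient stencil are only illustrative,
error checking is omitted, and the callback signatures assume a recent PETSc
rather than 3.3):

  #include <petscdmda.h>
  #include <petscksp.h>

  /* Fine-grid right-hand side: f = 1, times the cell area (illustrative). */
  static PetscErrorCode ComputeRHS(KSP ksp, Vec b, void *ctx)
  {
    DM            da;
    PetscInt      i, j, xs, ys, xm, ym, M, N;
    PetscScalar **ba;

    KSPGetDM(ksp, &da);
    DMDAGetInfo(da, NULL, &M, &N, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
    DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL);
    DMDAVecGetArray(da, b, &ba);
    for (j = ys; j < ys + ym; j++)
      for (i = xs; i < xs + xm; i++) ba[j][i] = (1.0 / M) * (1.0 / N);
    DMDAVecRestoreArray(da, b, &ba);
    return 0;
  }

  /* Cell-centered 5-point Laplacian on the unit square (uniform square cells),
     homogeneous Dirichlet walls eliminated into the diagonal.  This is called on
     every level unless -pc_mg_galerkin builds the coarse operators algebraically. */
  static PetscErrorCode ComputeMatrix(KSP ksp, Mat J, Mat Jpre, void *ctx)
  {
    DM       da;
    PetscInt i, j, xs, ys, xm, ym, M, N;

    KSPGetDM(ksp, &da);
    DMDAGetInfo(da, NULL, &M, &N, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
    DMDAGetCorners(da, &xs, &ys, NULL, &xm, &ym, NULL);
    for (j = ys; j < ys + ym; j++) {
      for (i = xs; i < xs + xm; i++) {
        MatStencil  row = {0}, col[5] = {{0}};
        PetscScalar v[5], diag = 0.0;
        PetscInt    nc = 0;
        row.i = i; row.j = j;
        /* -1 per interior face; each Dirichlet wall face adds +2 to the diagonal */
        if (i > 0)     { col[nc].i = i-1; col[nc].j = j;   v[nc++] = -1.0; diag += 1.0; } else diag += 2.0;
        if (i < M - 1) { col[nc].i = i+1; col[nc].j = j;   v[nc++] = -1.0; diag += 1.0; } else diag += 2.0;
        if (j > 0)     { col[nc].i = i;   col[nc].j = j-1; v[nc++] = -1.0; diag += 1.0; } else diag += 2.0;
        if (j < N - 1) { col[nc].i = i;   col[nc].j = j+1; v[nc++] = -1.0; diag += 1.0; } else diag += 2.0;
        col[nc].i = i; col[nc].j = j; v[nc++] = diag;
        MatSetValuesStencil(Jpre, 1, &row, nc, col, v, INSERT_VALUES);
      }
    }
    MatAssemblyBegin(Jpre, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(Jpre, MAT_FINAL_ASSEMBLY);
    return 0;
  }

  int main(int argc, char **argv)
  {
    DM  da;
    KSP ksp;

    PetscInitialize(&argc, &argv, NULL, NULL);
    /* 4x4-cell coarse grid; -da_refine n doubles it n times */
    DMDACreate2d(PETSC_COMM_WORLD, DM_BOUNDARY_NONE, DM_BOUNDARY_NONE, DMDA_STENCIL_STAR,
                 4, 4, PETSC_DECIDE, PETSC_DECIDE, 1, 1, NULL, NULL, &da);
    DMDASetInterpolationType(da, DMDA_Q0);  /* piecewise-constant interpolation for cell centers */
    DMSetFromOptions(da);
    DMSetUp(da);

    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetDM(ksp, da);                      /* lets -pc_type mg build the grid hierarchy */
    KSPSetComputeRHS(ksp, ComputeRHS, NULL);
    KSPSetComputeOperators(ksp, ComputeMatrix, NULL);
    KSPSetFromOptions(ksp);                 /* -ksp_type cg -pc_type mg -pc_mg_galerkin ... */
    KSPSolve(ksp, NULL, NULL);

    KSPDestroy(&ksp);
    DMDestroy(&da);
    PetscFinalize();
    return 0;
  }

If I remember right, DMDASetInterpolationType() with DMDA_Q0 is what gives
interpolation consistent with cell-centered unknowns (the KSP tutorials have
cell-centered examples along these lines); double-check that against the
manual pages for your version. With the DM attached, PCMG builds its grid
hierarchy by coarsening the DMDA, and -pc_mg_galerkin forms the coarse
operators as P^T A P instead of calling the assembly callback on each level.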


   Matt

-- 
What most experimenters take for granted before they begin their
experiments is infinitely more interesting than any results to which their
experiments lead.
-- Norbert Wiener