[petsc-users] solving multiple linear systems with same matrix (sequentially, not simultaneously)
Barry Smith
bsmith at mcs.anl.gov
Sun Feb 22 12:30:34 CST 2015
> On Feb 22, 2015, at 12:14 PM, Daniel Goldberg <dngoldberg at gmail.com> wrote:
>
> Hi Barry
>
> Thank you for the reply. From a test I did with a linear system of about 400,000 unknowns on 128 cores, I clocked the setup at about 2% of the solve time
This is because the block Jacobi/ILU(0) preconditioner is very cheap to build. GAMG takes longer to build but will give much faster convergence for each solve.
> -- but it will be a larger fraction with smaller systems.
>
> If you mean the differential operator -- it is elliptic. The equations are based on those of a 3D shear-thinning Stokes fluid, but simplified by various approximations, including small aspect ratio, so the equations are essentially two-dimensional (sometimes called the Shallow Shelf Approximation, and similar to the equations solved in http://onlinelibrary.wiley.com/doi/10.1029/2008JF001179/abstract). The linear system I solve is a step in an iterative solution of the nonlinear equations.
>
> I will try gamg as I know it can reduce the number of CG iterations required. (I'm guessing you mean algebraic, not geometric?)
By default GAMG is an algebraic multigrid preconditioner. Look at its documentation at http://www.mcs.anl.gov/petsc/petsc-master/docs/index.html; it will be a bit better than the older documentation. The documentation for GAMG is still pretty thin, so feel free to ask questions.
Start with -pc_type gamg -ksp_monitor -ksp_type cg -ksp_norm_type unpreconditioned
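Taken together with the earlier advice to keep the Mat, PC, and KSP for all ~40 solves, the overall pattern looks roughly like the sketch below. This is an untested sketch, not a complete program: `nrhs`, `b[i]`, and `x[i]` are placeholder names for the model's vectors, `A` is assumed already assembled for the current timestep, and the three-argument KSPSetOperators assumes PETSc >= 3.5.

```c
Mat A;               /* assembled once per timestep, outside this sketch */
Vec b[40], x[40];    /* right-hand sides and solutions (placeholders) */
KSP ksp;
PetscInt i, nrhs = 40;

KSPCreate(PETSC_COMM_WORLD, &ksp);
KSPSetOperators(ksp, A, A);   /* PETSc >= 3.5 signature */
KSPSetFromOptions(ksp);       /* picks up -ksp_type cg -pc_type gamg ... */
for (i = 0; i < nrhs; i++)
  KSPSolve(ksp, b[i], x[i]);  /* PC is built on the first solve, reused after */
KSPDestroy(&ksp);
```

The preconditioner setup happens inside the first KSPSolve() and is reused for the remaining solves as long as the operator is not changed, which is what amortizes the setup cost over the whole timestep.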
>
> Am I correct in thinking that, for subsequent solves with the same KSP object, the time of the solve should scale with the number of conj grad iterations?
Yes
> Will this time be relatively independent of the preconditioner type?
No, it will be much faster for some preconditioners than others.
Barry
>
> Thanks
> Dan
>
> On Sun, Feb 22, 2015 at 5:54 PM, Barry Smith <bsmith at mcs.anl.gov> wrote:
>
> You should definitely keep the matrix, PC, and KSP for all 40 of the solves. This will eliminate the matrix and preconditioner setup time for the last 39 solves.
>
> What kind of operator do you have? ILU(k) is a general purpose preconditioner but not particularly good. I would recommend trying -pc_type gamg next.
>
> Barry
>
>
> > On Feb 22, 2015, at 11:00 AM, Daniel Goldberg <dngoldberg at gmail.com> wrote:
> >
> > Hello
> >
> > In the code I am developing, there is a point where I will need to solve a linear system with the same matrix but different right hand sides. It is a time-dependent model and the matrix in question will change each timestep, but for a single timestep ~30-40 linear solves will be required, with the same matrix but differing right hand sides. Each right hand side depends on the previous, so they cannot all be solved simultaneously.
> >
> > Generally I solve the system via CG with block Jacobi / ILU(0) preconditioning. I don't persist the matrix between solves (i.e., I destroy the Mat, KSP, and Vec objects after each linear solve) because I do not get the feeling this makes a large difference to performance, and it is easier when dealing with the rest of the model code, which does not use PETSc. The matrix is positive definite with about 18 nonzero elements per row, and generally the linear system has fewer than 1 million degrees of freedom.
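[For reference, the solver described here corresponds to run-time options along these lines -- my guess at the exact flags, since block Jacobi with an ILU(0) sub-solver is the PETSc default sub-preconditioner and the level count defaults to 0:]

```
-ksp_type cg -pc_type bjacobi -sub_pc_type ilu -sub_pc_factor_levels 0
```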
> >
> > If my matrix were dense, then likely retaining the LU factors for forward/backward substitution would be a good strategy (though I'm not sure whether this is easily doable using PETSc direct solvers). But given the sparsity, I am unsure whether I can take advantage of using the same matrix 30-40 times in a row.
> >
> > The comments in ex16.c state that when a KSP object is used multiple times for a linear solve, the preconditioner operations are not redone each time -- and so I figured that if I changed my preconditioning parameters I might be able to make subsequent solves with the same KSP object faster. I tried an experiment in which I increased -pc_factor_levels, and then created a number of random vectors, e.g. vec_1, vec_2, vec_3..., and called
> >
> > KSPSolve(ksp, vec_i, solution_i)
> >
> > sequentially with the same ksp object. I did see a decrease in the number of CG iterations required as -pc_factor_levels was increased, as well as an increase in time for the first linear solve, but no decrease in time required for subsequent solves. *Should* there be a timing decrease? And more generally, is there a way to exploit the fact that I'm solving 40 linear systems with the same matrix?
> >
> > Apologies for not providing a copy of the relevant bits of code -- this is my first email to the group, and a rather long one already -- happy to be more concrete about anything I've said above.
> >
> > Thanks
> > Dan
> >
> > --
> >
> > Daniel Goldberg, PhD
> > Lecturer in Glaciology
> > School of Geosciences, University of Edinburgh
> > Geography Building, Drummond Street, Edinburgh EH8 9XP
> >
> >
> > em: Dan.Goldberg at ed.ac.uk
> > web: http://ocean.mit.edu/~dgoldberg
>
>
>
>
> --
>
> Daniel Goldberg, PhD
> Lecturer in Glaciology
> School of Geosciences, University of Edinburgh
> Geography Building, Drummond Street, Edinburgh EH8 9XP
>
>
> em: Dan.Goldberg at ed.ac.uk
> web: http://ocean.mit.edu/~dgoldberg