[petsc-dev] Question About Petsc ILU

Dave Nystrom dnystrom1 at comcast.net
Thu Dec 29 11:45:24 CST 2011


Jed Brown writes:
 > On Thu, Dec 29, 2011 at 11:01, Dave Nystrom <dnystrom1 at comcast.net> wrote:
 > 
 > > I have recently added the capability to have a separate preconditioning
 > > matrix in the petsc interface for the code I am working with.  I have two
 > > types of preconditioning matrices that I have questions about.  One is
 > > tridiagonal and the other has 7 diagonals.  In both cases, the diagonals
 > > are all lexically adjacent.  Or, phrased differently, the tridiagonal
 > > matrix has a bandwidth of 3 and the 7-diagonal matrix has a bandwidth of
 > > 7, so they are compact or dense band systems.
 > >
 > > I was wondering what petsc ilu will do for preconditioning matrices like
 > > these.  Will it produce an exact lu factorization or a nearly exact
 > > factorization?
 > 
 > Yes
 > 
 > > I'm interested in the answer to this question because I am thinking I
 > > might be able to run this preconditioner on the gpu using the txpetscgpu
 > > package.
 > 
 > Likely pointless because this solve is probably not a big part of run-time.
 > The bigger concern is the convergence rate that you lose by using this
 > approximation. Matt and I mentioned this the last time you brought it up,
 > but I recommend getting familiar with the literature. Then, get the math
 > straight before looking for ways to transform into problems that you can
 > run fast on a GPU or whatever. If you just optimize kernels, you're likely
 > to optimize something that takes a small part of run-time and isn't really
 > helping you anyway.

The application code, of which I am not the primary author, is a 2d resistive
MHD code (with extensions) for magnetized fusion problems.  Magnetized fusion
problems tend to be highly anisotropic, with qualitatively different physics
parallel and perpendicular to the magnetic field.  The code has three
approaches to solving the linear systems.  The first is a suite of native cg
solvers that use a cholesky factorization of the inner band of the matrix as
the preconditioner.  As noted, this preconditioner ignores the coupling to
the second coordinate, yet it produces the lowest iteration counts of the
various preconditioning approaches I have tried.
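
If I understand the petsc options correctly, the rough analogue of that
native solver in petsc terms would be cg with a direct cholesky factorization
of the separate band preconditioning matrix, i.e. something like (illustrative
options only, not what the native code literally does):

    -ksp_type cg -pc_type cholesky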

The second approach uses the agmg package, which works well on all of the
linear solves except for the Hall matrix, where agmg fails to produce a
solution.

The third approach is an interface to petsc that I have added in the last few
months.  So far, I have gotten really good results with petsc on all of the
linear systems except for the Hall matrix.  In general, the fastest solution
on these other systems has been petsc with jacobi preconditioning running on
the gpu.  However, I think I may be able to do even better on them by giving
petsc the separate preconditioning matrix and using ilu plus txpetscgpu.
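
Concretely, what I have in mind on the petsc side is something like the
sketch below.  This is just my own illustration, not code from the
application; A, P, b, and x stand for the already assembled operator, the
separate band preconditioning matrix, and the right-hand-side and solution
vectors, and the calls are in the petsc-3.2 style.  Since the exact lu
factors of a matrix whose band is completely filled have no fill outside that
band, ilu(0) on the band matrix should drop nothing and reproduce the
complete factorization, which is what I was asking about above.

    KSP            ksp;
    PC             pc;
    PetscErrorCode ierr;

    ierr = KSPCreate(PETSC_COMM_WORLD,&ksp);CHKERRQ(ierr);
    /* solve with the full operator A, precondition with the band matrix P */
    ierr = KSPSetOperators(ksp,A,P,SAME_NONZERO_PATTERN);CHKERRQ(ierr);
    ierr = KSPSetType(ksp,KSPCG);CHKERRQ(ierr);
    ierr = KSPGetPC(ksp,&pc);CHKERRQ(ierr);
    /* ilu(0) of P; for a symmetric band, icc or cholesky would also do */
    ierr = PCSetType(pc,PCILU);CHKERRQ(ierr);
    ierr = PCFactorSetLevels(pc,0);CHKERRQ(ierr);
    ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);  /* allow command-line overrides */
    ierr = KSPSolve(ksp,b,x);CHKERRQ(ierr);
    ierr = KSPDestroy(&ksp);CHKERRQ(ierr);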

Having invested the time to interface to petsc, I am now trying to explore
the vast set of options in the petsc toolbox to see what works best for my
problems.  Solving the Hall matrix, especially at larger mesh sizes, remains
problematic.  The matrix is very ill conditioned, and the formulation is
complicated enough that perhaps there is simply a bug somewhere.  The lead
author of the code is investigating those questions and whether there are
better ways to formulate or solve the Hall system, while I am exploring what
petsc has to offer for solving it.
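
By exploring the toolbox I mean command-line sweeps along these lines
(illustrative options only), varying the solver and preconditioner while
watching the convergence and timing output:

    -ksp_type gmres -pc_type ilu -pc_factor_levels 1 \
    -ksp_monitor -ksp_converged_reason -log_summary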

Anyway, I have so far gotten good performance gains using the gpu with petsc,
but I am open to anything and am trying to explore the whole petsc toolbox.


