[petsc-dev] Integrating PFLOTRAN, PETSC & SAMRAI
Boyce Griffith
griffith at cims.nyu.edu
Tue Jun 7 14:12:39 CDT 2011
On 6/7/11 2:32 PM, Jed Brown wrote:
> On Tue, Jun 7, 2011 at 20:17, Boyce Griffith <griffith at cims.nyu.edu> wrote:
>
> At one point, I tried implementing the local part of the matrix-free
> matrices using PETSc Mats, but performance was modestly lower than a
> pure Fortran implementation, and so I abandoned this approach. This was
> for a cell-centered Poisson problem using a 5/7-point finite-volume
> stencil. I think this held even in the case where there was only one
> patch per processor --- i.e., contiguous local storage --- although
> I may be misremembering.
>
> What do you mean? Matrix-free _should_ be much faster in cases like this
> because there's much less memory to deal with. The assembled matrix is
> useful for other things (like preconditioning), but it's a slow way to
> do MatMult(). The tradeoffs differ somewhat between discretizations:
> assembled matrices are good for lowest-order FEM, or (often) if you
> actually solve Riemann problems.
In fact, it was the tests of assembled-versus-unassembled performance
that you described on the libMesh list that led me to try this out in
the first place. My (possibly faulty) reasoning was that if there could
be some performance advantage to using assembled matrices for low-order
FE discretizations, then perhaps there also might be some for low-order
FD/FV discretizations. At least in what I was doing, there did not
appear to be any benefit, although matrix-free was not dramatically
faster either.
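The comparison being discussed can be sketched outside PETSc: for a 2D 5-point Poisson stencil, the matrix-free application and the assembled sparse matvec compute the same product, and the difference is purely in storage and memory traffic. Below is a minimal NumPy/SciPy sketch, not from the thread; the grid size, scaling, and homogeneous Dirichlet boundary treatment are illustrative assumptions.

```python
# Sketch (illustrative, not PETSc): assembled vs. matrix-free application
# of the standard 5-point Laplacian on an n x n interior grid.
import numpy as np
import scipy.sparse as sp

n = 32                      # interior cells per dimension (assumed)
h = 1.0 / (n + 1)           # uniform grid spacing (assumed)

# Assembled operator: 5-point Laplacian built from Kronecker products.
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsr() / h**2

def apply_stencil(u_flat):
    """Matrix-free 'MatMult': apply the 5-point stencil directly,
    using a zero ghost layer for homogeneous Dirichlet boundaries."""
    u = u_flat.reshape(n, n)
    up = np.pad(u, 1)       # pads with zeros by default
    out = (4.0 * up[1:-1, 1:-1]
           - up[:-2, 1:-1] - up[2:, 1:-1]
           - up[1:-1, :-2] - up[1:-1, 2:]) / h**2
    return out.ravel()

u = np.random.default_rng(0).standard_normal(n * n)
# The two forms of the operator agree to rounding error:
assert np.allclose(A @ u, apply_stencil(u))
```

The assembled form stores roughly 5*n^2 nonzeros plus index arrays, while the stencil version stores nothing beyond the vector itself, which is the memory-traffic argument for matrix-free MatMult in the quoted message.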
-- Boyce