[petsc-users] Iterative solvers, MPI+GPU
Jed Brown
jed at jedbrown.org
Tue Feb 11 15:45:57 CST 2020
The short answer is yes, this works great, and your vectors never need
to leave the GPU (except via send/receive buffers that can hit the
network directly with GPU-aware MPI). If you have a shell
preconditioner, you're all set. If you want to use PETSc
preconditioners, we have some that run on GPUs, but not all are
well-suited to GPU architectures, and there is ongoing work to improve
performance for some important methods, such as algebraic multigrid (for
which setup is harder than the solve).
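The combination described above — a user-supplied distributed mat-vec wrapped as a shell matrix, a shell preconditioner, and GMRES — can be sketched roughly as below. This is a minimal, hypothetical sketch, not code from this thread: the two callbacks use the identity as a placeholder where the application's GPU kernels would go, and the local size `nlocal` is an arbitrary assumption.

```c
/* Sketch: matrix-free GMRES with a shell preconditioner in PETSc.
 * MyMult and MyPCApply are placeholders (identity) standing in for the
 * application's own distributed, GPU-enabled kernels. */
#include <petscksp.h>

/* User mat-vec callback: y = A*x. Replace the VecCopy with the
 * application's distributed GPU mat-vec. */
static PetscErrorCode MyMult(Mat A, Vec x, Vec y)
{
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  ierr = VecCopy(x, y);CHKERRQ(ierr); /* placeholder: A = I */
  PetscFunctionReturn(0);
}

/* Shell preconditioner callback: y = M^{-1} x. */
static PetscErrorCode MyPCApply(PC pc, Vec x, Vec y)
{
  PetscErrorCode ierr;
  PetscFunctionBeginUser;
  ierr = VecCopy(x, y);CHKERRQ(ierr); /* placeholder: M = I */
  PetscFunctionReturn(0);
}

int main(int argc, char **argv)
{
  Mat            A;
  KSP            ksp;
  PC             pc;
  Vec            b, x;
  PetscInt       nlocal = 100; /* assumed local size per rank */
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);if (ierr) return ierr;

  /* Shell matrix wrapping the user's mat-vec; no entries are stored. */
  ierr = MatCreateShell(PETSC_COMM_WORLD, nlocal, nlocal, PETSC_DETERMINE,
                        PETSC_DETERMINE, NULL, &A);CHKERRQ(ierr);
  ierr = MatShellSetOperation(A, MATOP_MULT, (void (*)(void))MyMult);CHKERRQ(ierr);

  /* GMRES with the shell preconditioner. */
  ierr = KSPCreate(PETSC_COMM_WORLD, &ksp);CHKERRQ(ierr);
  ierr = KSPSetOperators(ksp, A, A);CHKERRQ(ierr);
  ierr = KSPSetType(ksp, KSPGMRES);CHKERRQ(ierr);
  ierr = KSPGetPC(ksp, &pc);CHKERRQ(ierr);
  ierr = PCSetType(pc, PCSHELL);CHKERRQ(ierr);
  ierr = PCShellSetApply(pc, MyPCApply);CHKERRQ(ierr);
  ierr = KSPSetFromOptions(ksp);CHKERRQ(ierr);

  ierr = MatCreateVecs(A, &x, &b);CHKERRQ(ierr);
  ierr = VecSet(b, 1.0);CHKERRQ(ierr);
  ierr = KSPSolve(ksp, b, x);CHKERRQ(ierr);

  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&b);CHKERRQ(ierr);
  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = KSPDestroy(&ksp);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}
```

Because the callbacks only see Vec objects, the same code runs with GPU vectors by selecting them at runtime, e.g. `mpiexec -n 4 ./solver -vec_type cuda -ksp_monitor`; the callbacks then receive vectors whose data lives on the device.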
"McDaniel, Tyler via petsc-users" <petsc-users at mcs.anl.gov> writes:
> Hello,
>
> Our team at Oak Ridge National Laboratory requires a distributed and (ideally) GPU-enabled iterative solver as part of a new, high-dimensional PDE solver. We are exploring options for software packages with this capability vs. rolling our own, i.e., having some of our team members write one.
>
> Our code already has a distributed, GPU-enabled matrix-vector multiply that we'd like to use at the core of GMRES or a similar method. I've looked through the PETSc API and found that matrix-free methods are supported, and this page: https://www.mcs.anl.gov/petsc/features/gpus.html seems to indicate that GPU acceleration is available for iterative solvers.
>
> My question is: does PETSc support all of these things together? E.g., is it possible to use a distributed, matrix-free iterative solver with a shell preconditioner on the GPU with PETSc?
>
> Best,
>
> Tyler McDaniel