[petsc-users] Offloading linear solves in time stepper to GPU
Harshad Sahasrabudhe
hsahasra at purdue.edu
Sat May 30 22:14:19 CDT 2015
>
> Is your intent to solve a problem that matters in a way that makes sense
> for a scientist or engineer
I want to see if we can speed up the time stepper for a large system using
GPUs. For a large system with a sparse matrix of size 420,000 x 420,000, each
time step takes 341 seconds on a single process and 180 seconds on 16
processes, so the scaling isn't good. We also run out of memory with a larger
number of processes.
On Sat, May 30, 2015 at 11:01 PM, Jed Brown <jed at jedbrown.org> wrote:
> Harshad Sahasrabudhe <hsahasra at purdue.edu> writes:
> > For now, I want to serialize the matrices and vectors and offload them
> > to 1 GPU from the root process. Then distribute the result later.
>
> Unless you have experience with these solvers and the overheads
> involved, I think you should expect this to be much slower than simply
> doing the solves using a reasonable method on the CPU. Is your intent
> to solve a problem that matters in a way that makes sense for a
> scientist or engineer, or is it to demonstrate that a particular
> combination of packages/methods/hardware can be used?
>
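For reference, the gather-to-rank-0 step Harshad describes can be sketched with PETSc's scatter-to-zero machinery. This is only a hedged sketch, not a complete program: error checking is elided, the actual solve is indicated by a comment, and gathering the matrix itself (as opposed to the vectors) is not shown.

```c
/* Sketch: gather a distributed Vec onto rank 0, solve there, and scatter
 * the result back to the parallel layout.  Assumes PETSc; error checking
 * and the GPU-side solve itself are elided. */
#include <petscksp.h>

PetscErrorCode SolveOnRoot(Vec bpar, Vec xpar)
{
  VecScatter  scat;
  Vec         bseq;   /* sequential copy on rank 0 (length 0 elsewhere) */
  PetscMPIInt rank;

  PetscFunctionBeginUser;
  MPI_Comm_rank(PETSC_COMM_WORLD, &rank);

  /* Gather the distributed right-hand side onto rank 0. */
  VecScatterCreateToZero(bpar, &scat, &bseq);
  VecScatterBegin(scat, bpar, bseq, INSERT_VALUES, SCATTER_FORWARD);
  VecScatterEnd(scat, bpar, bseq, INSERT_VALUES, SCATTER_FORWARD);

  if (rank == 0) {
    /* Solve the serialized system here, e.g. with a KSP whose Mat/Vec
       types are GPU-enabled.  Overwriting bseq with the solution lets the
       reverse scatter below distribute the result. */
  }

  /* Distribute the sequential solution back to the parallel layout. */
  VecScatterBegin(scat, bseq, xpar, INSERT_VALUES, SCATTER_REVERSE);
  VecScatterEnd(scat, bseq, xpar, INSERT_VALUES, SCATTER_REVERSE);

  VecScatterDestroy(&scat);
  VecDestroy(&bseq);
  PetscFunctionReturn(0);
}
```

Note that this pattern illustrates exactly the overhead Jed warns about: every time step pays for a full gather and scatter of the vectors across all processes before any GPU work begins.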