[petsc-users] Offloading linear solves in time stepper to GPU

Harshad Sahasrabudhe hsahasra at purdue.edu
Sat May 30 22:33:02 CDT 2015


>  Surely you're familiar with this.


Yes, I'm familiar with this. We are running on an Intel Xeon E5 processor,
which has sufficient memory bandwidth and performance. Also, we are
currently running on just one node.

> Is the poor scaling due to increased iteration count?  What method are you
> using?


Yes, the increased iteration count is exactly why we have poor scaling. So
far we have tried KSPGMRES.
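For reference, one quick way to confirm that the iteration count grows with the process count is to rerun with PETSc's monitoring options (a sketch; `./app` stands in for the actual executable, and option names should be checked against the PETSc version in use):

```
mpiexec -n 16 ./app -ksp_type gmres -ksp_monitor -ksp_converged_reason -log_summary
```

If the GMRES iteration count climbs as processes are added, a stronger preconditioner is usually the first thing to try before reaching for GPUs.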

> This sounds like a problem with your code (non-scalable data structure).


We need to work on the algorithm for matrix assembly. In its current
state, one CPU ends up doing most of the work. This could be the cause of
the bad memory scaling. However, it doesn't contribute to the bad scaling
of the time stepping, since the time taken for time stepping is counted
separately from assembly.

> How long does it take to solve that system stand-alone using MAGMA,
> including the data transfers?


I'm still working on these tests.

On Sat, May 30, 2015 at 11:22 PM, Jed Brown <jed at jedbrown.org> wrote:

> Harshad Sahasrabudhe <hsahasra at purdue.edu> writes:
>
> >>
> >> Is your intent to solve a problem that matters in a way that makes sense
> >> for a scientist or engineer
> >
> >
> > I want to see if we can speed up the time stepper for a large system
> > using GPUs. For large systems with a sparse matrix of size 420,000^2,
> > each time step takes 341 sec on a single process and 180 seconds on 16
> > processes. So the scaling isn't that good.
>
>  Surely you're familiar with this.
>
> http://www.mcs.anl.gov/petsc/documentation/faq.html#computers
>
> Is the poor scaling due to increased iteration count?  What method are
> you using?
>
> > We also run out of memory with more number of processes.
>
> This sounds like a problem with your code (non-scalable data structure).
>
> Also, the GPU doesn't have more memory than the CPU.
>
> How long does it take to solve that system stand-alone using MAGMA,
> including the data transfers?
>

