[petsc-dev] A more positive thread

Paul Mullowney paulm at txcorp.com
Fri Feb 10 10:11:24 CST 2012


Congrats! Given that I have a DOE Phase II award to continue PETSc GPU 
development, we should probably all get on the same page.

If you're looking to do multi-GPU computing, I've already effectively 
redesigned MatMult to get very good strong scaling and excellent 
performance: some matrices reach 40+ GFlop/s on 4 GPUs, about 20 GFlop/s 
on 2 GPUs, ...
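Roughly, the design keeps PETSc's usual row-distributed MPIAIJ split and 
overlaps the halo exchange with the local sparse multiply on each GPU. A 
minimal sketch of that pattern, written against current PETSc APIs (the 
function and argument names here are hypothetical stand-ins; the real code 
also manages host/device staging of the halo values):

/* Sketch of the communication/computation overlap behind a multi-GPU
 * MatMult, mirroring PETSc's MPIAIJ layout: each process holds a
 * "diagonal" block Ad (locally owned rows x locally owned columns) and
 * an "off-diagonal" block Ao (locally owned rows x halo columns gathered
 * from other processes/GPUs). */
#include <petscmat.h>

PetscErrorCode MatMultOverlapSketch(Mat Ad, Mat Ao, VecScatter halo,
                                    Vec x, Vec xhalo, Vec y)
{
  PetscFunctionBeginUser;
  /* Start gathering the halo entries of x; the transfer can proceed
     while the GPU works on the local block. */
  PetscCall(VecScatterBegin(halo, x, xhalo, INSERT_VALUES, SCATTER_FORWARD));
  /* Local SpMV: y = Ad * x (runs on the GPU for GPU matrix types). */
  PetscCall(MatMult(Ad, x, y));
  /* Finish the halo exchange, then add the off-process contributions:
     y += Ao * xhalo. */
  PetscCall(VecScatterEnd(halo, x, xhalo, INSERT_VALUES, SCATTER_FORWARD));
  PetscCall(MatMultAdd(Ao, xhalo, y, y));
  PetscFunctionReturn(PETSC_SUCCESS);
}

Hiding the scatter behind the local-block multiply is what preserves 
strong scaling as the per-GPU partition shrinks.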

-Paul

> Pramod (who did the work) was my student at EPCC last year. We looked at porting Fluidity (or parts of it) to GPUs and, as a result, decided that working on PETSc's GPU support was the most promising angle for the time we had available. We never got around to benchmarking Fluidity with Pramod's extended sparse-matrix-format support, but it shouldn't take too much effort.
>
> If you have any questions about it all, feel free to ask.
>
> cheers,
> Michele
> On 10 Feb 2012, at 15:57, Matthew Knepley wrote:
>
>> Has anyone taken a look at this:
>>
>>    http://www.epcc.ed.ac.uk/wp-content/uploads/2011/11/PramodKumbhar.pdf
>>
>> It is from the Fluidity people, I think. I have just gotten an NSF award with Dan Negrut
>> in Wisconsin and Ahmed Sameh at Purdue to port Ahmed's SPIKE preconditioner
>> (you may have heard Olaf Schenk talk about this) to PETSc, and in particular to
>> PETSc's GPU backend so that Dan can run it on his GPU cluster. Thus, we will
>> be seriously stressing this feature in the near future.
>>
>>      Matt
>>
>> -- 
>> What most experimenters take for granted before they begin their experiments is infinitely more interesting than any results to which their experiments lead.
>> -- Norbert Wiener
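
For anyone curious what the PETSc side of such a port looks like: a custom 
preconditioner typically enters through a shell PC. Below is a minimal, 
hypothetical sketch against current PETSc APIs -- SpikeApply is only a 
placeholder (it copies the residual); a real port would call into the 
SPIKE library there, and the banded solve could run on the GPU.

#include <petscksp.h>

/* Hypothetical stand-in for the SPIKE solve; a real port would stash a
 * SPIKE context via PCShellSetContext() and call the library here. */
static PetscErrorCode SpikeApply(PC pc, Vec r, Vec z)
{
  PetscFunctionBeginUser;
  PetscCall(VecCopy(r, z)); /* placeholder: identity preconditioner */
  PetscFunctionReturn(PETSC_SUCCESS);
}

int main(int argc, char **argv)
{
  Mat      A;
  Vec      b, x;
  KSP      ksp;
  PC       pc;
  PetscInt i, rstart, rend, N = 100;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  /* 1-D Laplacian as a stand-in operator */
  PetscCall(MatCreate(PETSC_COMM_WORLD, &A));
  PetscCall(MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, N, N));
  PetscCall(MatSetFromOptions(A)); /* e.g. -mat_type aijcusparse on GPUs */
  PetscCall(MatSeqAIJSetPreallocation(A, 3, NULL));
  PetscCall(MatMPIAIJSetPreallocation(A, 3, NULL, 1, NULL));
  PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
  for (i = rstart; i < rend; i++) {
    if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
    PetscCall(MatSetValue(A, i, i, 2.0, INSERT_VALUES));
    if (i < N - 1) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
  }
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatCreateVecs(A, &x, &b));
  PetscCall(VecSet(b, 1.0));

  PetscCall(KSPCreate(PETSC_COMM_WORLD, &ksp));
  PetscCall(KSPSetOperators(ksp, A, A));
  PetscCall(KSPGetPC(ksp, &pc));
  PetscCall(PCSetType(pc, PCSHELL));
  PetscCall(PCShellSetApply(pc, SpikeApply));
  PetscCall(PCShellSetName(pc, "spike-sketch"));
  PetscCall(KSPSetFromOptions(ksp));
  PetscCall(KSPSolve(ksp, b, x));

  PetscCall(KSPDestroy(&ksp));
  PetscCall(MatDestroy(&A));
  PetscCall(VecDestroy(&x));
  PetscCall(VecDestroy(&b));
  PetscCall(PetscFinalize());
  return 0;
}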