[petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes
Barry Smith
bsmith at mcs.anl.gov
Thu Jul 7 19:49:55 CDT 2016
> On Jul 7, 2016, at 7:06 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
>
>
>
> On Thu, Jul 7, 2016 at 4:34 PM, Richard Mills <richardtmills at gmail.com> wrote:
> On Fri, Jul 1, 2016 at 4:13 PM, Jeff Hammond <jeff.science at gmail.com> wrote:
> [...]
>
> Maybe I am just biased because I spend all of my time reading www.nextplatform.com, but I hear machine learning is becoming an important HPC workload. While the most hyped efforts relate to running inaccurate - the technical term is half-precision - dense matrix multiplication as fast as possible, I suspect that more elegant approaches will prevail. Presumably there is something that PETSc can do to enable machine learning algorithms. Since most of the existing approaches use silly programming models based on MapReduce, it can't be too hard for PETSc to do better.
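To make "inaccurate" concrete: IEEE binary16 carries an 11-bit significand (10 stored bits plus the implicit one), so its unit roundoff is 2^-11, roughly 4.9e-4, i.e. about three decimal digits. A minimal sketch of the rounding error, assuming a compiler and target with _Float16 support (e.g. recent GCC or Clang on x86-64 or AArch64):

    /* Round-trip a value through binary16 and report the relative
       error; it comes out near 3e-4 for pi, consistent with the
       2^-11 unit roundoff of half precision. */
    #include <stdio.h>

    int main(void)
    {
      float    x = 3.14159265f;
      _Float16 h = (_Float16)x;   /* round to nearest binary16 */
      printf("fp32:    %.7f\n", (double)x);
      printf("fp16:    %.7f\n", (double)h);
      printf("rel err: %g\n", (double)((x - (float)h)/x));
      return 0;
    }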
>
> "Machine learning" is definitely the hype du jour, but when that term gets thrown around, everyone is equating it with neural networks with a lot of layers ("deep learning"). That's why everyone is going on about half precision dense matrix multiplication, as low accuracy works fine for some of this stuff. The thing is, there are a a ton of machine-learning approaches out there that are NOT neural networks, and I worry that everyone is too ready to jump into specialized hardware for neural nets when maybe there are better approaches out there. Regarding machine learning approaches that use sparse matrix methods, I think that PETSc (plus SLEPc) provide pretty good building blocks right now for these, though there are probably things that could be better supported. But what machine learning approaches PETSc should target right now, I don't know. Program managers currently like terms like "neuromorphic computing"
It may be as much, or even more, the idiots who talk to program managers that like "neuromorphic computing".
> and half-precision computations seem to be the focus. (Though why stop there? Why not quarter precision?!!)
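On the sparse-matrix side, the building blocks Richard mentions are already enough for, say, the spectral-embedding step of spectral clustering: compute a few of the smallest eigenpairs of a graph Laplacian with SLEPc. A minimal sketch, with error checking (CHKERRQ) omitted and a path-graph Laplacian standing in for a real similarity graph:

    /* Hypothetical sketch: spectral embedding with PETSc+SLEPc,
       i.e. a few smallest eigenpairs of a graph Laplacian. */
    #include <slepceps.h>

    int main(int argc,char **argv)
    {
      Mat         L;
      EPS         eps;
      PetscInt    n = 100, i, Istart, Iend, nconv;
      PetscScalar lambda;

      SlepcInitialize(&argc,&argv,NULL,NULL);

      /* Stand-in Laplacian: 2 on the diagonal (1 at the ends),
         -1 on the off-diagonals */
      MatCreate(PETSC_COMM_WORLD,&L);
      MatSetSizes(L,PETSC_DECIDE,PETSC_DECIDE,n,n);
      MatSetFromOptions(L);
      MatSetUp(L);
      MatGetOwnershipRange(L,&Istart,&Iend);
      for (i=Istart; i<Iend; i++) {
        if (i>0)   MatSetValue(L,i,i-1,-1.0,INSERT_VALUES);
        if (i<n-1) MatSetValue(L,i,i+1,-1.0,INSERT_VALUES);
        MatSetValue(L,i,i,(i==0 || i==n-1) ? 1.0 : 2.0,INSERT_VALUES);
      }
      MatAssemblyBegin(L,MAT_FINAL_ASSEMBLY);
      MatAssemblyEnd(L,MAT_FINAL_ASSEMBLY);

      /* The smallest eigenpairs of the Laplacian give the embedding */
      EPSCreate(PETSC_COMM_WORLD,&eps);
      EPSSetOperators(eps,L,NULL);
      EPSSetProblemType(eps,EPS_HEP);
      EPSSetWhichEigenpairs(eps,EPS_SMALLEST_REAL);
      EPSSetDimensions(eps,4,PETSC_DEFAULT,PETSC_DEFAULT);
      EPSSetFromOptions(eps);
      EPSSolve(eps);

      EPSGetConverged(eps,&nconv);
      for (i=0; i<nconv; i++) {
        EPSGetEigenpair(eps,i,&lambda,NULL,NULL,NULL);
        PetscPrintf(PETSC_COMM_WORLD,"lambda_%D = %g\n",i,(double)PetscRealPart(lambda));
      }

      EPSDestroy(&eps);
      MatDestroy(&L);
      SlepcFinalize();
      return 0;
    }

(In practice one would run this with -eps_target 0 -st_type sinvert so the small end of the spectrum converges quickly.)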
>
>
> The Google TPU does quarter precision, i.e. 8-bit fixed-point [http://www.nextplatform.com/2016/05/19/google-takes-unconventional-route-homegrown-machine-learning-chips/], so the machine learning folks have already gone there. No need to speculate about it :-)
>
> Jeff
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
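To make the 8-bit point concrete, the usual scheme is symmetric fixed-point quantization with a wide integer accumulator, roughly what TPU-style inference engines do. A minimal sketch; the scale choice and the assumption |x| <= 1 are illustrative, not how any particular chip picks them:

    /* Quantize two small vectors to int8, take the dot product with
       an int32 accumulator, and dequantize; compare against fp32. */
    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    /* Map a float in [-1,1] to int8 with the given scale (127/max|x|) */
    static int8_t quantize(float x, float scale)
    {
      float q = roundf(x*scale);
      if (q >  127.0f) q =  127.0f;
      if (q < -128.0f) q = -128.0f;
      return (int8_t)q;
    }

    int main(void)
    {
      float   a[4] = {0.12f, -0.80f, 0.33f,  0.57f};
      float   b[4] = {0.91f,  0.05f, -0.40f, 0.26f};
      float   scale = 127.0f;   /* max|x| assumed to be 1 */
      int32_t acc = 0;          /* int8*int8 products summed in int32 */
      float   exact = 0.0f;

      for (int i = 0; i < 4; i++) {
        acc   += (int32_t)quantize(a[i],scale)*(int32_t)quantize(b[i],scale);
        exact += a[i]*b[i];
      }
      /* Dequantize by the product of the two scales */
      printf("int8 dot = %g, fp32 dot = %g\n",(double)acc/(scale*scale),(double)exact);
      return 0;
    }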