[petsc-dev] Soliciting suggestions for linear solver work under SciDAC 4 Institutes

Richard Mills richardtmills at gmail.com
Thu Jul 7 20:22:06 CDT 2016


On Thu, Jul 7, 2016 at 5:06 PM, Jeff Hammond <jeff.science at gmail.com> wrote:

>
>
> On Thu, Jul 7, 2016 at 4:34 PM, Richard Mills <richardtmills at gmail.com>
> wrote:
>
>> On Fri, Jul 1, 2016 at 4:13 PM, Jeff Hammond <jeff.science at gmail.com>
>> wrote:
>>
>>> [...]
>>>
>>> Maybe I am just biased because I spend all of my time reading
>>> www.nextplatform.com, but I hear machine learning is becoming an
>>> important HPC workload.  While the most hyped efforts relate to running
>>> inaccurate - the technical term is half-precision - dense matrix
>>> multiplication as fast as possible, I suspect that more elegant approaches
>>> will prevail.  Presumably there is something that PETSc can do to enable
>>> machine learning algorithms.  As most of the existing approaches use silly
>>> programming models based on MapReduce, it can't be too hard for PETSc to do
>>> better.
>>>
>>
>> "Machine learning" is definitely the hype du jour, but when that term
>> gets thrown around, everyone is equating it with neural networks with a lot
>> of layers ("deep learning").  That's why everyone is going on about half
>> precision dense matrix multiplication, as low accuracy works fine for some
>> of this stuff.  The thing is, there are a a ton of machine-learning
>> approaches out there that are NOT neural networks, and I worry that
>> everyone is too ready to jump into specialized hardware for neural nets
>> when maybe there are better approaches out there.  Regarding machine
>> learning approaches that use sparse matrix methods, I think that PETSc
>> (plus SLEPc) provide pretty good building blocks right now for these,
>> though there are probably things that could be better supported.  But what
>> machine learning approaches PETSc should target right now, I don't know.
>> Program managers are currently fond of terms like "neuromorphic computing",
>> and half-precision computations seem to be the focus.  (Though why stop there?
>> Why not quarter precision?!!)
>>
>>
> The Google TPU does quarter precision, i.e., 8-bit fixed-point [
> http://www.nextplatform.com/2016/05/19/google-takes-unconventional-route-homegrown-machine-learning-chips/],
> so the machine learning folks have already gone there.  No need to
> speculate about it :-)
>

How wonderfully retro!  I remember doing stuff like this for 3D graphics,
back in the day when floating point was way too expensive, so we had to do
it all with fixed-point calculations.  I guess I'm getting pretty old in
computing years...
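
For anyone who never had the pleasure, here is a minimal sketch of what I
mean.  This assumes a hypothetical Q16.16 format (16 integer bits, 16
fractional bits) rather than any particular graphics library; the TPU's
8-bit flavor is the same idea with a narrower type:

#include <stdint.h>

/* Hypothetical Q16.16 fixed-point number. */
typedef int32_t fix16;
#define FIX16_ONE (1 << 16)

static inline fix16 fix16_from_float(float f) { return (fix16)(f * FIX16_ONE); }
static inline float fix16_to_float(fix16 x)   { return (float)x / (float)FIX16_ONE; }

/* Multiply: widen to 64 bits so the intermediate product cannot overflow,
   then shift the extra 16 fractional bits back out. */
static inline fix16 fix16_mul(fix16 a, fix16 b)
{
  return (fix16)(((int64_t)a * (int64_t)b) >> 16);
}

All the precision you get is whatever survives that final shift, which is
exactly why accuracy-tolerant workloads like inference get away with it.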
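And to make my earlier point about sparse building blocks a bit more
concrete, here is the sort of kernel I have in mind.  This is just a sketch
(the tridiagonal Laplacian and the sizes are made up for illustration), but
every call is stock PETSc:

#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat            A;
  Vec            x, y;
  PetscInt       i, n = 4;
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc, &argv, NULL, NULL);CHKERRQ(ierr);

  /* A small sparse matrix in AIJ (CSR) format: a 1D Laplacian stencil. */
  ierr = MatCreateSeqAIJ(PETSC_COMM_SELF, n, n, 3, NULL, &A);CHKERRQ(ierr);
  for (i = 0; i < n; i++) {
    ierr = MatSetValue(A, i, i, 2.0, INSERT_VALUES);CHKERRQ(ierr);
    if (i > 0)     {ierr = MatSetValue(A, i, i-1, -1.0, INSERT_VALUES);CHKERRQ(ierr);}
    if (i < n - 1) {ierr = MatSetValue(A, i, i+1, -1.0, INSERT_VALUES);CHKERRQ(ierr);}
  }
  ierr = MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);
  ierr = MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);CHKERRQ(ierr);

  ierr = VecCreateSeq(PETSC_COMM_SELF, n, &x);CHKERRQ(ierr);
  ierr = VecSet(x, 1.0);CHKERRQ(ierr);
  ierr = VecDuplicate(x, &y);CHKERRQ(ierr);

  /* y = A*x: the kernel underneath iterative eigensolvers, spectral
     clustering, label propagation, and the like. */
  ierr = MatMult(A, x, y);CHKERRQ(ierr);
  ierr = VecView(y, PETSC_VIEWER_STDOUT_SELF);CHKERRQ(ierr);

  ierr = MatDestroy(&A);CHKERRQ(ierr);
  ierr = VecDestroy(&x);CHKERRQ(ierr);
  ierr = VecDestroy(&y);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return ierr;
}

Swap in a real graph and hand A to SLEPc's eigensolvers and you have the
core of spectral clustering; the building blocks really are there already.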

--Richard


>
> Jeff
>
> --
> Jeff Hammond
> jeff.science at gmail.com
> http://jeffhammond.github.io/
>