It’s been an interesting year in which I’ve been exposed to far more neuroscience than ever before. What I’ve learnt, plus other news I’ve absorbed during the year, has helped to clarify my thinking on the future of AI. Let’s begin with computer power. I recently gave a talk at the Gatsby Unit on the singularity in which I used the following graph showing the estimated LINPACK scores of the fastest computers over the last 50 years:
[Top supercomputer LINPACK performance in FLOPS, 1960–2020]
…First observation: just like the people who told me in 1990 that exponential growth in supercomputer power couldn’t continue for another decade, the people who told me this in 2000 were again completely wrong. Ha ha, told you so! So let me make another prediction: for the next decade this pattern will once again roughly hold, taking us to about 10^18 FLOPS by 2020.
…Third observation: it looks like we’re heading towards 10^20 FLOPS before 2030, even if things slow down a bit from 2020 onwards…Desktop performance is also continuing this trend. I recently saw that a PC with just 2 high end graphics cards is around 10^13 FLOPS of SGEMM performance. I also read a paper recently showing that less powerful versions of these cards lead to around 100× performance increases over CPU computation when learning large deep belief networks.
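The extrapolation behind these numbers is just compound doubling. A minimal sketch, using illustrative round figures rather than exact LINPACK records (roughly 10^15 FLOPS around 2010, doubling about every 13 months, which is in the ballpark of the historical trend):

```python
def extrapolate_flops(start_year, start_flops, doubling_months, target_year):
    """Project peak machine performance forward under a simple
    exponential model with a constant doubling time."""
    months = (target_year - start_year) * 12
    return start_flops * 2 ** (months / doubling_months)

# Assumed starting point: ~10^15 FLOPS in 2010 (illustrative, not an
# exact record), doubling every ~13 months.
projection_2020 = extrapolate_flops(2010, 1e15, 13, 2020)
projection_2030 = extrapolate_flops(2010, 1e15, 13, 2030)
print(f"2020: {projection_2020:.1e}, 2030: {projection_2030:.1e}")
```

With these assumptions the model lands near 10^18 by 2020 and a few times 10^20 by 2030, which is why even a modest slowdown after 2020 still leaves 10^20 within reach before 2030.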
…Conclusion: computer power is unlikely to be the limiting factor for AGI any more. The main question is whether we can find the right algorithms. Of course, more computer power gives us a more powerful tool with which to hunt for the right algorithms, and it also allows any algorithms we find to be less efficient. Thus growth in computer power will continue to be an important factor.
Having dealt with computation, now we get to the algorithm side of things. One of the big things influencing me this year has been learning how much we understand about how the brain works, in particular, how much we know that should be of interest to AGI designers. I won’t get into it all here, but suffice to say that even a brief outline of all this information would be a 20-page journal paper (there is currently a suggestion that I write such a paper next year with some Gatsby Unit neuroscientists, but for the time being I’ve got too many other things to attend to). At a high level what we are seeing in the brain is a fairly sensible looking AGI design. You’ve got hierarchical temporal abstraction for perception and action, combined with more precisely timed motor control, with an underlying system for reinforcement learning. The reinforcement learning system is essentially a type of temporal difference learning, though unfortunately at the moment there is evidence in favour of actor-critic, Q-learning and also SARSA type mechanisms—this picture should clear up in the next year or so. The system contains a long list of features that you might expect to see in a sophisticated reinforcement learner, such as pseudo-rewards for informative cues, inverse reward computations, uncertainty and environmental change modelling, dual model-based and model-free modes of operation, and things to monitor context; it even seems to have mechanisms that reward the development of conceptual knowledge. When I ask leading experts in the field whether we will understand reinforcement learning in the human brain within ten years, the answer I get back is “yes, in fact we already have a pretty good idea how it works and our knowledge is developing rapidly.”
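For readers unfamiliar with why Q-learning versus SARSA is even a distinction worth settling experimentally, the two differ by a single term in the temporal difference update. A toy tabular sketch (not a brain model; states, actions and values are invented for illustration):

```python
# Tabular TD updates: Q maps state -> {action: value}.

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Off-policy: bootstrap from the BEST action available in s_next."""
    best_next = max(Q[s_next].values())
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.9):
    """On-policy: bootstrap from the action ACTUALLY TAKEN in s_next."""
    Q[s][a] += alpha * (r + gamma * Q[s_next][a_next] - Q[s][a])

# Two states, two actions; state 1 already values action 'a' highly.
Q = {0: {'a': 0.0, 'b': 0.0}, 1: {'a': 1.0, 'b': 0.0}}
q_learning_update(Q, 0, 'a', r=0.0, s_next=1)          # credits via max over s_next
sarsa_update(Q, 0, 'b', r=0.0, s_next=1, a_next='b')   # credits via the taken action
print(Q[0])
```

Both are temporal difference methods; distinguishing them in neural data comes down to whether the brain’s prediction errors reflect the best available next action or the one actually chosen, which is why the evidence can currently point in several directions at once.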
The really tough nut to crack will be how the cortical system works…Thus I suspect that for the next 5 years, and probably longer, neuroscientists working on understanding cortex aren’t going to be of much use to AGI efforts. My guess is that sometime in the next 10 years developments in deep belief networks, temporal graphical models, liquid computation models, slow feature analysis etc. will produce sufficiently powerful hierarchical temporal generative models to essentially fill the role of cortex within an AGI.
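To give a flavour of one of the techniques just mentioned: slow feature analysis rests on a strikingly simple objective, preferring features of the input stream that vary slowly over time. A minimal sketch of that objective (the candidate signals here are invented for illustration, and a full SFA implementation would also solve for the optimal feature rather than just score candidates):

```python
import math

def slowness(signal):
    """SFA's objective: mean squared temporal difference (lower = slower)."""
    return sum((b - a) ** 2 for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

def normalize(signal):
    """Zero mean, unit variance, as SFA requires of candidate features."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    return [(x - mean) / math.sqrt(var) for x in signal]

# Two candidate features: a slowly varying signal and a rapidly varying one.
t = [i / 100 for i in range(500)]
slow = normalize([math.sin(2 * math.pi * 0.2 * x) for x in t])
fast = normalize([math.sin(2 * math.pi * 5.0 * x) for x in t])

print(slowness(slow) < slowness(fast))  # the slow feature wins
```

The appeal for cortex-like models is that slowly varying features of a sensory stream tend to correspond to the persistent objects and causes behind it, which is exactly the kind of abstraction a hierarchical generative model needs at each level.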
Right, so my prediction for the last 10 years has been for roughly human level AGI in the year 2025 (though I also predict that sceptics will deny that it’s happened when it does!).
[See his Halloween scenario: And what rough beast, its hour come round at last, / slouches towards Bethlehem to be born?]