“Halloween Nightmare Scenario, Early 2020’s”, 2009-11-02:
On the afternoon of Halloween 2009, Shane Legg ran through a wide-ranging set of material in his presentation “Machine Super Intelligence” [slides, blog] to an audience of 50 people at the UK Humanity+ meeting in Birkbeck College.
…The third assumption was the implication of the remaining 12 slides, in which Shane described (amongst other topics) work on something called “restricted Boltzmann machines”.
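An RBM itself is simple enough to sketch. The toy implementation below trains a binary restricted Boltzmann machine with one step of contrastive divergence (CD-1); the layer sizes, learning rate, and data are illustrative assumptions on my part, not anything from the talk.

```python
import numpy as np

# Minimal restricted Boltzmann machine trained with CD-1.
# All hyperparameters and the toy data are arbitrary illustrative choices.

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, (n_visible, n_hidden))  # visible-hidden weights
b = np.zeros(n_visible)                        # visible biases
c = np.zeros(n_hidden)                         # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One contrastive-divergence (CD-1) step on a batch of binary vectors v0."""
    global W, b, c
    # Positive phase: hidden activations/samples given the data.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back down to the visibles and up again.
    pv1 = sigmoid(h0 @ W.T + b)
    ph1 = sigmoid(pv1 @ W + c)
    # Approximate gradient: <v h>_data - <v h>_model.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return float(np.mean((v0 - pv1) ** 2))  # reconstruction error

# Toy data: two repeating complementary binary patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 10, dtype=float)

errors = [cd1_update(data) for _ in range(200)]
print(f"reconstruction error: {errors[0]:.3f} -> {errors[-1]:.3f}")
```

Reconstruction error falls as the weights come to encode the two patterns; this is the unsupervised building block that, stacked into deep belief networks, drove much of the brain-inspired learning research of that period.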
As stated in slide 38, on brain reinforcement learning (RL):
This area of research is currently progressing very quickly.
New genetically modified mice allow researchers to precisely turn on and off different parts of the brain’s RL system in order to identify the functional roles of the parts.
I’ve asked a number of researchers in this area:
- “Will we have a good understanding of the RL system in the brain before 2020?”
Typical answer:
- “Oh, we should understand it well before then. Indeed, we have a decent outline of the system already.”
Putting these three assumptions together, the first conclusion is:
Many research groups will be working on brain-like AGI architectures
The second conclusion is that, inevitably:
Some of these groups will demonstrate some promising results, and will be granted access to the super-computers of the time—which will, by then, be exaflop.
But of course, it is when almost-human-level AGI algorithms developed on petaflop computers are let loose on exaflop supercomputers that machine super intelligence might suddenly come into being, with results that might be completely unpredictable.
On the other hand, Shane observes that people who are working on the program of Friendly AI do not expect to have made substantial progress in the same timescale:
By the early 2020’s, there will be no practical theory of Friendly AI.
Recall that the goal of Friendly AI is to devise a framework for AI research that ensures any resulting AIs remain highly safe for humanity no matter how super-intelligent they become. In this school of thought, all AI research would eventually be constrained to adopt this framework, in order to avoid the risk of a catastrophic super-intelligence explosion. However, by the end of Shane’s slides, it appears likely that the Friendly AI framework won’t be in place by the time we need it.
And that’s the Halloween nightmare scenario.