“Halloween Nightmare Scenario, Early 2020’s”, David Wood, 2009-11-02:

On the afternoon of Halloween 2009, Shane Legg ran through a wide-ranging set of material in his presentation “Machine Super Intelligence” [slides, blog] to an audience of 50 people at the UK Humanity+ meeting held at Birkbeck College.

…The third assumption was the implication of the remaining 12 slides, in which Shane described (amongst other topics) work on something called “restricted Boltzmann machines”.
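(For readers who haven’t met them: a restricted Boltzmann machine is a two-layer stochastic network, visible and hidden units joined by symmetric weights, usually trained with contrastive divergence. Below is a minimal CD-1 sketch; the layer sizes, toy data, and training loop are my own illustrative assumptions, not anything from Shane’s slides.)

```python
# Minimal restricted Boltzmann machine trained with one-step
# contrastive divergence (CD-1). Toy illustration only.
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.01, size=(n_visible, n_hidden))  # weights
b = np.zeros(n_visible)                              # visible biases
c = np.zeros(n_hidden)                               # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

# Toy binary data containing two correlated "patterns".
data = np.array([[1, 1, 1, 0, 0, 0],
                 [1, 1, 0, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1],
                 [0, 0, 1, 1, 1, 0]], dtype=float)

lr = 0.1
for epoch in range(1000):
    for v0 in data:
        # Positive phase: hidden probabilities given the data.
        ph0 = sigmoid(v0 @ W + c)
        h0 = sample(ph0)
        # Negative phase: one Gibbs step gives a "reconstruction".
        pv1 = sigmoid(h0 @ W.T + b)
        v1 = sample(pv1)
        ph1 = sigmoid(v1 @ W + c)
        # CD-1 update: data correlations minus model correlations.
        W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        b += lr * (v0 - v1)
        c += lr * (ph0 - ph1)

print(sigmoid(data @ W + c).round(2))  # learned hidden features per pattern
```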

As stated in slide 38, on brain reinforcement learning (RL):

This area of research is currently progressing very quickly.

New genetically modified mice allow researchers to precisely turn on and off different parts of the brain’s RL system in order to identify the functional roles of the parts.
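(As a computational aside: the standard model of the brain’s reward-prediction-error signal, the quantity this kind of mouse research probes, is temporal-difference learning. Here is a minimal TD(0) sketch; the trial structure and parameters are my own toy assumptions, since the slides only name “the brain’s RL system”.)

```python
# Minimal temporal-difference (TD) learning sketch: the prediction
# error delta is the textbook computational analogue of the phasic
# dopamine signal. Toy illustration only.
import numpy as np

T = 5                  # steps per trial; reward arrives at the final step
alpha, gamma = 0.1, 1.0
V = np.zeros(T + 1)    # learned value of each time step (V[T] is terminal)

def run_trial():
    """One trial of TD(0); returns the prediction error at each step."""
    deltas = []
    for t in range(T):
        r = 1.0 if t == T - 1 else 0.0       # reward only at the last step
        delta = r + gamma * V[t + 1] - V[t]  # prediction error ("dopamine")
        V[t] += alpha * delta
        deltas.append(round(delta, 2))
    return deltas

first = run_trial()
for _ in range(500):
    last = run_trial()

print("trial 1 errors:  ", first)  # large error at reward time
print("trial 500 errors:", last)   # reward fully predicted, errors near 0
```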

I’ve asked a number of researchers in this area:

Typical answer:

Adding up these 3 assumptions, the first conclusion is:

The second conclusion is that, inevitably:

But of course, it is when almost-human-level AGI algorithms, developed on petaflop computers, are let loose on exaflop supercomputers, that machine super intelligence might suddenly come into being, with results that might be completely unpredictable.
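(For scale, a quick back-of-envelope note of mine, not from the slides: the peta-to-exa jump is three orders of magnitude in raw operations per second.)

$$ \frac{10^{18}\,\text{FLOP/s (exaflop)}}{10^{15}\,\text{FLOP/s (petaflop)}} = 10^{3} $$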

On the other hand, Shane observes that people who are working on the program of Friendly AI do not expect to have made substantial progress in the same timescale:

Recall that the goal of Friendly AI is to devise a framework for AI research that will ensure that any resulting AIs have a very high level of safety for humanity, no matter how super-intelligent they may become. In this school of thought, all AI research would eventually be constrained to adopt this framework, in order to avoid the risk of a catastrophic super-intelligence explosion. However, by the end of Shane’s slides, it appears likely that the Friendly AI framework won’t be in place by the time we need it.

And that’s the Halloween nightmare scenario.