“Machine Intelligence, Part 2”, 2015-03-02:
[part 1] …But we will face this threat at some point, and we have a lot of work to do before it gets here. So here is a suggestion.
The US government, and all other governments, should regulate the development of SMI. In an ideal world, regulation would slow down the bad guys and speed up the good guys—it seems like what happens with the first SMI to be developed will be very important.
Although my general belief is that technology is often over-regulated, I think some regulation is a good thing, and I’d hate to live in a world with no regulation at all. And I think it’s definitely a good thing when the survival of humanity is in question. (Incidentally, there is precedent for classification of privately-developed knowledge when it carries mass risk to human life. SILEX is perhaps the best-known example.)
To state the obvious, one of the biggest challenges is that the US government has broken trust with the tech community over the past couple of years. We’d need a new agency to do this.
I am sure that Internet commentators will say that everything I’m about to propose is not nearly specific enough, which is definitely true. I mean for this to be the beginning of a conversation, not the end of one.
The first serious dangers from SMI are likely to involve humans and SMI working together. Regulation should address both the case of malevolent humans intentionally misusing machine intelligence to, for example, wreak havoc on worldwide financial markets or air traffic control systems, and the “accident” case of SMI being developed and then acting unpredictably.
Specifically, regulation should:
Provide a framework to observe progress. This should happen in two ways.
The first is looking for places in the world where it seems like a group is either being aided by substantial machine intelligence or training such an intelligence in some way.
The second is observing companies working on SMI development. The companies shouldn’t have to disclose how they’re doing what they’re doing (though when governments get serious about SMI they are likely to out-resource any private company), but periodically showing regulators their current capabilities seems like a smart idea.
Given how disastrous a bug could be, require development safeguards to reduce the risk of the accident case. For example, beyond a certain checkpoint, we could require that development happen only on airgapped computers, require that self-improving software get human sign-off before moving forward on each iteration, require that certain parts of the software be subject to third-party code reviews, etc. I’m not very optimistic that any of this will work for anything except accidental errors—humans will always be the weak link in the strategy (see the AI-in-a-box thought experiments). But it at least feels worth trying.
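To make the per-iteration human-intervention requirement concrete, here is a minimal sketch of the control flow such a rule might mandate. Everything here is hypothetical—the function names, the shape of the `improve` and `approve` callbacks—it only illustrates the idea that no self-improvement step proceeds without explicit sign-off.

```python
# Sketch of a human-in-the-loop gate for an iteratively self-improving system.
# All names are hypothetical; this shows only the control flow a regulation
# might require: no iteration proceeds without explicit human approval.

def run_with_human_gate(improve, candidate, approve, max_iterations=100):
    """Apply `improve` repeatedly, but only while `approve` returns True.

    improve: function producing the next candidate from the current one.
    approve: callback standing in for a human reviewer's explicit sign-off.
    Returns the last approved candidate.
    """
    for _ in range(max_iterations):
        proposed = improve(candidate)
        if not approve(candidate, proposed):  # reviewer withholds approval: halt
            break
        candidate = proposed
    return candidate

# Toy usage: "improvement" is just incrementing a number, and the stand-in
# reviewer approves any proposal at or below a fixed threshold.
result = run_with_human_gate(
    improve=lambda x: x + 1,
    candidate=0,
    approve=lambda cur, new: new <= 3,
)
print(result)  # 3
```

The point of the structure is that the gate sits between iterations, not around the whole run—each step, not just the final artifact, requires approval.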
Being able to do this—if it is possible at all—will require a huge amount of technical research and development that we should start intensive work on now. This work is almost entirely separate from the work that’s happening today to get piecemeal machine intelligence to work.
To state an obvious but important point, the regulations should be written so that they provide protection while producing minimal drag on innovation (though there will be some unavoidable cost).
…Although it’s possible that a lone wolf in a garage will be the one to figure SMI out, it seems more likely that it will be a group of very smart people with a lot of resources. It also seems likely, at least given the current work I’m aware of, that it will involve US companies in some way (though, as I said above, I think every government in the world should enact similar regulations).
Some people worry that regulation will slow down progress in the US and ensure that SMI gets developed somewhere else first. I don’t think a little bit of regulation is likely to overcome the huge head start and density of talent that US companies currently have.
…Thanks to Dario Amodei (especially Dario), Paul Buchheit, Matt Bush, Patrick Collison, Holden Karnofsky, Luke Muehlhauser, and Geoff Ralston for reading drafts of this and the previous post.
If you want to try to guess when, the two things I’d think about are computational power and algorithmic development. For the former, assume there are about 100 billion neurons and 100 trillion synapses in a human brain, and the average neuron fires 5× per second, and then think about how long it will take on the current computing trajectory to get a machine with enough memory and flops to simulate that.
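The back-of-the-envelope arithmetic implied above can be written out directly. The neuron and synapse counts and firing rate are the rough figures from the text; treating one synaptic event as one operation, and one number per synapse for memory, are simplifying assumptions.

```python
# Rough estimate of the compute needed to simulate a human brain at the
# level of abstraction described in the text. Figures are the text's rough
# numbers; one-op-per-synaptic-event and 4 bytes per synapse are assumptions.

neurons = 100e9        # ~100 billion neurons
synapses = 100e12      # ~100 trillion synapses
firing_rate_hz = 5     # average firings per neuron per second

# If every synapse participates in each firing, required throughput is roughly:
ops_per_second = synapses * firing_rate_hz
print(f"{ops_per_second:.0e} synaptic ops/sec")  # 5e+14

# Memory, storing one 4-byte number per synapse (an assumption):
bytes_per_synapse = 4
memory_bytes = synapses * bytes_per_synapse
print(f"{memory_bytes / 1e12:.0f} TB")  # 400 TB
```

On this crude accounting, the target is on the order of 10^14–10^15 operations per second and hundreds of terabytes of memory; one can then extrapolate along the current computing trajectory to guess when such a machine becomes affordable.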
For the algorithms, neural networks and reinforcement learning have both performed better than I’ve expected for input and output respectively (e.g. captioning photos depicting complex scenes, and beating humans at video games the software has never seen before with just the ability to look at the screen and access to the controls). I am always surprised how unimpressed most people seem with these results. Unsupervised learning has been a weaker point, and it is probably a critical part of replicating human intelligence. But many researchers I’ve spoken to are optimistic about current work, and I have no reason to believe this is outside the scope of a Turing machine.