“DeepMind: The Podcast—Excerpts on AGI”, 2022-04-07:
DeepMind: The Podcast—Season 2 was released over the last ~1–2 months. The 2 episodes most relevant to AGI are:
- The road to AGI—DeepMind: The Podcast (S2, Ep5) and
- The promise of AI with Demis Hassabis—DeepMind: The Podcast (S2, Ep9)
…The road to AGI (S2, Ep5)
(Published February 15, 2022)
Shane Legg’s AI Timeline; Shane Legg (4:03):
If you go back 10–12 years ago the whole notion of AGI was lunatic fringe. People [in the field] would literally just roll their eyes and just walk away. […] [I had that happen] multiple times. I have met quite a few of them since. There have even been cases where some of these people have applied for jobs at DeepMind years later. But yeah, it was a field where you know there were little bits of progress happening here and there, but powerful AGI and rapid progress seemed like it was very, very far away. […] Every year it [the number of people who roll their eyes at the notion of AGI] becomes less.
Hannah Fry (5:02):
For over 20 years, Shane has been quietly making predictions of when he expects to see AGI.
Shane Legg (5:09):
I always felt that somewhere around 2030-ish it was about a 50–50 chance. I still feel that seems reasonable. If you look at the amazing progress in the last 10 years and you imagine in the next 10 years we have something comparable, maybe there’s some chance that we will have an AGI in a decade. And if not in a decade, well I don’t know, say 3 decades or so.
…Demis Hassabis (7:11):
So I think that the progress so far has been pretty phenomenal. I think that it’s [AGI] coming relatively soon in the next you know—I wouldn’t be super surprised—in the next decade or 2.
[on convergent instrumental drives creating emergence]
Hannah Fry (21:59):
I put this question about the difficulty of designing an all-powerful reward to David Silver.
David Silver (22:05):
I actually think this is just slightly off the mark—this question—in the sense that maybe we can put almost any reward into the system and if the environment’s complex enough amazing things will happen just in maximizing that reward. Maybe we don’t have to solve this “What’s the right thing for intelligence to really emerge at the end of it?” kind of question and instead embrace the fact that there are many forms of intelligence, each of which is optimizing for its own target. And it’s okay if we have AIs in the future some of which are trying to control satellites and some of which are trying to sail boats and some of which are trying to win games of chess and they may all come up with their own abilities in order to allow that intelligence to achieve its end as effectively as possible.
[…] (26:14)
But of course this is a hypothesis. I cannot offer any guarantee that reinforcement learning algorithms do exist which are powerful enough to just get all the way there. And yet the fact that if we can do it, it would provide a path all the way to AGI should be enough for us to try really really hard.