“Demis Hassabis: DeepMind—AI, Superintelligence & the Future of Humanity § Turing Test”, 2022-07-01:
Demis Hassabis: …Yeah, I think certainly we as humans use language as our main general communication tool, so I think we end up thinking in language and expressing our solutions in language. So it’s going to be a very powerful mode for the system to explain what it’s doing.
But I don’t think it’s the only modality that matters; there are a lot of different ways to express capabilities other than just language.
Lex Fridman: Yeah: vision, robotics, body language, action, the interactive aspect of all that. That’s all part of it.
Demis Hassabis: But what’s interesting with Gato is that it’s pushing prediction to the maximum, in terms of mapping arbitrary sequences to other sequences and just predicting what’s going to happen next. So prediction seems to be fundamental to intelligence.
Lex Fridman: And what you’re predicting doesn’t so much matter.
Demis Hassabis: Yeah, it seems like you can generalize that quite well. Obviously language models predict the next word; Gato predicts potentially any action or any token. It’s just the beginning, really; it’s our most general agent so far, one could call it. But that itself can be scaled up massively, more than we’ve done so far, and obviously we’re in the middle of doing that.
Lex Fridman: But a big part of solving AGI is creating benchmarks that help us get closer and closer, benchmarks that test generalizability. And it’s still interesting that this fella Alan Turing was one of the first, and probably still one of the only, people who tried, maybe philosophically, to formulate a benchmark that could be followed.
Even though it’s fuzzy, it’s still sufficiently rigorous that you can run the test, and I still think something like the Turing test will, at the end of the day, be the thing that truly impresses other humans. For you to have a close friend who’s an AI system, and for that friend to be a good friend, they’re going to have to be able to play StarCraft, and they’re going to have to do all of these tasks: get you a beer (the robotics tasks), play games with you, use language and humor and all those kinds of things. But that can ultimately boil down to language.
It feels like, not in terms of the AI community, but in terms of the actual impact of general intelligence on the world, it feels like language will be the place where it truly shines.
Demis Hassabis: I think so, because it’s such an important kind of input-output for us. I think you’re right. The philosophy behind the Turing test is the idea of whether a machine can mimic the behaviors of a human, and I would take that wider than just language or any text: actions, creativity, everything else. If it can match or exceed human cognitive capabilities, then I think we have a true intelligence. So from that perspective, you’re right, I think he did formulate the right kind of setup.
Lex Fridman: I just think there’ll be a kind of humor in the AI systems of the future looking back on this conversation and thinking about the Turing test, because by that time they would know in which year the threshold of human-level intelligence was finally crossed, and they’d think how funny it is that we humans were still confused about this whole problem that had long ago been solved.
Well—hello to the AI agents of the future!