“John Carmack’s ‘Different Path’ to Artificial General Intelligence”, John Carmack, 2023-02-02:

Exclusive Q&A: The iconic Dallas game developer, rocket engineer, and VR visionary has pivoted to an audacious new challenge: developing artificial general intelligence—a form of AI that goes beyond mimicking human intelligence to understanding things and solving problems. John Carmack sees a 60% chance of achieving initial success in AGI by 2030. Here’s how, and why, he’s working independently to make it happen.


…There are valuable things that happened earlier that people aren’t necessarily aware of. There’s some work from the 1970s, 1980s, and 1990s that I actually think might be interesting, because a lot of things happened back then that didn’t pan out, just because they didn’t have enough scale. They were trying to do this on one-megahertz computers, not clusters of GPUs.

And there is this kind of groupthink I mentioned that is really clear, if you look at it, about all these brilliant researchers—they all have similar backgrounds, and they’re all kind of swimming in the same direction. So, there’s a few of these old things back there that I think may be useful. So right now, I’m building experiments, I’m testing things, I’m trying to marry together some of these fields that are distinct, that have what I feel are pieces of the AGI algorithm.

But most of what I do is run simulations that learn through watching lots of television and playing various video games. And I think that combination of, ‘Here’s how you perceive and internalize a model of the world, and here’s how you act in it with agency in some of these situations’, I still don’t know how they come together. But I think there are keys there. I think I have my arms around the scope of the problems that need to be solved, and how to push things together.

I still think there’s a half dozen insights that need to happen, but I’ve got a couple of things that are plausible insights that might turn out to be relevant. And one of the things that I trained myself to do a few decades ago is pulling ideas out and pursuing them in a way where I’m excited about them, knowing that most of them don’t pan out in the end. Much earlier in my career, when I’d have a really bright idea that didn’t work out, I was crushed afterwards. But eventually I got to the point where I’m really good at just shoveling ideas through my processing and shooting them down, almost making it a game to say, ‘How quickly can I bust my own idea, rather than protecting it as a pet idea?’

So, I’ve got a few of these candidates right now that I’m in the process of exploring and attacking. But it’s going to be these abstract ideas and techniques and ways to apply things that are similar to the way deep learning is done right now.

So, I’m pushing off scaling it out, because there are a bunch of companies now saying, ‘We need to go raise $100 million, $200 million, because we need to have a warehouse full of GPUs.’ And that’s one path to value, and there’s a little bit of a push toward that. But I’m very much pushing toward saying, ‘No, I want to figure out these 6 important things before I go waste $100 million of someone’s money.’ I’m actually not spending much money right now. I raised $20 million, but I’m thinking that this is a decade-long task where I don’t want to burn through $20 million in the next two years, then raise another series to get another couple hundred million dollars, because I don’t actually think that’s the smart way to go about things.