“Ezra Klein Interviews Sam Altman”, Ezra Klein & Sam Altman, 2021-06-11:

Sam Altman: …A couple of years ago, if you talked about general-purpose AI at all, people said that’s ridiculous, it’s not happening. If you talked about systems that could really do meta-learning and quickly learn new concepts they weren’t trained for, people said that’s not going to happen. And we’ve gone from a world where many of the experts in the field said that was sci-fi and irresponsible to talk about, to a clear existence proof that we have it.

And it certainly doesn’t seem to be slowing down. Moore’s law, in varying definitions—but let’s say that was like a doubling of transistors every two years—maybe AI is growing at a rate of 10× per year in terms of these model sizes and the associated capabilities. So I do think we’re on a very steep curve. We will hit limits, but we don’t know where those will be. We’ll also discover new things that are really powerful. We don’t know what those will be either.
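[For scale, a minimal sketch comparing the two growth rates juxtaposed here: Moore’s law’s doubling every 2 years versus roughly 10× per year for model scale. The time horizons below are illustrative choices, not from the interview:]

```python
# Growth-factor comparison: Moore's law (2x every 2 years, ~1.41x/year)
# versus the ~10x/year scaling in model size that Altman describes.
for years in (2, 4, 10):
    moore = 2 ** (years / 2)  # doubling every two years
    ai = 10 ** years          # tenfold every year
    print(f"{years:>2} years: Moore's law ~{moore:,.0f}x vs. 10x/year ~{ai:,.0f}x")
```

[After a decade that is ~32× versus ~10,000,000,000×, which is the sense in which this is a “very steep curve”.]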

We’re deep in the scientific discovery phase, which is awesome. It’s so exciting and fun. But I think what we can say is that we are on an exponential curve. And when you’re on an exponential curve, you should generally, in my opinion, take the assumption that it’s going to keep going. And humans are very bad at intuition for this. Trying to think about exponential curves for stock prices, for technology, for population growth, whatever—

Ezra Klein: For viruses.

Altman: Viruses. Very difficult. I had this moment, and I’m sure you had one, too, when it looked like COVID was really going to take off and most of the world wasn’t paying attention. I was walking the streets of San Francisco one night at like 10:00 PM, walking home, and everybody was just frolicking and doing their thing. And it was just like, this is the last moment of normalcy, and no one’s paying attention, because most people don’t understand exponential curves. It was such a strange feeling, and I’ve often thought about the parallels of that moment, which hit so viscerally, with AI.


Klein: OpenAI begins as a nonprofit.

Altman: Yeah.

Klein: It becomes a for-profit, in part, because it needs to raise money and resources. So it got a $1 billion investment from Microsoft that’s partially money, partially compute power.

Altman: Actually, it was all in cash, but we spent most of it on compute—

Klein: Oh, there you go.

Altman: Close enough.

Klein: But partially on Microsoft computing power, correct?

Altman: Yeah.

Klein: Yeah. One of the worries I have about this is that even if people want to be very cautious about what the incentives of it are, just in order to do it, you have to submit to those incentives. Just in order to raise the money, there has to be a business model, a backer. And I was reading that, and I wondered this from a different direction, too: was that a missed opportunity for the public sector? Should it be that the public sector is spending the money to build this, either by funding groups like yours or a consortium of academic groups or something?

Altman: A little-known fact: we tried to get the public sector to fund us before we went to the capped-profit model. There was no interest.

[SV rumor (e.g. Leopold Aschenbrenner’s podcast, and something I have heard from an ex-OA exec) has it that OA leadership went as far as brainstorming an explicit auction of OA’s AGI between the US, Russia, and China to try to get funding.]

But yeah, I think if the country were working a different way—I would say a better way—this would be a public-sector project. But it’s not, and here we are. And I think it’s important that there is an effort like ours doing this, one that, even if it’s not an official American-flag effort, will represent some of the values that we all hold dear. That’s better than a lot of the other ways I could imagine this project going, with someone else doing it.


Altman: And one of the incentives that we were very nervous about was the incentive for unlimited profit, where more is always better. And I think you can see ways that’s gone wrong with profit, or attention, or usage, or whatever, where if you have these well-meaning people in a room, but they’re trying to make a metric go up and to the right, some weird stuff can happen. And I think with these very powerful general-purpose AI systems in particular, you do not want an incentive to maximize profit indefinitely.

So by putting this voluntary cap on ourselves, above which none of the employees or investors get any more money—which, if you do have a powerful AI, I think will be somewhat trivial to hit—I think we avoid the worst of the incentives, or at least the one that we were most worried about.

Klein: How about speed? So there’s an incentive to get there first. There are going to be huge financial returns, and other returns, to being the first one. You’re to some degree in a race with other companies to do it. I’m not saying that leads to cutting corners, but it leads to situations where maybe you’d ideally want to wait for a governance structure to emerge, to wait for a public conversation to happen. Well, if you don’t do it, the other folks will. And one of the constant ways things get justified, in both government and business, is “better us than them.”

Altman: For sure. So I think we were able to design a system that addressed a lot of the incentives that I was particularly concerned about. The one that remains that I am most concerned about—for the entire field, not just us—is actually closer to the super-powerful systems, like the ones that people talk about creating an existential risk to humanity, where there’s a race condition. And that, I think, will be on us and the other players in the field: to put together a sufficient coalition to stop ourselves from racing when safety is in the balance.

And we’re trying to figure out how to do that. That’s part of the governance question. Before you push go on this extremely powerful system, you would like as much time as you can get—and it won’t be totally in your control, because some other government could be doing whatever. But you’d like as much time as you can have to be really thoughtful about whether we understand what the system is going to do.

Klein: And how do you get that time?

Altman: You have people partner and say, OK, lots of other industries have done this. I think the recombinant DNA conversations in the 1970s are a good example. But you say: we’re the experts on this, we’re the set of companies with the resources to do this. How can we work together to make sure we all have the time we need?

Klein: Is there a different kind of pressure that comes from the geopolitical push? It was interesting to me that you used the metaphor of an American-flag operation. So there’s a competition between you and other American companies. And then there is the sense that there’s also a competition from China, potentially, and certainly down the road from other countries. That can create a different kind of pressure. It could even be a pressure coming from the public sector. But a pressure to finish first.

Altman: This is something that some parts of the field have spoken about for a long time, which is: sure, the private-sector companies can do whatever they want, but what if there’s huge public-sector pressure? And that I don’t think we’d be in a position to have too much of an impact on.

Klein: You mentioned before the cap on profits, which I believe for you all is 100×.

Altman: It started as that. It’s come down every time we’ve raised more money, so it’s in the single digits now.


Altman: Can I recommend short stories instead—both because I think they’re more likely to get read and because I think they’re more relevant to this conversation? I don’t think there are any great books about AI, but there are good short stories.

“Crystal Nights” by Greg Egan, “The Last Question” by Isaac Asimov, and “The Gentle Seduction” by Marc Stiegler. They’re all about the development of a super-powerful AI, in very different ways.

Actually, if I can recommend a bonus fourth one. This is a blog post, not a short story, but it really touches on a lot of these societal governance and power issues we’re talking about relative to AI: “Meditations on Moloch”, on Slate Star Codex. I strongly recommend that one.