“Anthropic CEO Dario Amodei on Being an Underdog, AI Safety, and Economic Inequality”, 2024-06-23:
[see also his podcast with the Norwegian sovereign wealth fund] Hanging on the wall of Anthropic’s offices in San Francisco in early May, a stone’s throw from the conference room where CEO Dario Amodei would shortly sit for an interview with TIME, was a framed meme. Its single panel showed a giant robot ransacking a burning city. Underneath, the image’s tongue-in-cheek title: “Deep learning is hitting a wall.”…
Q: Anthropic is the youngest “frontier” AI lab. It’s the smallest. Its competitors have more cash. And in many ways, it’s the most expressly committed to safety. Do you think of yourselves as underdogs?
Dario Amodei: Certainly all the facts you cite are true, but they’re becoming less true. We have these big compute deals. We have high single-digit billions in funding, whereas our competitors have low double-digit to, somewhat frighteningly, triple-digit billions. So I’m starting to feel a little bit awkward about calling ourselves the underdogs. But relative to the heavyweights in the field, I think it’s absolutely true.
Q: The AI “scaling laws” say that as you train systems with more computing power and data, they become predictably more capable, or powerful. What capabilities have you seen in systems that you’re not able to release yet?
A: We released our last model relatively recently, so it’s not like there’s a year of unreleased secrets. And to the extent that there are, I’m not going to go into them. But we do know enough to say that the progress is continuing. We don’t see any evidence that things are leveling off. The reality of the world we live in is that it could stop any time. Every time we train a new model, I look at it and I’m always wondering (I’m never sure whether in relief or concern) [if] at some point we’ll see: oh man, the model doesn’t get any better. I think if [the effects of scaling] did stop, in some ways that would be good for the world. It would restrain everyone at the same time. But it’s not something we get to choose, and of course the models bring many exciting benefits. Mostly, it’s a fact of nature. We don’t get to choose; we just get to find out which world we live in, and then deal with it as best we can.
Q: What did you set out to achieve with Anthropic’s culture?
A: We have a donation matching program. [Anthropic allows employees to donate up to 25% of their equity to any charity, and will match the donation: “Optional equity donation matching at a 1:1 ratio, up to 25% of your equity grant.”] That tends to attract employees for whom public benefit is appealing. Our stock grants look like a better deal if that’s something you value. It doesn’t prevent people from having financial incentives, but it helps to set the tone.
In terms of the safety side of things, there’s a little bit of a delta between the public perception and what we have in mind. I think of us less as an AI safety company and more as a company that’s focused on public benefit. We’re not a company that believes a certain set of things about the dangers that AI systems are going to have; that’s an empirical question. I want Anthropic to be a company where everyone is thinking about the public purpose, rather than a one-issue company focused on AI safety or the misalignment of AI systems. Internally, I think we’ve succeeded at that: we have people with a bunch of different perspectives, but what they share is a real commitment to the public purpose.
Q: …Is it responsible to use a strategy like that if the risks are so high?
A: I’d prefer to live in an ideal world. Unfortunately, in the world we actually live in, there’s a lot of economic pressure: not only competition between companies, but competition between nations. But if we can really demonstrate that the risks are real, which I hope to be able to do, then there may be moments when we can really get the world to stop and consider for at least a little bit, and do this right in a cooperative way. But I’m not naive: those moments are rare, and halting a very powerful economic train is something that can only be done for a short period of time, in extraordinary circumstances.
Q: Microsoft recently said it is training a large language model in-house that might be able to compete with OpenAI, in which it has invested. Amazon, which is one of your backers, is doing the same. So is Google. Are you concerned that the smaller AI labs, which are currently out in front, might hold only a temporary lead over these much better-resourced tech companies?
A: One thing that has been prominent in our culture is “do more with less.” We always try to maintain a situation where, with fewer computational resources, we can do the same or better than someone who has many more. So in the end, our competitive advantage is our creativity as researchers and engineers. I think increasingly in terms of creative product offerings, rather than pure compute. You need a lot of pure compute; we have a lot, and we’re going to get more. But as long as we can use it more efficiently, as long as we can do more with less, then in the end the resources are going to find their way to the innovative companies.
Q: Anthropic has raised around $7 billion, enough money to pay for training a next-generation model, which you’ve said will likely cost in the single-digit billions of dollars. To train the generation after that, you’re starting to have to look at raising more cash. Where do you think that will come from?
A: I think we’re pretty good for the one after the next one as well, although at some point in the history of Anthropic we’re going to need to raise more. It’s hard to say where that [will come from]. There are traditional investors. There are large compute providers. And there are miscellaneous other sources that we may not have thought of. [Like Scandinavian or Middle Eastern sovereign wealth funds…?]