“OpenAI’s Sam Altman Talks ChatGPT And How Artificial General Intelligence Can ‘Break Capitalism’”, 2023-02-03:
Q: It feels to me like we are at an inflection point with the popularity of ChatGPT, the push to monetize it and all this excitement around the partnership with Microsoft. From your standpoint, where does OpenAI feel like it is in its journey? And how would you describe the inflection point?
Sam Altman: It’s definitely an exciting time. But my hope is that it’s still extremely early. Really this is going to be a continual exponential path of improvement of the technology and the positive impact it has on society. We could have said the same thing at the GPT-3 launch or at the DALL·E launch. We’re saying it now [with ChatGPT]. I think we could say it again later. Now, we may be wrong; we may well hit a stumbling block we don’t yet expect. But I think there’s a real chance that we actually have figured out something important here and this paradigm will take us very, very far.
Q: Were you surprised by the response to ChatGPT?
Altman: I wanted to do it because I thought it was going to work. So, I’m surprised somewhat by the magnitude. But I was hoping and expecting people were going to really love it.
Q: [OpenAI President] Greg Brockman told me that the team wasn’t even sure it was worth launching. So not everyone felt that way.
Altman: There’s a long history of the team not being as excited about trying to ship things. And we just say, “Let’s just try it. Let’s just try it and see what happens.” This one, I pushed hard for this one. I really thought it was gonna work.
Q: You’ve said in the past you think people might be surprised about how ChatGPT really came together or is run. What would you say is misunderstood?
Altman: So, one of the things is that the base model [text-davinci-003?] for ChatGPT had been in the API for a long time, you know, like 10 months, or whatever. [Editor’s note: ChatGPT is an updated version of the GPT-3 model, first released as an API in 2020.] And I think one of the surprising things is, if you do a little bit of fine-tuning to get [the model] to be helpful in a particular way, and figure out the right interaction paradigm, then you can get this. It’s not actually fundamentally new technology that made this have a moment. It was these other things. And I think that is not well understood. Like, a lot of people still just don’t believe us, and they assume this must be GPT-4.
Q: …Do you feel that we are close to the goal of something like an AGI? And how would we know when that version of GPT, or whatever it is, is getting there?
Altman: I don’t think we’re super close to an AGI. But the question of how we would know is something I’ve been reflecting on a great deal recently. The one update I’ve had over the last 5 years, or however long I’ve been doing this (longer than that), is that it’s not going to be such a crystal clear moment. It’s going to be a much more gradual transition. It’ll be what people call a “slow takeoff”. And no one is going to agree on what the moment was when we had the AGI.
Q: …What would you say to people who might be concerned that you’re hitching your wagon to [CEO] Satya [Nadella] and Microsoft?
Altman: I would say we have carefully constructed any deals we’ve done with them to make sure we can still fulfill our mission. And also, Satya and Microsoft are awesome. I think they are, by far, the tech company that is most aligned with our values. And every time we’ve gone to them and said, “Hey, we need to do this weird thing that you’re probably going to hate, because it’s very different than what a standard deal would do, like capping your return or having these safety override provisions”, they have said, “That’s awesome.”
Q: …What has been the coolest thing you’ve seen someone do with GPT so far? And what’s the thing that scares you most?
Altman: It’s really hard to pick one coolest thing. It has been remarkable to see the diversity of things people have done. I could tell you the things that I have found the most personal utility in. Summarization has been absolutely huge for me, much more than I thought it would be. The fact that I can just have full articles or long email threads summarized has been way more useful than I would have thought. Also, the ability to ask esoteric programming questions or get help debugging code in a way that feels like I’ve got a super brilliant programmer I can talk to.
As far as a scary thing? I definitely have been watching with great concern the revenge porn generation that’s been happening with the open source image generators [e.g., Stable Diffusion]. I think that’s causing huge and predictable harm.