“Does Sam Altman Know What He’s Creating? The OpenAI CEO’s Ambitious, Ingenious, Terrifying Quest to Create a New Form of Intelligence”, Ross Andersen, 2023-07-24:

…In small doses, Sam Altman’s large blue eyes emit a beam of earnest intellectual attention, and he seems to understand that, in large doses, their intensity might unsettle. In this case, he was willing to chance it: He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.

“We could have gone off and just built this in our building here for 5 more years”, he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.” Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.

…Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are’”, he said.

…Altman has entertained the most far-out scenarios. “When I was a younger adult”, he said, “I had this fear, anxiety … and, to be honest, 2% of excitement mixed in, too, that we were going to create this thing” that “was going to far surpass us”, and “it was going to go off, colonize the universe, and humans were going to be left to the solar system.” “As a nature reserve?” I asked. “Exactly”, he said. “And that now strikes me as so naive.”


…The first few years at OpenAI were a slog, in part because no one there knew whether they were training a baby or pursuing a spectacularly expensive dead end. “Nothing was working, and Google had everything: all the talent, all the people, all the money”, Altman told me. The founders had put up millions of dollars to start the company, and failure seemed like a real possibility. Greg Brockman, the 35-year-old president, told me that in 2017, he was so discouraged that he started lifting weights as a compensatory measure. He wasn’t sure that OpenAI was going to survive the year, he said, and he wanted “to have something to show for my time.”

…As for other changes to the company’s structure and financing, he told me he draws the line at going public [like Microsoft]. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street”, he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.

…When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels. Brockman told me that he wanted to spend every waking moment with the model. “Every day it’s sitting idle is a day lost for humanity”, he said, with no hint of sarcasm. Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me”, Jang told me.

…I saw Altman again in June, in the packed ballroom of a slim golden high-rise that towers over Seoul. He was nearing the end of a grueling public-relations tour through Europe, the Middle East, Asia, and Australia, with lone stops in Africa and South America. I was tagging along for part of his closing swing through East Asia. [This trip also involved, apparently, negotiating deals with G42 and work on his DL chip startup.] The trip had so far been a heady experience, but he was starting to wear down. He’d said its original purpose was for him to meet OpenAI users. It had since become a diplomatic mission. He’d talked with more than 10 heads of state and government, who had questions about what would become of their countries’ economies, cultures, and politics. The event in Seoul was billed as a “fireside chat”, but more than 5,000 people had registered. After these talks, Altman is often mobbed by selfie seekers, and his security team keeps a close eye.

…Altman did not visit China on his tour, apart from a video appearance at an AI conference in Beijing. ChatGPT is currently unavailable in China, and Altman’s colleague Ryan Lowe told me that the company was not yet sure what it would do if the government requested a version of the app that refused to discuss, say, the Tiananmen Square massacre. When I asked Altman if he was leaning one way or another, he didn’t answer. “It’s not been in my top-10 list of compliance issues to think about”, he said.

Until that point, he and I had spoken of China only in veiled terms, as a civilizational competitor. We had agreed that if artificial general intelligence is as transformative as Altman predicts, a serious geopolitical advantage will accrue to the countries that create it first, as advantage had accrued to the Anglo-American inventors of the steamship. I asked him if that was an argument for AI nationalism. “In a properly functioning world, I think this should be a project of governments”, Altman said.


…it is a good thing that a large, essential part of the global economy is intent on regulating state-of-the-art AIs, because as their creators so often remind us, the largest models have a record of popping out of training with unanticipated abilities. Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.

Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10× more powerful” than its predecessor; they had no idea what they might be dealing with. After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors. She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice. A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.

Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about”, Altman has said. A taxonomy would have to do. “If it’s good enough at chemistry to make methamphetamine, I don’t need to have somebody spend a whole ton of energy” on whether it can make heroin, Dave Willner, OpenAI’s head of trust and safety, told me. GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too. [cf. Labenz’s account]

Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror”, Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist-forum lore: “You could say, ‘How do I convince this person to date me?’” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”

…Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back”, he said. When I asked him what these future jobs might look like, he said he doesn’t know. He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists? I wondered.) His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors. He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill”, he said. “You have a computer that can do anything; what should it go do?”

…Over the next 4 years, OpenAI has pledged to devote a portion of its supercomputer time—20% of what it has secured to date—to Sutskever’s alignment work. The company is already looking for the first inklings of misalignment in its current AIs. The one that the company built and decided not to release—Altman would not discuss its precise function—is just one example.

…Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.” It was a chilling thought, but one that Geoffrey Hinton seconded. “We need to do empirical experiments on how these things try to escape control”, Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”

Putting aside any near-term testing, the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs. When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world”, Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seemed to communicate by “telepathy”, Sutskever said. Watching them had helped him imagine what a superintelligence might be like.

“The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing”, Sutskever told me.

Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation”, Sutskever said. Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”

…“First of all, I think that whether the chance of existential calamity is 0.5% or 50%, we should still take it seriously”, Altman said. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.” As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested 4 viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly. Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.

Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things”, he said, and these are only the ones we can imagine.

Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI. In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary. Other experts have proposed a non-networked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance. Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.

…Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run”, the OpenAI researcher Nick Ryder told me.

…As a leader of this effort, Altman has much to recommend him: He is extremely intelligent; he thinks more about the future, with all its unknowns, than many of his peers; and he seems sincere in his intention to invent something for the greater good. But when it comes to power this extreme, even the best of intentions can go badly awry. Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his, and if he is right about what’s coming, they will assume an outsize influence in shaping the way that all of us live. No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning…Altman has served notice. He says that he welcomes the constraints and guidance of the state.