“The ChatGPT King Isn’t Worried, but He Knows You Might Be: Sam Altman Sees the Pros and Cons of Totally Changing the World As We Know It. And If He Does Make Human Intelligence Useless, He Has a Plan to Fix It.”, 2023-03-31:
I first met Sam Altman in the summer of 2019, days after Microsoft agreed to invest $1 billion in his 3-year-old start-up, OpenAI. At his suggestion, we had dinner at a small, decidedly modern restaurant not far from his home in San Francisco.
Halfway through the meal, he held up his iPhone so I could see the contract he had spent the last several months negotiating with one of the world’s largest tech companies. It said Microsoft’s billion-dollar investment would help OpenAI build what was called artificial general intelligence, or AGI: a machine that could do anything the human brain could do.
Later, as Altman sipped a sweet wine in lieu of dessert, he compared his company to the Manhattan Project. As if he were chatting about tomorrow’s weather forecast, he said the U.S. effort to build an atomic bomb during the Second World War had been a “project on the scale of OpenAI—the level of ambition we aspire to.”…At one point during our dinner in 2019, he paraphrased Robert Oppenheimer, the leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible”, he said. (Altman pointed out that, as fate would have it, he and Oppenheimer share a birthday.)…He believed AGI would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm—spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
“I try to be upfront”, he said. “Am I doing something good? Or really bad?”
…That means he is often criticized from all directions. But those closest to him believe this is as it should be. “If you’re equally upsetting both extreme sides, then you’re doing something right”, said OpenAI’s president, Greg Brockman…Kelly Sims, a partner with the venture capital firm Thrive Capital who worked with Altman as a board adviser to OpenAI, said it was like he was constantly arguing with himself. “In a single conversation”, she said, “he is both sides of the debate club.”
…He believes that artificial intelligence will happen one way or another, that it will do wonderful things that even he can’t yet imagine and that we can find ways of tempering the harm it may cause.
It’s an attitude that mirrors Altman’s own trajectory. His life has been a fairly steady climb toward greater prosperity and wealth, driven by an effective set of personal skills—not to mention some luck. It makes sense that he believes that the good thing will happen rather than the bad.
…His grand idea is that OpenAI will capture much of the world’s wealth through the creation of AGI and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures—$100 billion, $1 trillion, $100 trillion.
If AGI does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
…His longtime mentor, Paul Graham, founder of Y Combinator, explained Altman’s motivation like this:
“Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”
…In the early 2000s, Altman, a 17-year-old student at John Burroughs, set out to change the school’s culture, individually persuading teachers to post “Safe Space” signs on their classroom doors as a statement in support of gay students like him. He came out during his senior year and said the St. Louis of his teenage years was not an easy place to be gay.
Kepchar, who taught the school’s Advanced Placement computer science course, saw Altman as one of her most talented computer science students—and one with a rare knack for pushing people in new directions.
“He had creativity and vision, combined with the ambition and force of personality to convince others to work with him on putting his ideas into action”, she said. Altman also told me that he had asked one particularly homophobic teacher to post a “Safe Space” sign just to troll the guy.
Graham, who worked alongside Altman for a decade, saw the same persuasiveness in the man from St. Louis.
“He has a natural ability to talk people into things”, Graham said. “If it isn’t inborn, it was at least fully developed before he was 20. I first met Sam when he was 19, and I remember thinking at the time: ‘So this is what Bill Gates must have been like.’”
He now says that during his short stay at Stanford, he learned more from the many nights he spent playing poker than he did from most of his other college activities. After his freshman year, he worked in the artificial intelligence and robotics lab overseen by Prof. Andrew Ng, who would go on to found the flagship AI lab at Google. But poker taught Altman how to read people and evaluate risk.
It showed him “how to notice patterns in people over time, how to make decisions with very imperfect information, how to decide when it was worth pain, in a sense, to get more information”, he told me while strolling across his ranch in Napa. “It’s a great game.”
…Greg Brockman, OpenAI’s president, said Altman’s talent lies in understanding what people want. “He really tries to find the thing that matters most to a person—and then figure out how to give it to them”, Brockman told me. “That is the algorithm he uses over and over.”
…Kevin Scott of Microsoft believes that Altman will ultimately be discussed in the same breath as Steve Jobs, Bill Gates and Mark Zuckerberg. “These are people who have left an indelible mark on the fabric of the tech industry and maybe the fabric of the world”, he said. “I think Sam is going to be one of those people.”
After selling Loopt for a modest return, he joined Y Combinator as a part-time partner. 3 years later, Graham stepped down as president of the firm and, to the surprise of many across Silicon Valley, tapped a 28-year-old Altman as his successor.
Altman is not a coder or an engineer or an AI researcher. He is the person who sets the agenda, puts the teams together and strikes the deals. As the president of Y Combinator, he expanded the firm with near abandon, starting a new investment fund and a new research lab and stretching the number of companies advised by the firm into the hundreds each year.
He also began working on several projects outside the investment firm, including OpenAI, which he founded as a nonprofit in 2015 alongside a group that included Elon Musk. By Altman’s own admission, YC grew increasingly concerned he was spreading himself too thin.
[He was actually fired from YC by Paul Graham et al.]
He resolved to refocus his attention on a project that would, as he put it, have a real impact on the world. He considered politics, but settled on artificial intelligence. He believed, according to his younger brother Max, that he was one of the few people who could meaningfully change the world through AI research, as opposed to the many people who could do so through politics.
In 2019, just as OpenAI’s research was taking off, Altman grabbed the reins, stepping down as president of Y Combinator to concentrate on a company with fewer than 100 employees that was unsure how it would pay its bills.
…After running into Satya Nadella, Microsoft’s chief executive, at an annual gathering of tech leaders in Sun Valley, Idaho [the Allen & Company Sun Valley Conference]—often called “summer camp for billionaires”—he personally negotiated a deal with Nadella and Microsoft’s chief technology officer, Kevin Scott. [see the email Kevin Scott sent afterwards]
A few years later, Altman texted his brothers again, saying he planned to raise an additional $10 billion—or, as he put it, “10 bills.” By this January, he had done this, too, signing another contract with Microsoft.
…Eliezer Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence.
He also helped spawn the vast online community of rationalists and effective altruists who are convinced that AI is an existential risk. This surprisingly influential group includes researchers inside many of the top AI labs, including OpenAI. They don’t see this as hypocrisy: Many of them believe that because they understand the dangers more clearly than anyone else, they are in the best position to build this technology.
Altman believes that effective altruists have played an important role in the rise of artificial intelligence, alerting the industry to the dangers. He also believes they exaggerate these dangers. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to share it gradually so everyone can better understand the risks and how to handle them. He told me that it would be a “very slow takeoff.”
When I asked Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
If he’s wrong, he thinks he can make it up to humanity.