“2023 CEO of the Year: Sam Altman”, 2023-12-06:
Sam Altman was weary. He went to his Napa Valley ranch for a hike, then returned to San Francisco to spend a few hours with one of the board members who had just fired and reinstated him in the span of 5 frantic days [Adam D’Angelo]. He put his computer away for a few hours to cook vegetarian pasta, play loud music, and drink wine with his fiancé Oliver Mulherin. “This was a 10-out-of-10 crazy thing to live through”, Altman tells Time on Nov. 30. “So I’m still just reeling from that.”…So did OpenAI’s powerful investors; one even baselessly speculated that one of the directors who defenestrated Altman was a Chinese spy. In the end, Altman won back his job and the board was overhauled. “We really do feel just stronger and more unified and more focused than ever”, Altman says in the last of 3 interviews with Time, after his second official day back as CEO. “But I wish there had been some other way to get there.”
…Altman, 38, has been Silicon Valley royalty for a decade, a superstar founder with immaculate vibes…Interviews with more than 20 people in Altman’s circle—including current and former OpenAI employees, multiple senior executives, and others who have worked closely with him over the years—reveal a complicated portrait. Those who know him describe Altman as affable, brilliant, uncommonly driven, and gifted at rallying investors and researchers alike around his vision of creating artificial general intelligence (AGI) for the benefit of society as a whole.
But 4 people who have worked with Altman over the years also say he could be slippery—and at times, misleading and deceptive. Two people familiar with the board’s proceedings say that Altman is skilled at manipulating people, and that he had repeatedly received feedback that he was sometimes dishonest in order to make people feel he agreed with them when he did not. These people saw this pattern as part of a broader attempt to consolidate power. “In a lot of ways, Sam is a really nice guy; he’s not an evil genius. It would be easier to tell this story if he was a terrible person”, says one of them. “He cares about the mission, he cares about other people, he cares about humanity. But there’s also a clear pattern, if you look at his behavior, of really seeking power in an extreme way.”
An OpenAI spokesperson said the company could not comment on the events surrounding Altman’s firing. “We’re unable to disclose specific details until the board’s independent review is complete. We look forward to the findings of the review and continue to stand behind Sam”, the spokesperson said in a statement to Time. “Our primary focus remains on developing and releasing useful and safe AI, and supporting the new board as they work to make improvements to our governance structure.”
…The board had argued over how to replace the 3 departing members, according to 3 people familiar with the discussions. For some time—little by little, at different rates—the 3 independent directors and Ilya Sutskever were becoming concerned about Altman’s behavior.
Altman had a tendency to play different people off one another in order to get his desired outcome, say two people familiar with the board’s discussions. Both also say Altman tried to ensure information flowed through him. “He has a way of keeping the picture somewhat fragmented”, one says, making it hard to know where others stood. To some extent, this is par for the course in business, but this person says Altman crossed certain thresholds that made it increasingly difficult for the board to oversee the company and hold him accountable.
One example came in late October, when an academic paper Helen Toner wrote in her capacity at [CSET] Georgetown was published. Altman saw it as critical of OpenAI’s safety efforts and sought to push Toner off the board. Altman told one board member that another believed Toner ought to be removed immediately, which was not true, according to two people familiar with the discussions. [cf. WSJ]
This episode did not spur the board’s decision to fire Altman, those people say, but it was representative of the ways in which he tried to undermine good governance, and was one of several incidents that convinced the quartet that they could not carry out their duty of supervising OpenAI’s mission if they could not trust Altman. [What did spur it? Possibly a Slack discussion.]
Once the directors reached the decision, they felt it was necessary to act fast, worried Altman would detect that something was amiss and begin marshaling support or trying to undermine their credibility. “As soon as he had an inkling that this might be remotely on the table”, another of the people familiar with the board’s discussions says, “he would bring the full force of his skills and abilities to bear.”
…The board expected pressure from investors and media. But they misjudged the scale of the blowback from within the company, in part because they had reason to believe the executive team would respond differently, according to two people familiar with the board’s thinking. Those people say the move to oust Altman was informed by senior OpenAI leaders, who had approached the board with a variety of concerns about Altman’s behavior and its effect on the company’s culture. [cf NYT]
Legal and confidentiality reasons have made it difficult for the board to share specifics, the people with knowledge of the proceedings say. But the absence of examples of the “lack of candor” the board cited as the impetus for Altman’s firing contributed to rampant speculation—that the decision was driven by a personal vendetta, an ideological dispute, or perhaps sheer incompetence. The board fired Altman for “nitpicky, unfireable, not even close to fireable offenses”, says Ron Conway, the founder of SV Angel and a mentor who was one of the first people Altman called after being terminated. [And who, with Brian Chesky, talked Altman into trying to take over OA, see NYT.] “It is reckless and irresponsible for a board to fire a founder over emotional reasons.”
…Within hours, the company’s staff threatened to quit if the board did not resign and allow Altman to return. Under immense pressure, the board reached out to Altman the morning after his firing to discuss a potential path forward. Altman characterizes it as a request for him to come back. “I went through a range of emotions. I first was defiant”, he says. “But then, pretty quickly, there was a sense of duty and obligation, and wanting to preserve this thing I cared about so much.” The sources close to the board describe the outreach differently, casting it as an attempt to talk through ways to stabilize the company before it fell apart.
…After a tearful confrontation with Brockman’s wife [Anna Brockman], Sutskever flipped his position: “I deeply regret my participation in the board’s actions”, he posted in the early hours of November 20.
…“It’s unprecedented in history to see a company go potentially to zero if everybody walks”, says one of the people familiar with the board’s discussions. “It’s unsurprising that employees banded together in the face of that particular threat.”
…In the end, the remaining board members secured a few concessions in the agreement struck to return Altman as CEO. A new independent board would supervise an investigation into his conduct and the board’s decision to fire him. Altman and Brockman would not regain their seats, and D’Angelo would remain on the panel, rather than all independent members resigning. Still, it was a triumph for OpenAI’s leadership.
…10 days after the agreement was reached for their return, OpenAI’s leaders were resolute. “I think everyone feels like we have a second chance here to really achieve the mission. Everyone is aligned”, Brockman says. But the company is in for an overhaul. Sutskever’s future at the company is murky. The new board—former Twitter board chair Bret Taylor, former US Treasury Secretary Larry Summers, and D’Angelo—will expand back to 9 members and take a hard look at the company’s governance. “Clearly the current thing was not good”, Altman says.
OpenAI had tried a structure that would provide independent oversight, only to see it fall short. “One thing that has very clearly come out of this is we haven’t done a good job of solving for AI governance”, says Divya Siddarth, the co-founder of the Collective Intelligence Project, a nonprofit that works on that issue. “It has put into sharp relief that very few people are making extremely consequential decisions in a completely opaque way, which feels fine, until it blows up.”
Back in the CEO’s chair, Altman says his priorities are stabilizing the company and its relationships with external partners after the debacle; doubling down on certain research areas after the massive expansion of the past year [Q✱?]; and supporting the new board to come up with better governance. What that looks like remains vague. “If an oracle said, Here is the way to set up the structure that is best for humanity, that’d be great”, Altman says.
Whatever role he plays going forward will receive more scrutiny. “I think these events have turned him into a political actor in the mass public’s eye in a way that he wasn’t before”, says Daniel Colson, the executive director of the Artificial Intelligence Policy Institute (AIPI), who believes the episode has highlighted the danger of having risk-tolerant technologists making choices that affect all of us. “Unfortunately, that’s the dynamic that the market has set up for.”
…Two people familiar with the board’s deliberations emphasize the stakes of supervising a company that believes it is building the most important technology in history. Altman thinks AGI—a system that surpasses humans in most regards—could be reached sometime in the next 4–5 years.
…“People are really starting to play for keeps now”, says Daniel Colson, executive director of the Artificial Intelligence Policy Institute (AIPI) and the founder of an Altman-backed startup, “because there’s an expectation that the window to try to shift the trajectory of things is closing.”
…On a bright morning in early November, Altman looks nervous. We’re backstage at a cavernous event space in downtown San Francisco, where Altman will soon present to some 900 attendees at OpenAI’s first developer conference. Dressed in a gray sweater and brightly colored Adidas Lego sneakers, he thanks the speech coach helping him rehearse. “This is so not my thing”, he says. “I’m much more comfortable behind a computer screen.”…Loopt became part of the first cohort of 8 companies to join Y Combinator, the now vaunted startup accelerator. The company was sold in 2012 for $43 million ($59.18 million inflation-adjusted), netting Altman $5 million. Though the return was relatively modest, Altman learned something formative: “The way to get things done is to just be really f—cking persistent”, he told Vox’s Re/code.
…Soon after becoming the leader of YC, Altman visited the headquarters of the nuclear-fusion startup Helion in Redmond, Washington. CEO David Kirtley recalls Altman showing up with a stack of physics textbooks and quizzing him about the design choices behind Helion’s prototype reactor. What shone through, Kirtley recalls, was Altman’s obsession with scalability. Assuming you could solve the scientific problem, how could you build enough reactors fast enough to meet the energy needs of the U.S.? What about the world? Helion was among the first hard-tech companies to join YC. Altman also wrote a personal check for $9.5 million in 2014 ($12.69 million inflation-adjusted) and has since forked over an additional $375 million to Helion—his largest personal investment. “I think that’s the responsibility of capitalism”, Altman says. “You take big swings at things that are important to get done.”
Altman’s pursuit of fusion hints at the staggering scope of his ambition. He’s put $180 million into Retro Biosciences, a longevity startup hoping to add 10 healthy years to the human life-span. He conceived of and helped found Worldcoin, a biometric-identification system with a crypto-currency attached, which has raised hundreds of millions of dollars. Through OpenAI, Altman has spent $10 million ($13.36 million inflation-adjusted) seeding the longest-running study into universal basic income (UBI) anywhere in the US, which has distributed more than $40 million to 3,000 participants, and is set to deliver its first set of findings in 2024. Altman’s interest in UBI speaks to the economic dislocation that he expects AI to bring—though he says it’s not a “sufficient solution to the problem in any way.”
The entrepreneur was so alarmed at America’s direction under Donald Trump that in 2017 he explored running for governor of California. Today Altman downplays the endeavor as “a very lightweight consideration.” But Matt Krisiloff, a senior aide to Altman at the time, says they spent 6 months setting up focus groups across the state to help refine a political platform. “It wasn’t just a totally flippant idea”, Krisiloff says. Altman published a 10-point policy platform, which he dubbed the United Slate, with goals that included lowering housing costs, Medicare for All, tax reform, and ambitious clean-energy targets. He ultimately passed on a career switch. [cf. New Yorker] “It was so clear to me that I was much better suited to work on AI”, Altman says, “and that if we were able to succeed, it would be a much more interesting and impactful thing for me to do.”
…In the summer of 2015, Altman tracked down Ilya Sutskever, a star machine-learning researcher at Google Brain. The pair had dinner at the Counter, a burger bar near Google’s headquarters. As they parted ways, Altman got into his car and thought to himself, I have got to work with that guy. He and Elon Musk spent nights and weekends courting talent. Altman drove to Berkeley to go for a walk with graduate student John Schulman; went to dinner with Stripe’s chief technology officer Greg Brockman; took a meeting with AI research scientist Wojciech Zaremba; and held a group dinner with Musk and others at the Rosewood hotel in Menlo Park, California, where the idea of what a new lab might look like began to take shape. “The montage is like the beginning of a movie”, Altman says, “where you’re trying to establish this ragtag crew of slight misfits to do something crazy.”
…Altman was spending an increasing amount of time thinking about OpenAI’s financial troubles and hanging out at its office, where Brockman and Sutskever had been lobbying him to come on full time. [And presumably neglecting his Y Combinator duties, contributing to his firing.] “OpenAI had never had a CEO”, he says. “I was kind of doing it 30% of the time, but not very well.” He worried the lab was at an inflection point, and without proper leadership, “it could just disintegrate.” In March 2019, the same week the company’s restructure was announced, Altman left YC and formally came on as OpenAI CEO.
…It didn’t take long for Altman to raise $1 billion from Microsoft—a figure that has now ballooned to $13 billion. The restructuring of the company, and the tie-up with Microsoft, changed OpenAI’s complexion in substantial ways, 3 former employees say. Employees began receiving equity as a standard part of their compensation packages, which some holdovers from the nonprofit era thought created incentives for employees to maximize the company’s valuation. The amount of equity that staff were given was very generous by industry standards, according to a person familiar with the compensation program. Some employees fretted OpenAI was turning into something more closely resembling a traditional tech company. “We leave billion-dollar ideas on the table constantly”, says VP of people Diane Yoon.
…Altman recalls a breakthrough in 2019 that revealed the vast possibilities ahead. An experiment into “scaling laws” [presumably et al 2020] underpinning the relationship between the computing power devoted to training an AI and its resulting capabilities yielded a series of “perfect, smooth graphs”, he says—the kind of exponential curves that more closely resembled a fundamental law of the universe than experimental data. It was a cool June night, and in the twilight a collective realization dawned on the assembled group of researchers as they stood outside the OpenAI office: AGI was not just possible, but probably coming sooner than any of them previously thought. “We were all like, this is really going to happen, isn’t it?” Altman says. “It felt like one of these moments of science history. We know a new thing now, and we’re about to tell humanity about it.”
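[The “perfect, smooth graphs” are consistent with the power-law scaling relationships later published by OpenAI researchers; as an illustrative sketch (the functional form and the rough exponent come from those published fits, not from this article):

$$L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05$$

where $L$ is test loss, $C$ is training compute, and $C_c$ is a fitted constant. On log–log axes this is a straight line holding across many orders of magnitude of compute, which is why the curves could look more like “a fundamental law of the universe than experimental data”.]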
…On its own terms, iterative deployment worked. It handed OpenAI a decisive advantage in safety-trained models, and eventually woke up the world to the power of AI. It’s also true that it was extremely good for business. The approach bears a striking resemblance to a tried-and-tested YC strategy for startup success: building the so-called minimum viable product. Hack together a cool demo, attract a small group of users who love it, and improve based on their feedback. Put things out into the world. And eventually—if you’re lucky enough and do it right—that will attract large groups of users, light the fuse of a media hype cycle, and allow you to raise huge sums. This was part of the motivation, Brockman tells Time. “We knew that we needed to be able to raise additional capital”, he says. “Building a product is actually a pretty clear way to do it.”
Some worried that iterative deployment would accelerate a dangerous AI arms race, and that commercial concerns were clouding OpenAI’s safety priorities. Several people close to the company thought OpenAI was drifting away from its original mission. “We had multiple board conversations about it, and huge numbers of internal conversations”, Altman says. But the decision was made. In 2021, 7 staffers who disagreed quit to start a rival lab called Anthropic, led by Dario Amodei, OpenAI’s top safety researcher.
…Suddenly, OpenAI was the hottest startup in Silicon Valley. In 2022, OpenAI brought in $28 million in revenue; this year it raked in $100 million a month. The company embarked on a hiring spree, more than doubling in size. In March, it followed through on Altman’s plan to release GPT-4. The new model far surpassed ChatGPT’s capabilities—unlike its predecessor, it could describe the contents of an image, write mostly reliable code in all major programming languages, and ace standardized tests. Billions of dollars poured into competitors’ efforts to replicate OpenAI’s successes. “We definitely accelerated the race, for lack of a more nuanced phrase”, Altman says. [This is the same thing Toner said in the paper he was criticizing.]
The CEO was suddenly a global star. He seemed unusually equipped to navigate the different factions of the AI world. “I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that”, Altman told lawmakers at a US Senate hearing in May. That month, Altman embarked on a world tour, including stops in Israel, India, Japan, Nigeria, South Korea, and the UAE. Altman addressed a conference in Beijing via video link. So many government officials and policy-makers clamored for an audience that “we ended up doing twice as many meetings than were scheduled for any given day”, says head of global affairs Anna Makanju. AI soared up the policy agenda: there was a White House Executive Order, a global AI Safety Summit in the UK, and attempts to codify AI standards in the UN, the G-7, and the African Union.
By the time Altman took the stage at OpenAI’s developer conference in November, it seemed as if nothing could bring him down.