“Sam Altman Confronted a Member over a Research Paper That Discussed the Company, While Directors Disagreed for Months about Who Should Fill Board Vacancies”, Cade Metz, Tripp Mickle, Mike Isaac, 2023-11-21:

Before Sam Altman was ousted from OpenAI last week, he and the company’s board of directors had been bickering for more than a year. The tension got worse as OpenAI became a mainstream name thanks to its popular ChatGPT chatbot.

…Vacancies exacerbated the board’s issues. This year, it disagreed over how to replace 3 departing directors: Reid Hoffman, the LinkedIn founder and a Microsoft board member; Shivon Zilis, director of operations at Neuralink, a company started by Elon Musk to implant computer chips in people’s brains; and Will Hurd, a former Republican congressman from Texas.

After vetting 4 candidates for one position, the remaining directors couldn’t agree on who should fill it, said two people familiar with the board’s deliberations. [WaPo: “the group’s vote was rooted in worries he was trying to avoid any checks on his power at the company—a trait evidenced by his unwillingness to entertain any board makeup that wasn’t heavily skewed in his favor.”] The stalemate hardened the divide between Altman and Greg Brockman and other board members. [The vacancies left the board exactly balanced between Altman/Brockman/Sutskever vs Toner/D’Angelo/McCauley.]

…At one point, Altman, the chief executive, made a move to push out one of the board’s members [Helen Toner] because he thought a research paper she had co-written was critical of the company.

Another member, Ilya Sutskever, thought Altman was not always being honest when talking with the board. [over GPT-4 red-teaming, Q, and Superalignment quotas?] And some board members worried that Altman was too focused on expansion while they wanted to balance that growth with AI safety.

…Among the tensions leading up to Altman’s ouster and quick return was his conflict with Helen Toner, a board member and a director of strategy at Georgetown University’s Center for Security and Emerging Technology (CSET). A few weeks before Altman’s firing, he met with Ms. Toner to discuss a paper she had co-written [by Andrew Imbrie, Owen J. Daniels, & Helen Toner] for the Georgetown center. Mr. Altman complained that the research paper seemed to criticize OpenAI’s efforts to keep its AI technologies safe while praising the approach taken by Anthropic, a company that has become OpenAI’s biggest rival, according to an email that Altman wrote to colleagues and that was viewed by The New York Times…In the email, Altman said that he had reprimanded Ms. Toner for the paper and that it was dangerous to the company, particularly at a time, he added, when the Federal Trade Commission was investigating OpenAI over the data used to build its technology.

Ms. Toner defended it as an academic paper that analyzed the challenges the public faces when trying to understand the intentions of the countries and companies developing AI. But Altman disagreed.

“I did not feel we’re on the same page on the damage of all this”, he wrote in the email. “Any amount of criticism from a board member carries a lot of weight.” [No one appears to have read the paper in question, and I cannot find any media reports which might have caused any ‘damage’ that Altman might be referring to, nor have any emerged since, contrary to Altman’s claim.] Senior OpenAI leaders, including Sutskever, who is deeply concerned that AI could one day destroy humanity, later discussed whether Ms. Toner should be removed, a person involved in the conversations said…[They do not seem to have proposed Toner step down in favor of one of the many candidates the safety directors had favored but Altman had vetoed. He also appears to have told board members opposed to firing her that other board members supported it, when they did not.] Hours after Altman was ousted, OpenAI executives confronted the remaining board members during a video call, according to 3 people who were on the call. During the call, Jason Kwon, OpenAI’s chief strategy officer, said the board was endangering the future of the company by pushing out Altman. This, he said, violated the members’ responsibilities. Ms. Toner disagreed. The board’s mission was to ensure that the company creates artificial intelligence that “benefits all of humanity”, and if the company was destroyed, she said, that could be consistent with its mission. [Destroying or merging are acceptable possibilities for OA according to the OA Charter that Sam Altman helped write: “We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome…if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project.”] In the board’s view, OpenAI would be stronger without Altman.

[The OA executives who discussed how to oversee the board overseeing them do not seem to have discussed whether Altman should be discussing the board with them, or whether other board members like Sutskever or Altman should have been removed from the board for the many controversies their Twitter or Reddit comments have caused, which were larger than the non-controversy over the Toner-co-authored CSET paper.]

But shortly after those discussions, Sutskever did the unexpected: He sided with board members to oust Altman, according to two people familiar with the board’s deliberations. [See WSJ] The statement he read to Altman said that Altman was being fired because he wasn’t “consistently candid in his communications with the board.”

Mr. Sutskever’s frustration with Altman echoed what had happened in 2021 when another senior AI scientist [Dario Amodei] left OpenAI to form Anthropic. That scientist and other researchers went to the board to try to push Altman out. After they failed, they gave up and departed, according to 3 people familiar with the attempt to push Altman out. “After a series of reasonably amicable negotiations, the co-founders of Anthropic were able to negotiate their exit on mutually agreeable terms”, an Anthropic spokeswoman, Sally Aldous, said. In a second statement, Anthropic added that there was “no attempt to ‘oust’ Sam Altman at the time the founders of Anthropic left OpenAI.”

[cf. Sam Altman’s ouster from Y Combinator]

On Sunday, at OpenAI’s office, Sutskever was urged to reverse course by Brockman’s wife, Anna Brockman, according to two people familiar with the exchange. Hours later, he signed a letter with other employees that demanded the independent directors resign. The confrontation between Sutskever and Mrs. Brockman was reported earlier by The Wall Street Journal.

At 5:15 a.m. on Monday, Sutskever posted on Twitter that “I deeply regret my participation in the board’s actions.”