“Before OpenAI Ousted Altman, Employees Disagreed Over AI ‘Safety’”, 2023-11-17:
OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing artificial intelligence safely enough, according to people with knowledge of the situation…At least two employees asked Ilya Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover”, according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns.
…“You can call it this way”, Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do.
When Sutskever was asked whether “these backroom removals are a good way to govern the most important company in the world?” he answered: “I mean, fair, I agree that there is a not ideal element to it. 100%.”
While the 6-member board that fired Altman didn’t explain the reasons for the move, other than saying Altman “was not consistently candid in his communications with the board”, safety was a big theme in the company’s internal damage control following the firing. (Altman didn’t participate in the vote to oust him, the company told staff.)
…But in 2019, when Altman became CEO of OpenAI, he helped form a for-profit entity governed by the OpenAI nonprofit so that the company could raise money from outside investors like Microsoft in order to have enough servers to train the best AI. His solution for limiting the power of for-profit greed: a theoretical cap on the profits the company could generate for its principals and investors.
But concerns about AI safety divided the startup’s leaders. In late 2020, a group of employees split off from OpenAI to launch their own startup, Anthropic, because of differences about the company’s commercial strategy and the pace at which it released its technology. [I have always heard there was more to this than ‘differences’, and that it was a suspicious incident (when groups of executives quit en masse without explanation, it’s often a bad sign, eg. the early Theranos defections, or the similar group of executives who quit Sam Bankman-Fried’s Alameda Research trading firm years before it exploded, while the later (extremely-loyal & enthusiastic) FTX employees were completely deceived). The NYT reports an attempted ouster of Altman by Dario Amodei et al.] In order to ensure the startup’s AI development wouldn’t be influenced by financial incentives, Anthropic formed an independent, 5-person group that can hire and fire the [public-benefit corporation] company’s board.
…The company this summer established a team, co-led by Sutskever and “alignment” researcher Jan Leike, to work on technical solutions to prevent its AI systems from going rogue. In a blog post, OpenAI said it would dedicate a fifth of its computing resources [quite a lot of compute to promise to spend for no immediate commercial return, eh?] to solving threats from “superintelligence”, which Sutskever and Leike wrote “could lead to the disempowerment of humanity or even human extinction.”
…Sutskever likely has many followers within the company. Former employees describe him as a well-respected and hands-on leader who’s crucial for guiding the startup’s frontier tech.
The blog post announcing Altman’s firing on Friday concluded by stating that the board’s responsibility was to preserve the company’s charter, which says the board must avoid “enabling AI or AGI that harm humanity or unduly concentrate power” and must do “the research required to make AGI safe.” It also said the board’s “primary fiduciary duty is to humanity.”