“OpenAI Promised 20% of Its Computing Power to Combat the Most Dangerous Kind of AI—But Never Delivered, Sources Say”, 2024-05-21:
…It was a task so important that the company said in its announcement that it would commit “20% of the compute we’ve secured to date over the next 4 years” to the effort. But a half-dozen sources familiar with the Superalignment team’s work said that the group was never allocated this compute. Instead, it received far less in the company’s regular compute allocation budget, which is reassessed quarterly.
One source familiar with the Superalignment team’s work said that there were never any clear metrics around exactly how the 20% amount was to be calculated, leaving it subject to wide interpretation. For instance, the source said the team was never told whether the promise meant “20% each year for 4 years” or “5% a year for 4 years” or some variable amount that could wind up being “1% or 2% for the first 3 years, and then the bulk of the commitment in the 4th year.” In any case, all the sources Fortune spoke to for this story confirmed that the Superalignment team was never given anything close to 20% of the compute OpenAI had secured as of July 2023.
OpenAI researchers can also make requests for what is known as “flex” compute—access to additional GPU capacity beyond what has been budgeted—to deal with new projects between the quarterly budgeting meetings. But flex requests from the Superalignment team were routinely rejected by higher-ups, these sources said.
Bob McGrew, OpenAI’s vice president of research, was the executive who informed the team that these requests were being declined, the sources said, but others at the company, including CTO Mira Murati, were involved in making the decisions. Neither McGrew nor Murati responded to requests to comment for this story.
While the team did carry out some research—in December 2023 it released a paper detailing its experiments in successfully getting a less powerful AI model to control a more powerful one—the lack of compute stymied the team’s more ambitious ideas, the source said.
After resigning, Jan Leike on Friday [2024-05-17] published a series of posts on Twitter in which he criticized his former employer, saying “safety culture and processes have taken a backseat to shiny products.” He also said that “over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”
Five sources familiar with the Superalignment team’s work backed up Leike’s account, saying that the problems with accessing compute worsened in the wake of the pre-Thanksgiving showdown between Altman and the board of the OpenAI nonprofit foundation.
…One source disputed the way the other sources Fortune spoke to characterized the compute problems the Superalignment team faced, saying they predated Ilya Sutskever’s participation in the failed coup, plaguing the group from the get-go.
While there have been some reports that Sutskever was continuing to co-lead the Superalignment team remotely, sources familiar with the team’s work said this was not the case and that Sutskever had no access to the team’s work and played no role in directing the team after Thanksgiving.
With Sutskever gone, the Superalignment team lost the only person on the team who had enough political capital within the organization to successfully argue for its compute allocation, the sources said.
…The people who spoke to Fortune did so anonymously, either because they said they feared losing their jobs, or because they feared losing vested equity in the company, or both. Employees who have left OpenAI have been forced to sign separation agreements that include a strict non-disparagement clause that says the company can claw back their vested equity if they criticize the company publicly, or if they even acknowledge the clause’s existence. And employees have been told that anyone who refuses to sign the separation agreement will forfeit their equity as well.