“Inside the Chaos at OpenAI: Sam Altman’s Weekend of Shock and Drama Began a Year Ago, With the Release of ChatGPT”, 2023-11-19:
…This tenuous equilibrium broke one year ago almost to the day, according to current and former employees, thanks to the release of the very thing that brought OpenAI to global prominence: ChatGPT. From the outside, ChatGPT looked like one of the most successful product launches of all time. It grew faster than any other consumer app in history, and it seemed to single-handedly redefine how millions of people understood the threat—and promise—of automation. But it pulled OpenAI in polar-opposite directions, widening and worsening its already-present ideological rifts. ChatGPT supercharged the race to create products for profit even as it heaped unprecedented pressure on the company’s infrastructure and on the employees focused on assessing and mitigating the technology’s risks. This strained the already tense relationship between OpenAI’s factions—which Altman referred to, in a 2019 staff email, as “tribes.”
In conversations between The Atlantic and 10 current and former employees at OpenAI, a picture emerged of a transformation at the company that created an unsustainable division among leadership.
(We agreed not to name any of the employees—all told us they fear repercussions for speaking candidly to the press about OpenAI’s inner workings.)
Together, their accounts illustrate how the pressure on the for-profit arm to commercialize grew by the day, and clashed with the company’s stated mission, until everything came to a head with ChatGPT and other product launches that rapidly followed. “After ChatGPT, there was a clear path to revenue and profit”, one source told us. “You could no longer make a case for being an idealistic research lab. There were customers looking to be served here and now.”
…In the fall of 2022, before the launch of ChatGPT, all hands were on deck at OpenAI to prepare for the release of its most powerful large language model to date, GPT-4…In the midst of it all, rumors began to spread within OpenAI that its competitors at Anthropic were developing a chatbot of their own [Claude]. The rivalry was personal: Anthropic had formed after a faction of employees left OpenAI in 2020, reportedly because of concerns over how fast the company was releasing its products. In November, OpenAI leadership told employees that they would need to launch a chatbot in a matter of weeks, according to 3 people who were at the company. To accomplish this task, they instructed employees to publish an existing model, GPT-3.5, with a chat-based interface. Leadership was careful to frame the effort not as a product launch but as a “low-key research preview.” By putting GPT-3.5 into people’s hands, Altman and other executives said, OpenAI could gather more data on how people would use and interact with AI, which would help the company inform GPT-4’s development. The approach also aligned with the company’s broader deployment strategy of gradually releasing technologies into the world so that people could get used to them. Some executives, including Altman, started to parrot the same line: OpenAI needed to get the “data flywheel” going.
A few employees expressed discomfort about rushing out this new conversational model. The company was already stretched thin by preparation for GPT-4 and ill-equipped to handle a chatbot that could change the risk landscape. Just months before, OpenAI had brought online a new traffic-monitoring tool to track basic user behaviors. It was still in the middle of fleshing out the tool’s capabilities to understand how people were using the company’s products, which would then inform how it approached mitigating the technology’s possible dangers and abuses. Other employees felt that turning GPT-3.5 into a chatbot would likely pose minimal challenges, because the model itself had already been sufficiently tested and refined. The company pressed forward and launched ChatGPT on November 30. It was such a low-key event that many employees who weren’t directly involved, including those in safety functions, didn’t even realize it had happened. Some of those who were aware, according to one employee, had started a betting pool, wagering on how many people might use the tool during its first week. The highest guess was 100,000 users. OpenAI’s president tweeted that the tool hit 1 million within the first 5 days. The phrase “low-key research preview” became an instant meme within OpenAI; employees turned it into laptop stickers.
ChatGPT’s runaway success placed extraordinary strain on the company. Computing power from research teams was redirected to handle the flow of traffic. As traffic continued to surge, OpenAI’s servers crashed repeatedly; the traffic-monitoring tool also repeatedly failed. Even when the tool was online, employees struggled with its limited functionality to gain a detailed understanding of user behaviors.
Safety teams within the company pushed to slow things down. These teams worked to refine ChatGPT to refuse certain types of abusive requests and to respond to other queries with more appropriate answers. But they struggled to build features such as an automated function that would ban users who repeatedly abused ChatGPT. In contrast, the company’s product side wanted to build on the momentum and double down on commercialization. Hundreds more employees were hired to aggressively grow the company’s offerings. In February, OpenAI released a paid version of ChatGPT; in March, it quickly followed with an API tool, or application programming interface, that would help businesses integrate ChatGPT into their products. Two weeks later, it finally launched GPT-4.
…The slew of new products made things worse, according to 3 employees who were at the company at that time. Functionality on the traffic-monitoring tool continued to lag severely: it offered limited visibility into which of the products integrating ChatGPT and GPT-4 via the new API any given traffic was coming from, which made understanding and stopping abuse even more difficult. At the same time, fraud began surging on the API platform as users created accounts at scale, allowing them to cash in on the $20 credit for the pay-as-you-go service that came with each new account. Stopping the fraud became a top priority to stem the loss of revenue and prevent users from evading abuse enforcement by spinning up new accounts: Employees from an already small trust-and-safety staff were reassigned from other abuse areas to focus on this issue. Under the increasing strain, some employees struggled with mental-health issues. Communication was poor. Co-workers would find out that colleagues had been fired only after noticing them disappear on Slack.
The release of GPT-4 also frustrated the alignment team, which was focused on further-upstream AI-safety challenges, such as developing various techniques to get the model to follow user instructions and prevent it from spewing toxic speech or “hallucinating”—confidently presenting misinformation as fact. Many members of the team, including a growing contingent fearful of the existential risk of more-advanced AI models, felt uncomfortable with how quickly GPT-4 had been launched and integrated widely into other products. They believed that the AI safety work they had done was insufficient.
…These once again had major problems: OpenAI experienced a series of outages, including a massive one across ChatGPT and its APIs, according to company updates.
…In July, OpenAI announced the creation of a so-called Superalignment team with Sutskever co-leading the research. OpenAI would expand the alignment team’s research to develop more upstream AI-safety techniques with a dedicated 20 percent of the company’s existing computer chips, in preparation for the possibility of AGI arriving in this decade, the company said.
…He told employees that the company’s models were still early enough in development that OpenAI ought to commercialize and generate enough revenue to ensure that it could spend [later] without limits on alignment and safety concerns.
…Through it all, Altman pressed onward. In the days before his firing, he was drumming up hype about OpenAI’s continued advances. The company had begun to work on GPT-5, he told the Financial Times, before alluding days later to something incredible in store at the APEC summit. “Just in the last couple of weeks, I have gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward”, he said. “Getting to do that is a professional honor of a lifetime.”
…The tensions boiled over at the top. As Altman and OpenAI President Greg Brockman encouraged more commercialization, the company’s chief scientist, Ilya Sutskever, grew more concerned about whether OpenAI was upholding the governing nonprofit’s mission to create beneficial AGI…The more confident Sutskever grew about the power of OpenAI’s technology, the more he also allied himself with the existential-risk faction within the company.