“OpenAI Researchers Warned Board of AI Breakthrough ahead of CEO Ouster, Sources Say”, Anna Tong, Jeffrey Dastin, Krystal Hu, 2023-11-23:

Ahead of OpenAI CEO Sam Altman’s 4 days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

The previously unreported letter and AI algorithm were key developments before the board’s ouster of Altman, the poster child of generative AI, the two sources said.

…The sources cited the letter as one factor among a longer list of grievances by the board leading to Altman’s firing, among which were concerns over commercializing advances before understanding the consequences. Reuters was unable to review a copy of the letter. The staff who wrote the letter did not respond to requests for comment.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers [the existence of media reports about] a project called Q and a letter to the board before the weekend’s events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

…Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students [presumably GSM8K; cf. “Let’s Verify Step by Step”/“Improving mathematical reasoning with process supervision”], acing such tests made researchers very optimistic about Q’s future success, the source said. [Unclear why: was it bootstrapping from nothing, or were there scaling laws?]

…In their letter to the board, researchers flagged AI’s prowess and potential danger, the sources said without specifying the exact safety concerns noted in the letter. There has long been discussion among computer scientists about the danger posed by highly intelligent machines, for instance if they might decide that the destruction of humanity was in their interest. Researchers have also flagged work by an “AI scientist” team, the existence of which multiple sources confirmed. The group, formed by combining earlier “Code Gen” and “Math Gen” teams, was exploring how to optimize existing AI models to improve their reasoning and eventually perform scientific work, one of the people said.

…In addition to announcing a slew of new tools in a demonstration this month, Altman last week teased at a summit of world leaders in San Francisco that he believed major advances were in sight.

4× now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime.

Altman made the remark at the Asia-Pacific Economic Cooperation summit.

[Altman 2023-11-17: “I think people are viewing these systems, correctly, as tools. Is this a tool we’ve built or a creature we’ve built? It’s more of a tool than a creature.”]

A day later, the board fired Altman.