“How a Fervent Belief Split Silicon Valley—And Fueled the Blowup at OpenAI: Sam Altman’s Firing Showed the Influence of Effective Altruism and Its View That AI Development Must Slow Down; His Return Marked Its Limits”, Robert McMillan and Deepa Seetharaman, 2023-11-22:

…Altman, who was fired by the board Friday, clashed with the company’s chief scientist and board member Ilya Sutskever over AI-safety issues that mirrored effective-altruism concerns, according to people familiar with the dispute. Voting with Sutskever, who led the coup, were board members Tasha McCauley, a tech executive and board member for the effective-altruism charity Effective Ventures, and Helen Toner, an executive with Georgetown University’s Center for Security and Emerging Technology, which is backed by a philanthropy dedicated to effective-altruism causes. They made up three of the four votes needed to oust Altman, people familiar with the matter said. The board said he failed to be “consistently candid.”…Altman toured the world this spring warning that AI could cause serious harm. He also called effective altruism an “incredibly flawed movement” that showed “very weird emergent behavior.”

…This account of the movement is based on interviews with more than 50 executives, researchers, investors, and current and former effective altruists, as well as public talks, academic papers and other published material from the effective-altruism community.

…At OpenAI’s holiday party last December, Sutskever addressed hundreds of employees and their guests at the California Academy of Sciences in San Francisco, not far from the museum’s dioramas of stuffed zebras, antelopes and lions. “Our goal is to make a mankind-loving AGI”, said Sutskever, the company’s chief scientist. “Feel the AGI”, he said. “Repeat after me. Feel the AGI.”

…OpenAI recently said it would dedicate a fifth of its computing resources over the next four years to what the company called “Superalignment”, an effort led by Sutskever. The team has been building, among other things, an AI-derived “scientist” that can conduct research on AI systems, people familiar with the matter said.

Frustrated employees said the attention to AGI and alignment has left fewer resources to solve more immediate issues such as developer abuse, fraud and nefarious AI uses that could affect the 2024 election. They say the resource disparity reflects the influence of effective altruism. While OpenAI is building automated tools to catch abuses, it hasn’t hired many investigators for that work, according to people familiar with the company. It also has few employees monitoring its developer platform, which is used by more than two million researchers, companies and other developers, these people said. The company recently hired someone to consider the role of OpenAI technology in the 2024 election. Experts warn of the potential for AI-generated images to mislead voters.

…At Google, the merging this year of its two artificial intelligence units—DeepMind and Google Brain—triggered a split over how effective-altruism principles are applied, according to current and former employees. DeepMind co-founder Demis Hassabis, who has long hired people aligned with the movement, is in charge of the combined units. Google Brain employees say they have largely ignored effective altruism and instead explore practical uses of artificial intelligence and the potential misuse of AI tools, according to people familiar with the matter. One former employee compared the merger with DeepMind to a forced marriage, “making many people squirm at Brain.”