“In AI Race, Microsoft and Google Choose Speed Over Caution: Technology Companies Were Once Leery of What Some Artificial Intelligence Could Do. Now the Priority Is Winning Control of the Industry’s Next Big Thing”, Nico Grant and Karen Weise, 2023-04-07:

In March 2023, two Google employees, whose jobs are to review the company’s artificial intelligence products, tried to stop Google from launching an AI chatbot. They believed it generated inaccurate and dangerous statements. Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the AI technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society.

The companies released their chatbots anyway. Microsoft was first, with a splashy event in February 2023 to reveal an AI chatbot woven into its Bing search engine. Google followed about 6 weeks later with its own chatbot, Bard.

…That competition took on a frantic tone in November when OpenAI, a San Francisco start-up working with Microsoft, released ChatGPT, a chatbot that has captured the public imagination and now has an estimated 100 million monthly users. The surprising success of ChatGPT has led to a willingness at Microsoft and Google to take greater risks with their ethical guidelines set up over the years to ensure their technology does not cause societal problems, according to 15 current and former employees and internal documents from the companies.

The urgency to build with the new AI was crystallized in an internal email sent last month by Sam Schillace, a technology executive at Microsoft. He wrote in the email, which was viewed by The New York Times, that it was an “absolutely fatal error in this moment to worry about things that can be fixed later.” When the tech industry is suddenly shifting toward a new kind of technology, the first company to introduce a product “is the long-term winner just because they got started first”, he wrote. “Sometimes the difference is measured in weeks.”

…Google released Bard after years of internal dissent over whether generative AI’s benefits outweighed the risks. It announced Meena, a similar chatbot, in 2020. But that system was deemed too risky to release, 3 people with knowledge of the process said. Those concerns were reported earlier by The Wall Street Journal.

…Concerns over larger models persisted. In January 2022, Google refused to allow another researcher, El Mahdi El Mhamdi, to publish a critical paper. Dr. El Mhamdi, a part-time employee and university professor, used mathematical theorems to warn that the biggest AI models are more vulnerable to cybersecurity attacks and present unusual privacy risks because they’ve probably had access to private data stored in various locations around the internet. Though an executive presentation later warned of similar AI privacy violations, Google reviewers asked Dr. El Mhamdi for substantial changes. He refused and released the paper through École Polytechnique. He resigned from Google this year, citing in part “research censorship.” He said modern AI’s risks “highly exceeded” the benefits. “It’s premature deployment”, he added.

…After ChatGPT’s release, Kent Walker, Google’s top lawyer, met with research and safety executives on the company’s powerful Advanced Technology Review Council. He told them that Sundar Pichai, Google’s chief executive, was pushing hard to release Google’s AI. Jen Gennai, the director of Google’s Responsible Innovation group, attended that meeting. She recalled what Mr. Walker had said to her own staff. The meeting was “Kent talking at the A.T.R.C. execs, telling them, ‘This is the company priority’”, Ms. Gennai said in a recording that was reviewed by The Times. “‘What are your concerns? Let’s get in line.’” Mr. Walker told attendees to fast-track AI projects, though some executives said they would maintain safety standards, Ms. Gennai said.

Her team had already documented concerns with chatbots: They could produce false information, hurt users who become emotionally attached to them and enable “tech-facilitated violence” through mass harassment online. In March, two reviewers from Ms. Gennai’s team submitted their risk evaluation of Bard. They recommended blocking its imminent release, two people familiar with the process said. Despite safeguards, they believed the chatbot was not ready. Ms. Gennai changed that document. She took out the recommendation and downplayed the severity of Bard’s risks, the people said. Ms. Gennai said in an email to The Times that because Bard was an experiment, reviewers were not supposed to weigh in on whether to proceed. She said she “corrected inaccurate assumptions, and actually added more risks and harms that needed consideration.”

Google said it had released Bard as a limited experiment because of those debates, and Ms. Gennai said continuing training, guardrails and disclaimers made the chatbot safer. Google released Bard to some users on March 21. The company said it would soon integrate generative AI into its search engine.

…In the fall, Microsoft started breaking up what had been one of its largest technology ethics teams. The group, Ethics and Society, trained and consulted company product leaders to design and build responsibly. In October, most of its members were spun off to other groups, according to 4 people familiar with the team. The remaining few joined daily meetings with the Bing team, racing to launch the chatbot. John Montgomery, an AI executive, told them in a December email that their work remained vital and that more teams “will also need our help.”…Microsoft has released new products every week, a frantic pace to fulfill plans that Mr. Nadella set in motion in the summer when he previewed OpenAI’s newest model.