“OpenAI Expands Lobbying Team to Influence Regulation: ChatGPT Maker Beefs up Global Affairs Unit As Politicians Push for New Laws That Could Constrain Powerful AI Models”, Cristina Criddle, Javier Espinoza (2024-06-13):

OpenAI is building an international team of lobbyists as it seeks to influence politicians and regulators who are increasing their scrutiny over powerful artificial intelligence. The San Francisco-based start-up told the Financial Times it has expanded the number of staff on its global affairs team from three at the start of 2023 to 35. The company aims to build that up to 50 by the end of 2024.

…While forming a small part of OpenAI’s 1,200 employees, the global affairs department is the company’s most international unit, strategically positioned in locations where AI legislation is advanced. This includes stationing staff in Belgium, the UK, Ireland, France, Singapore, India, Brazil and the US.

However, OpenAI remains behind its Big Tech rivals in this outreach. According to public filings in the US, Facebook spent a record $7.6m engaging with the US government in the first quarter of this year, while Google spent $3.1m and OpenAI $340,000. Regarding AI-specific advocacy, Facebook has named 15 lobbyists and Google has five, while OpenAI has only two.

…OpenAI’s global affairs unit does not deal with some of the most fraught regulatory cases, however. That task goes to its legal team, which handles issues related to UK and US regulators’ review of its $13bn alliance with Microsoft; the US Securities and Exchange Commission investigation into whether chief executive Sam Altman misled investors during his brief ousting by the board in November; and the US Federal Trade Commission’s consumer protection probe into the company.

Instead, OpenAI’s lobbyists focus on the spread of AI legislation. The UK, the US and Singapore are among many countries dealing with how to govern AI and consulting closely with OpenAI and other tech companies on proposed regulations. The company was involved in the discussions around the EU’s AI Act, approved this year, one of the most advanced pieces of legislation in seeking to regulate powerful AI models.

OpenAI was among the AI companies that argued that some of its models should not be classed as “high risk” in early drafts of the act, which would have made them subject to tougher rules, according to three people involved in the negotiations. Despite this push, the company’s most capable models will fall under the remit of the act. OpenAI also argued against the EU’s push to examine all data given to its foundation models, according to people familiar with the negotiations. The company told the FT that pre-training data—the data sets used to give large language models a broad understanding of language or patterns—should be outside the scope of regulation, as it was a poor way of understanding an AI system’s outputs. Instead, it proposed the focus should be on post-training data used to fine-tune models for a particular task.

The EU decided that, for high-risk AI systems, regulators can still request access to the training data to ensure it is free of errors and bias.

…Since the EU’s law was approved, OpenAI has hired Chris Lehane, who worked for President Bill Clinton and on Al Gore’s presidential campaign and was Airbnb’s policy chief, as vice-president of public works. Lehane will work closely with Makanju and her team. OpenAI also recently poached Jakob Kucharczyk, a former competition lead at Facebook. Sandro Gianella, head of European policy and partnerships, joined in June last year after working at Google and Stripe, while James Hairston, head of international policy and partnerships, joined from Facebook in May last year.

The company was recently involved in a series of discussions with policymakers in the US and other markets around OpenAI’s Voice Engine model, which can clone and create custom voices; after concerns over the risks of misuse in this year’s global elections, the company narrowed its release plans. The team has been running workshops in countries facing elections this year, such as Mexico and India, and publishing guidance on misinformation. In autocratic countries, OpenAI grants one-to-one access to its models to “trusted individuals” in areas where it deems it unsafe to release the products.

…However, some industry figures are critical of OpenAI’s lobbying expansion. “Initially, OpenAI recruited people deeply involved in AI policy and specialists, whereas now they are just hiring run-of-the-mill tech lobbyists, which is a very different strategy”, said one person who has directly engaged with OpenAI on creating legislation. “They’re just wanting to influence legislators in ways that Big Tech has done for over a decade.”