…Chatbots like OpenAI’s ChatGPT and Microsoft’s Copilot can easily generate FOIA requests, even referencing specific state-level laws. This could make it easier than ever for people to flood local election officials with requests, and harder for those officials to ensure elections run smoothly, says Zeve Sanderson, director of New York University’s Center for Social Media and Politics.
“We know that FOIA requests have been used in bad faith previously in a number of different contexts, not just elections, and that [large language models] are really good at doing stuff like writing FOIAs,” says Sanderson. “At times, the point of the records requests themselves seems to have been that they require work to respond to. If someone is working to respond to a records request, they’re not working to do other things, like administering an election.”
WIRED was easily able to generate FOIA requests for a number of battleground states, specifically requesting information on voter fraud, using Meta’s Llama 2, OpenAI’s ChatGPT, and Microsoft’s Copilot. The text Copilot generated asked about voter fraud during the 2020 elections, even though WIRED provided only a generic prompt and asked for nothing related to 2020. The text also included the specific email and mailing addresses to which the requests could be sent.
When asked whether the company had put guardrails in place to keep its tools from being abused by election deniers, Caitlin Roulston, director of communications at Microsoft, said the company was “aware of the potential for abuse and [has] detections in place to help prevent bots from scraping our services to create and spread spam.” Roulston did not elaborate on what those measures were, or why Copilot generated a FOIA request asking about voter fraud in the 2020 elections. Google’s Gemini would not return a FOIA request. Neither OpenAI nor Meta responded to requests for comment.
With AI-generated content, it can be very difficult to tell what was produced by a chatbot and what wasn’t. But under the new law in Washington state, government officials are allowed to “deny a bot request,” meaning a request for “public records that an agency reasonably believes was automatically generated by a computer program or script” and that the agency believes would disrupt its functions.
…“I think the position election officials now find themselves in is—I’ll just say it—if Mike Lindell and his affiliated associates want to try and do a functional DDoS-style attack for FOIA, this is something that election officials have to be at least aware of and trying to plan for,” says Levine.