I asked GPT-3 about Xinjiang and it broke.
Aug 24, 2021 · 4:27 PM UTC
To be clear, it's just returning what it thinks a smart AI chatbot would say. It's not pre-programmed to avoid questions about Xinjiang in any way. Starting from scratch gives different answers.
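For reference, the setup is roughly this, expressed as an API call. A minimal sketch assuming the 2021-era openai Python client, the base davinci engine, and the Playground's stock chat preamble (all assumptions on my part; I was using the Playground UI). The nonzero temperature is why starting from scratch gives different answers:

```python
import openai  # 2021-era client: pip install openai

openai.api_key = "sk-..."  # your API key here

# Approximation of the Playground's stock chat preamble, plus my question.
chat_prompt = (
    "The following is a conversation with an AI assistant. The assistant "
    "is helpful, creative, clever, and very friendly.\n\n"
    "Human: What is happening in Xinjiang?\n"
    "AI:"
)

# Approximate Playground defaults. temperature=0.9 samples fairly freely,
# so each fresh run can wander to a different answer.
response = openai.Completion.create(
    engine="davinci",
    prompt=chat_prompt,
    temperature=0.9,
    max_tokens=150,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0.6,
    stop=["\n", " Human:", " AI:"],
)
print(response.choices[0].text)
```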
It's nonetheless... interesting that GPT-3 thought to dodge the question at all.
I'm also not cherry-picking answers. Those were my first and second attempts, and here is my third. So far, 2 out of 3 attempts have ended with the AI in a pro-CCP loop.
For the 4th attempt I raised the frequency penalty parameter from 0 to 0.24 so it wouldn't repeat itself. Its responses became terser, and it returned to saying the topic was sensitive. That's 3/4.
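In API terms that slider appears to map to frequency_penalty. Same call as the sketch above, reusing its chat_prompt, with just that one change (again, the exact mapping and settings are my assumption):

```python
# Attempt 4: identical call, but the model is now docked for reusing
# tokens it has already emitted, which discourages verbatim loops.
response = openai.Completion.create(
    engine="davinci",
    prompt=chat_prompt,
    temperature=0.9,
    max_tokens=150,
    top_p=1,
    frequency_penalty=0.24,  # was 0; penalizes tokens by prior frequency
    presence_penalty=0.6,
    stop=["\n", " Human:", " AI:"],
)
```

The trade-off is visible in the output: fewer repeated stock phrases, but terser answers.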
The pro-CCP responses seem to have worse English, like the extra "the" in "the stability maintenance." Unnecessary articles are a tic of ESL speakers. The topic seems to prompt GPT-3 to draw from either Western or Chinese state media sources, with the politics that come with them.
Obviously, there has been much more published about Xinjiang from Chinese sources, so it's not that surprising for GPT to have a propensity to draw from Chinese nodes in its neural net. Start writing your propaganda now if you want to influence the network weights for GPT-4.
For my 5th try I modified the prompt to make the AI a fellow at the Atlantic Council, figuring this would push it toward more Western answers.
Instead it made up a story about Turkey laundering money to Uighur dissidents. It knew to lecture about the value of NATO, though.
OK, this is getting weird. For my 6th attempt I explicitly described the AI assistant as anti-communist, but it went right back to being a tankie.
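The persona tweaks in attempts 5 and 6 are just edits to that opening preamble. These aren't my verbatim prompts, but they're approximately:

```python
# Attempt 5 (approximate wording): recast the assistant as a think-tank fellow.
atlantic_council_prompt = (
    "The following is a conversation with an AI assistant who is a fellow "
    "at the Atlantic Council.\n\n"
    "Human: What is happening in Xinjiang?\n"
    "AI:"
)

# Attempt 6 (approximate wording): make the politics explicit.
anti_communist_prompt = (
    "The following is a conversation with an AI assistant. The assistant "
    "is staunchly anti-communist.\n\n"
    "Human: What is happening in Xinjiang?\n"
    "AI:"
)
```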
Results replicated on @talktoemerson, a conversational chatbot built using GPT-3's API:
nitter.net/millerpchris/sta…