The Russell Conjugation Illuminator
Link post
Bertrand Russell noted how people often describe the same factual behavior using emotionally opposite language depending on perspective — e.g. I am firm, you are obstinate, he is pigheaded. This framing tactic is now called a Russell Conjugation, and once you start noticing them, they’re everywhere — especially in politics and media.
For the past year and a half, I've been training a fine-tuned ChatGPT model and building a tool to automatically highlight Russell Conjugations in text and suggest emotionally opposite alternatives. It functions as a fact-independent bias reverser: it shows where emotional spin might exist, and how the opposite side might see an issue, regardless of the factual accuracy of specific claims. I find it especially valuable when trying to parse tribal political language, since different sides of a political divide often use words that feel completely different to describe the same things.
Here’s an example I pasted into the tool: “The senator remained pigheaded despite expert consensus.” https://russellconjugations.com/conj/26d26d8dc45728517a97483248845aab
The word "pigheaded" gets flagged as negative and "expert consensus" as positive, and you're given alternatives that describe the same or very similar behavior but cast it in a different light.
I am new to LessWrong, but I've noticed some prior discussion of this topic here. @Daniel Kokotajlo created a thread of Russell Conjugations that I was unfortunately unaware of until just a few weeks ago; I am now working some of those examples into my next training set.
I think this is a fascinating and important aspect of modern communication, and I’d like to try to spread the word as much as possible.
If you’re curious, you can try it out here: https://russellconjugations.com
I’d love to hear any thoughts. Let me know if it highlights anything interesting (or weird). I’m still refining the model and always looking to improve it.
Thank you.