“Reddit and the Struggle to Detoxify the Internet: How Do We Fix Life Online without Limiting Free Speech?”, Andrew Marantz, 2018-03-12:

Although redditors didn’t yet know it, Huffman could edit any part of the site. He wrote a script that would automatically replace his username with those of The_Donald’s most prominent members, directing the insults back at the insulters in real time: in one comment, “Fuck u/Spez” became “Fuck u/Trumpshaker”; in another, “Fuck u/Spez” became “Fuck u/MAGAdocious.” The_Donald’s users saw what was happening, and they reacted by spinning a conspiracy theory that, in this case, turned out to be true. “Manipulating the words of your users is fucked”, a commenter wrote. “Even Facebook and Twitter haven’t stooped this low.” “Trust nothing.”

…In October, on the morning the new policy was rolled out, Ashooh sat at a long conference table with a dozen other employees. Before each of them was a laptop, a mug of coffee, and a few hours’ worth of snacks. “Welcome to the Policy Update War Room”, she said. “And, yes, I’m aware of the irony of calling it a war room when the point is to make Reddit less violent, but it’s too late to change the name.” The job of policing Reddit’s most pernicious content falls primarily to three groups of employees—the community team, the trust-and-safety team, and the anti-evil team—which are sometimes described, respectively, as good cop, bad cop, and RoboCop. Community stays in touch with a cross-section of redditors, asking them for feedback and encouraging them to be on their best behavior. When this fails and redditors break the rules, trust and safety punishes them. Anti-evil, a team of back-end engineers, makes software that flags dodgy-looking content and sends that content to humans, who decide what to do about it.

Ashooh went over the plan for the day. All at once, they would replace the old policy with the new policy, post an announcement explaining the new policy, warn a batch of subreddits that they were probably in violation of the new policy, and ban another batch of subreddits that were flagrantly, irredeemably in violation. I glanced at a spreadsheet with a list of the hundred and nine subreddits that were about to be banned (r/KKK, r/KillAllJews, r/KilltheJews, r/KilltheJoos), followed by the name of the employee who would carry out each deletion, and, if applicable, the reason for the ban (“mostly just swastikas?”). “Today we’re focusing on a lot of Nazi stuff and bestiality stuff”, Ashooh said. “Context matters, of course, and you shouldn’t get in trouble for posting a swastika if it’s a historical photo from the 1936 Olympics, or if you’re using it as a Hindu symbol. But, even so, there’s a lot that’s clear-cut.” I asked whether the same logic—that the Nazi flag was an inherently violent symbol—would apply to the Confederate flag, or the Soviet flag, or the flag under which King Richard fought the Crusades. “We can have those conversations in the future”, Ashooh said. “But we have to start somewhere.”

At 10AM, the trust-and-safety team posted the announcement and began the purge. “Thank you for letting me do DylannRoofInnocent”, one employee said. “That was one of the ones I really wanted.”

“What is ReallyWackyTicTacs?” another employee asked, looking down the list. “Trust me, you don’t want to know”, Ashooh said. “That was the most unpleasant shit I’ve ever seen, and I’ve spent a lot of time looking into Syrian war crimes.”

Some of the comments on the announcement were cynical. “They don’t actually want to change anything”, one redditor wrote, arguing that the bans were meant to appease advertisers. “It was, in fact, never about free speech, it was about money.” One trust-and-safety manager, a young woman wearing a leather jacket and a ship captain’s cap, was in charge of monitoring the comments and responding to the most relevant ones. “Everyone seems to be taking it pretty well so far”, she said. “There’s one guy, freespeechwarrior, who seems very pissed, but I guess that makes sense, given his username.” “People are making lists of all the Nazi subs getting banned, but nobody has noticed that we’re banning bestiality ones at the same time”, Ashooh said…“I’m going to get more cheese sticks”, the woman in the captain’s cap said, standing up. “How many cheese sticks is too many in one day? At what point am I encouraging or glorifying violence against my own body?” “It all depends on context”, Ashooh said.

I understood why other companies had been reluctant to let me see something like this. Never again would I be able to read a lofty phrase about a social-media company’s shift in policy—“open and connected”, or “encouraging meaningful interactions”—without imagining a group of people sitting around a conference room, eating free snacks and making fallible decisions. Social networks, no matter how big they get or how familiar they seem, are not ineluctable forces but experimental technologies built by human beings. We can tell ourselves that these human beings aren’t gatekeepers, or that they have cleansed themselves of all bias and emotion, but this would have no relation to reality. “I have biases, like everyone else”, Huffman told me once. “I just work really hard to make sure that they don’t prevent me from doing what’s right.”