Every mass casualty event in the United States – at least every highly publicised one – seems to lead to a conversation about censorship. It's not just that we have a gun problem in the US, the activists insist: we also have a media problem. In the 90s, we speculated about the impact that heavy metal, gangster rap, and violent video games had on our teenage boys, each one a potential tinder box. As we rolled into the 2000s, everything from Marilyn Manson to South Park to Grand Theft Auto was interrogated as potentially dangerous.
Today, Manson is off the hook, and now it's social media in the spotlight. The mainstream consensus is that pernicious algorithms promote racially motivated disinformation, misinformation, and malinformation. If we don't censor content effectively enough, it's argued, your son or daughter may log onto Facebook to message a friend and log off a member of the Atomwaffen or QAnon. At a minimum, they just might become a Joe Rogan listener. But what counts as dis-, mis-, or malinformation remains nebulous. It's like the American Right and porn: you know it when you see it, just get it off our screens.
In recent years, concern has also grown about platforms like Discord, which hosts private, invitation-only conversations for both groups and individuals, similar to an old-school chatroom. Censoring these types of communication, the ethics of surveilling private conversations aside, is even more complex than knowing whom or what to de-platform. A recent article for The New Yorker summarised it as a "game of whack-a-mole", which sounds about right.
There are tech companies working on tools to help scale moderation, like Surge AI, which uses natural language processing to help platforms better understand the context in which particular things are posted, but that tech is still in the early stages. Plus, if TikTok has taught us anything, it's that text-based community standards are relatively easy to evade with enough coordination. On TikTok, for example, "sex" becomes "seggs", "suicide" becomes "unalived", and so on and so forth. As the moderation team is alerted to these euphemisms, new ones spring up in their place. All this to say: people who want to talk about expurgated topics will.
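To see why that wordplay works, consider a toy sketch of the simplest possible text rule: a keyword blocklist. This is an illustrative assumption, not Surge AI's, TikTok's, or any platform's actual moderation system; the banned terms and euphemisms are simply the ones cited above.

```python
# Toy illustration only: a naive keyword blocklist of the kind the euphemisms
# above are designed to defeat. Not any platform's real moderation pipeline.

BLOCKLIST = {"sex", "suicide"}  # terms a platform might ban outright


def naive_filter(post: str) -> bool:
    """Return True if a plain keyword match would flag the post for removal."""
    words = (word.strip(".,!?'\"") for word in post.lower().split())
    return any(word in BLOCKLIST for word in words)


print(naive_filter("we need to talk about suicide prevention"))  # True: flagged
print(naive_filter("we need to talk about getting unalived"))    # False: slips through
```

Each time moderators add "seggs" or "unalived" to such a list, the community simply coins the next variant – the whack-a-mole The New Yorker describes.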
Other companies, like Facebook, hire contractors to act as content moderators, manually reviewing and removing content. This might be effective on some level, but it takes a psychological toll on the moderators themselves. Posts start to fall through the cracks, and users still report that both credible threats of violence and (relatively) more benign problems, like spam and fraud, go ignored.
Meanwhile, a grassroots attempt at surveillance comes from activists who infiltrate groups they believe may be at risk for violence. (Hence the paranoia about "feds" that feels endemic in online Right-wing communities.) Unicorn Riot, a Left-leaning media group, has an entire vertical called "DiscordLeaks", which is exactly what the name suggests. They post chat logs from Discord servers they believe to be populated by far-Right extremists – people who may or may not be at risk of committing acts of terrorism – in order to expose them.