May 24, 2022 - 10:00am

Every mass casualty event in the United States, or at least every highly publicised one, seems to lead to a conversation about censorship. It's not just that we have a gun problem in the US, the activists insist: we also have a media problem. In the 90s, we speculated about the impact that heavy metal, gangster rap, and violent video games had on our teenage boys, each one a potential tinderbox. As we rolled into the 2000s, everything from Marilyn Manson to South Park to Grand Theft Auto was interrogated as potentially dangerous.

Today, Manson is off the hook, and social media is in the spotlight. The mainstream consensus is that pernicious algorithms promote racially motivated disinformation, misinformation, and malinformation. If we don't censor content effectively enough, it's argued, your son or daughter may log onto Facebook to message a friend and log off a member of Atomwaffen or QAnon. At a minimum, they just might become a Joe Rogan listener. But what counts as dis-, mis-, or malinformation remains nebulous. It's like the American Right and porn: you know it when you see it; just get it off our screens.

In recent years, concern has also grown about platforms like Discord, which hosts private, invitation-only conversations for both groups and individuals, similar to an old-school chatroom. Setting aside the ethics of surveilling private conversations, censoring these communications is even more complex than deciding who or what to de-platform. A recent article for The New Yorker summarised it as a "game of whack-a-mole", which sounds about right.

There are tech companies working on tools to help scale moderation, like Surge AI, which uses natural language processing to help platforms better understand the context in which particular things are posted, but that tech is still in the early stages. Plus, if TikTok has taught us anything, text-based community standards are relatively easy to evade with enough coordination. On TikTok, for example, "sex" becomes "seggs" and "suicide" becomes "unalived", and so on. As the moderation team is alerted to these euphemisms, new ones spring up in their place. All this to say: people who want to talk about expurgated topics will.
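To make that cat-and-mouse dynamic concrete, here is a minimal sketch of the kind of keyword blocklist text-based community standards tend to rely on; the blocklist terms and function name are hypothetical, chosen purely for illustration. A filter like this catches the banned word but waves the euphemism straight through, which is exactly why moderation teams end up chasing an ever-growing list.

```python
# Minimal sketch of a keyword-based moderation filter (hypothetical, for illustration).
# A static blocklist flags exact terms but misses the euphemisms users coin to evade it.

import re

BLOCKLIST = {"sex", "suicide"}  # terms the platform bans outright


def is_flagged(post: str) -> bool:
    """Return True if any blocklisted word appears as a whole word in the post."""
    words = re.findall(r"[a-z']+", post.lower())
    return any(word in BLOCKLIST for word in words)


posts = [
    "we need to talk about sex ed",         # flagged: exact match
    "we need to talk about seggs ed",       # missed: euphemism slips through
    "thinking about suicide prevention",    # flagged: exact match
    "he got unalived in the last episode",  # missed: euphemism slips through
]

for post in posts:
    print(is_flagged(post), "-", post)
```

Context-aware approaches like the one Surge AI is working on aim to close that gap by looking at meaning rather than exact strings, but, as the article notes, that technology is still early.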

Other companies, like Facebook, hire contractors as content moderators to manually review and remove content. This might be effective on some level, but it takes a psychological toll on those workers. Posts start to fall through the cracks, and users still report that both credible threats of violence and (relatively) more benign abuse, like spam and fraud, go ignored.

Meanwhile, a grassroots attempt at surveillance comes from activists who infiltrate groups they believe may be at risk of turning violent. (Hence the paranoia about "feds" that feels endemic in online Right-wing communities.) Unicorn Riot, a Left-leaning media group, has an entire vertical called "DiscordLeaks", which is exactly what the name suggests. To expose them, it posts chat logs from Discord servers it believes are populated by far-Right extremists who may or may not be at risk of committing acts of terrorism.

The danger here is that it's not clear when groups like Unicorn Riot are exposing credible threats of violence versus when they're outing private communications they view as personally despicable. Most of us can agree that we don't support racism, but I don't know how many of us think all racists deserve to have their privacy invaded carte blanche just because a non-profit says so. At what point is it justified to have vigilantes spy on you? Where's the line?

The lessons of the Buffalo shooting show that we still have a long way to go. But one thing's for sure: trying to censor private conversations online is a futile and invasive endeavour.


Katherine Dee is a writer. To read more of her work, visit defaultfriend.substack.com.
