My thread was hidden with no explanation
On Tuesday, I wrote a long thread on X (formerly Twitter) based on my most recent UnHerd article on the Israeli-Palestinian peace process. Almost immediately, however, other users started replying to the thread, telling me that they couldn’t see it. Only the first post was visible, with the rest simply marked by the text: “This Post is unavailable.”
Some users also told me that even if they shared the thread, it wouldn’t appear on their profile. I also soon realised that other users were not being notified of my replies to their comments, since none of them ever received an answer. I immediately knew what was happening: the thread — or, more likely, my entire account — had been shadowbanned.
Shadowbanning (or “visibility filtering”) is a particularly disturbing form of online censorship in which a user’s content is suppressed in several ways — by restricting the number of people who can see it, hiding it or reducing its prominence in search results, or blocking threads and replies — without it being readily apparent to the user, or easily provable.
The blotting out of an entire thread is a rather overt form of censorship, but other forms of visibility limitation are much more subtle, and often the only sign that something is off is a sudden drop in interactions. Since the whole point of shadowbanning is that, officially, it isn’t even taking place, the user has no way of appealing the decision — or even of knowing why it’s happening in the first place.
For a long time, Twitter denied that shadowbanning even existed. Only once Elon Musk took over the company and started releasing internal documents via the Twitter Files did it become apparent that not only did shadowbanning exist, but that it had been used on a massive scale to deamplify or deboost content that didn’t chime with official narratives, especially relating to Covid-19, often under direct pressure from the US government. After taking over the company in late 2022, Musk immediately vowed to put an end to shadowbanning, saying that users have the right to know if their accounts are limited in any way.
However, he has since admitted that addressing the issue is proving much more difficult than expected. In June, he said that shadowbanning is buried so deep in the Twitter code that shadowbans are often triggered automatically by the algorithm itself — based, for example, on how many times an account is reported — and that “it often takes us hours to figure out who, how and why an account was suspended or shadowbanned.”
So what is going on? Was I, like others, simply the victim of a ghost in the X/Twitter machine — an out-of-control algorithm that shadowbans users for reasons inscrutable even to the company techs? Or is there more at play here? After all, since the start of Israel’s attack on Gaza, hundreds of users have accused major social media platforms — Instagram, Facebook, YouTube, TikTok and even X — of censoring pro-Palestine content, mostly in the form of shadowbanning. Ironically, this happened just a few days after I took part in the launch of an international anti-censorship appeal known as the Westminster Declaration.
Twitter’s automated response to my demand for explanations certainly seemed to suggest I’d been deliberately censored: “Sometimes, we will take action on an account or post(s) based on behaviors that create a negative impact on X.” Ultimately, though, there’s really no way of knowing — and that’s the crux of the problem. Even if the censorship is political, who is driving it? The companies themselves, the advertisers, the governments, “fact-checkers”, the Anti-Defamation League? We simply don’t know.
The opacity of the whole process is the most disturbing part. Traditional censorship, usually at the hands of governments, is bad enough, but postmodern online censorship — where you don’t know who is censoring you or why, and you are often gaslit into believing you’re imagining it all — is truly dystopian.
In a final Kafkaesque twist, last night the shadowban on my thread was lifted. No reason was given.