April 29, 2022 - 10:10am
Earlier this year, the latest National Terrorism Advisory System Bulletin stated that the United States remained in a ‘heightened threat environment’ fuelled by ‘an online environment filled with false or misleading narratives and conspiracy theories, and other forms of mis- dis- and mal-information (MDM) introduced and/or amplified by foreign and domestic threat actors.’ The memo goes on to say ‘these threat actors seek to exacerbate societal friction to sow discord and undermine public trust in government institutions to encourage unrest, which could potentially inspire acts of violence.’
It’s this threat of MDM in particular which inspired the Biden Administration’s new Disinformation Governance Board.
Ostensibly set up to combat misinformation in minority communities, this new Board is just as difficult to critique as it is to support. From language like ‘could potentially inspire acts of violence,’ to its guiding and too-nebulous concepts of misinformation, disinformation, and malinformation, the Board’s mission borders on satire. It’s clearly an online censorship cabal.
Like fascism, white supremacy, or even real terrorism, the vagueness of MDM feels purposeful. Anything can qualify in the right light, including the only recently acknowledged but long confirmed story about Hunter Biden’s laptop, or statistics about the dubious benefits of mask-wearing to prevent the spread of Covid-19.
Then there’s the proposed leader of the Board, the “disinformation expert” Nina Jankowicz. As a class, Jankowicz included, “disinformation experts” feel curiously similar to Bush-era Islamism “experts.” They come with impressively long lists of credentials that upon closer inspection seem to amount to little more than: “I’ve spent years promoting the pre-packaged, state-sponsored narratives about how X is a threat to democracy.”
Just take a look at Jankowicz’s publications tab on her own website: it’s a list of articles for websites like BuzzFeed, with headlines like, “Facebook Groups Are Destroying America,” and “Facebook is undermining democracy.”
The board is, unsurprisingly, already under attack by Republicans, who have compared it to 1984’s Ministry of Truth.
Others on social media have compared it to China’s Great Firewall, suggesting that it may be a forerunner of similar policies. In his book TikTok Boom, reporter Chris Stokel-Walker states that since 1996, the Chinese government has seen the Internet as not only a threat to its national security, but to its social stability.
“The boundaries between reality and virtuality are becoming more ambiguous,” a 2017 memo from China’s cybersecurity administration noted, “Cybersecurity is not only related to the security of our country and our society, but more importantly is related to the personal interests of every netizen.”
That is, it’s not enough that Chinese citizens aren’t influenced politically or made the victims of cyberattacks, they must also be shielded from pernicious and potentially corrupting social trends.
These comparisons don’t feel dramatic. The crusade to censor the American Internet has, thus far, been a clumsy operation at best.
On the one hand, The New York Post lost access to its Twitter account for publishing a newsworthy (and truthful) story that threatened the Biden administration. Amazon Web Services, the largest cloud provider, meanwhile, banned the Right-wing social media app Parler in 2021. On the consumer side, users on social media platforms like Twitter and Instagram feel they are subjected to politically motivated suspensions and shadow bans.
But equally, social media accounts that post violent threats, some of them credible, regularly evade punishment. I’ve experienced it myself. This trend fuels many of the Left-wing complaints about online harassment and abuse, including those put forth by Jankowicz.
With all that in mind, I can’t help but wonder if this inconsistency is part of some master plan to sustain calls for tighter speech regulations. Censor dissenters and irregularly punish what’s obviously objectionable, like targeted threats of violence.
This is where Joe Biden’s Disinformation Board fits in; there will always be appeals to one’s sense of safety, pointing to clear-cut cases of harassment as proof of why we need more regulation. The digital noose is tightening, and the state is about to gain an even firmer grip on our online lives.
Words such as ‘disinformation’ and ‘misinformation’ are clearly creepy. Whatever happened to ‘that’s wrong’ or ‘your statement is incorrect’? This use of jargon is annoying.
My comment on ‘Is Ukraine Just A White Man’s War’ disappeared. It wasn’t ‘held for moderation’ and I wasn’t alerted to this. Nothing in my statement was against the community guidelines. I’m beginning to think Unherd is ashamed of their original readers.
As you say the whole US proposal and concept is creepy and sinister.
Regarding your experience here, I find that after an hour or so my submissions tend to come out of moderation or the somewhat absurdly entitled “Awaiting for approval” system – as if Unherd somehow approves of the random comments we submit rather than just suppressing the ones they find objectionable for some reason.
It is interesting to hear that your comment was vaporised since I don’t regard your comments as particularly inflammatory and I noticed yesterday Martin Bollis posted something that never came through moderation. I think you are right that Unherd do seem ashamed of their original readers and seem to have run some of them off. I don’t see anything from John Jones and Galeti T and from some others who used to post regularly. Moderation certainly seems to have become more heavy handed than in the early days.
As I have frequently said, an article from someone on the editorial team about the difficulties they face in running a moderation system would be interesting and directly pertinent to us as unpaid contributors. I not infrequently give more regard to the comments than to the original article, where standards can sometimes dip in terms of quality.
Most of the ‘original readers’ have sadly, long gone.
So curious that we hardly heard of MDM prior to learning that Hillary lost because of Russian adverts. Nearly all political adverts contain half-truths as par for the course, as we all understand. Various parties, including nations, sponsor and create psychological campaigns intended to divide and isolate segments of society. Nothing new here at all. Bots arrive to deliver these messages and are hard to police. The best any government agency can do is provide education. They can create their own propaganda to counter others’ campaigns, but if not creatively done it just looks like the heavy hand of annoyance. Stopping such messaging is an impossible task.
Not sure what the Biden administration thinks it can do with this new board. By creating it, they now have a point person that Congress can attack no matter what the board tries to accomplish. It might create new job openings for lawyers in the court cases that will surely follow.
Biden should be packed off to a retirement home in Ballina, Co Mayo, ASAP.
DGB will save us from wrongthink.