Does Mark Zuckerberg decide the truth? (Photo by Chip Somodevilla/Getty Images)

Facebook issued two statements in the past week relating to its treatment of "misinformation" – and they couldn't have been more different.
The first was a single paragraph updating its policy on stories speculating that Covid-19 is a man-made virus – after almost every major media outlet, and yesterday even the British and American security services, finally acknowledged that it is a feasible possibility.
"In light of ongoing investigations into the origin of COVID-19 and in consultation with public health experts," a Facebook spokesman said, "we will no longer remove the claim that COVID-19 is man-made or manufactured from our apps."
In other words, Facebook now believes that its censorship of millions of posts in the preceding months had been in error. There was, of course, no hint of apology in its most recent statement, though its tone proved quite the contrast to Facebook's boast last year that, in April alone, it displayed "warnings" on 50 million "pieces of content related to Covid-19". That was just the start; in February this year, Facebook even placed a warning on a piece for UnHerd by Ian Birrell, an award-winning investigative reporter who has been writing about the origins of Covid-19 since the start of the pandemic.
"When people saw those warning labels, 95% of the time they did not go on to view the original content," the company says. Moreover, if an article is rated "false" by its "fact checkers", the network will "reduce its distribution". In other words, the network can quietly hide a post so that it is not widely disseminated, without the author ever being aware that censorship is taking place.
The second announcement – released on the same day – was that Facebook is now extending its policy of "shadow-banning" accounts that promote misinformation. "Starting today, we will reduce the distribution of all posts in News Feed from an individual's Facebook account if they repeatedly share content that has been rated by one of our fact-checking partners." So now, if you repeatedly share something deemed to contain misinformation, your account could be silenced; you won't be informed, you won't know to what degree your content will be hidden and you won't know how long it will last – all thanks to a group of "fact-checkers" whose authority cannot be questioned.
The fact that this announcement was made on the very same day as Facebook's admission of error shows how unaccountable these global superpowers are, as well as the extent to which they can act as they please without fear of repercussion. Indeed, it's hardly surprising that they have increasingly adopted the paraphernalia of governments: Facebook's "Oversight Board" includes ex-politicians (whom it appoints), has its own constitution and passes down "binding" judgements on the company.
Yet imagine if a similar error had been made by a democratic government. There would be consequences: a public inquiry, perhaps, as well as demands for a change in policy and for people to resign. But Facebook – the sixth largest company in the world, whose apps are a source of information for 3.45 billion people, over half the world's adults – doesn't simply continue with its programme of cleansing "misinformation"; it doubles down on it.
Clearly, the moderation of social media posts is not a straightforward problem to solve. I don't favour the move to reclassify social networks as publishers – responsible in the same way as a newspaper for all the content they publish – because it would clearly incentivise more risk-aversion and censorship to avoid lawsuits. On the other hand, nor am I a free speech fundamentalist: it seems reasonable that, for example, posts directly inciting violence should be removed, and there is also a case for restricting outlandish medical quackery.
But the trend towards removing and shadow-banning content on still-developing controversies on the grounds of official untruth is censorship of a different order. In the realms of science and politics, the "truth" is always evolving. It is an epistemological fantasy to assume that it can be determined using censorship rather than inquiry.
And so it should concern us all that the full list of claims related to Covid-19 that are still being censored remains alarmingly extensive and definitive. Any "claims that downplay the severity of Covid-19" are subject to censorship, including any suggestion "that the mortality rate is the same or lower than seasonal influenza". Does this mean that this Politifact article, which concludes that influenza is in many cases more deadly for teens, would be censored? Similarly, "claims that Covid-19 cannot be transmitted in certain climates or weather conditions" are banned, putting an early end to the ongoing scientific debate around the seasonality of the virus.
Yet this climate of censorship was, in many ways, inevitable; what else are we to expect from a global corporation that has the power to determine what the "facts" are, and then police them into existence? As for the murky practice of shadow-banning, it will surely only add to the atmosphere of conspiracy and mistrust surrounding debates over Covid-19.
Big Tech got its "misinformation" policy wrong on the lab leak hypothesis – and, if its behaviour in the past week is any indication, it hasn't learnt its lesson.