AI-powered lies and manipulation constitute the gravest threat to humanity. At least, that is the dystopian scenario espoused by the collective wisdom of 1,500 experts surveyed for the World Economic Forum’s 2024 Global Risks Report, released last week.
Unfortunately, such outbreaks of “elite panic” are a recurring phenomenon. Whenever the public sphere is expanded through new communications technology, the traditional gatekeepers fret about the dangers of allowing the general public — too fickle and unlearned — unmediated access to information.
As the WEF’s annual meeting in Davos begins this week, the fear of democratic institutions drowned by lies in a “tech-enabled Armageddon” supercharged by AI marks the third wave of elite panic in the digital age. The first wave was ushered in by the widespread belief that Russian disinformation campaigns on social media contributed decisively to Donald Trump’s 2016 election victory. The second wave was the so-called “infodemic” unleashed by Covid-19.
It is undeniable that lies, propaganda and conspiracy theories thrive online and can lead to real-world harms. But there are good reasons to take a deep breath and adjust the hands of the disinformation Doomsday Clock. For all the visibility of mis- and disinformation, several studies suggest that its share of overall online content is modest. What’s more, those most likely to fall for and share unhinged conspiracy theories constitute a relatively small group of political hyper-partisans with low trust in institutions and the media, who already live in warped realities. For instance, one study showed that a mere 12 people were behind 65% of online vaccine misinformation.
Quite often, the panic causes more problems than the issue itself. Over the years, top-down measures used to “combat misinformation” have resulted in increased censorship, which can itself be weaponised. The first two waves of elite panic demonstrate this danger vividly: between 2016 and 2022, 91 laws targeting false or misleading information were passed around the globe, leading to the arrest of journalists and others who questioned official government policy.
France adopted a fake news law in 2018, and in 2022 the EU banned state-sponsored Russian media from being broadcast and even shared on social media. Both impeded efforts to document and debunk Russian propaganda. Meanwhile, in December 2023 the European Commission opened a legal investigation into X (previously Twitter), alleging shortcomings in the “effectiveness of measures taken to combat information manipulation on the platform” under its sweeping new Digital Services Act. This grants the Commission — a political body — powers to enact and enforce new speech rules affecting even legal content.
In the US, the First Amendment prohibits such measures. But this didn’t deter the federal government from trying to stem the tide of false information about Covid-19, vaccines and the 2020 presidential election. In September 2023, a federal court ruled that government officials — including at the White House, the CDC and the FBI — had likely violated the First Amendment in “a coordinated campaign” to pressure social media platforms into removing constitutionally protected content (the decision is pending before the Supreme Court). It is not difficult to imagine how these precedents could be used to target other forms of information that governments might deem undesirable under the nebulous term “disinformation”.
Elite panic doesn’t just have serious consequences for the ecosystem of free expression necessary for the pursuit of truth. The tendency to focus on the dangers of technology and its users ignores that much misinformation — not to mention outright lies — comes from the very politicians and governments who want coercive powers to define what is true or false. Optimising a rapidly evolving information environment for trust and reliability is a crucial task in the years ahead. Achieving this goal will be difficult; guided by elite panic, it might just be impossible.