Clare Melford is co-founder of the 'Global Disinformation Index' (Getty Images)


April 17, 2024

“Our team re-reviewed the domain, the rating will not change as it continues to have anti-LGBTQI+ narratives… The site authors have been called out for being anti-trans. Kathleen Stock is acknowledged as a ‘prominent gender-critical’ feminist.”

This was part of an email sent to UnHerd at the start of January from an organisation called the Global Disinformation Index. It was their justification, handed down after a series of requests, for placing UnHerd on a so-called “dynamic exclusion list” of publications that supposedly promote “disinformation” and should therefore be boycotted by all advertisers.

They provided examples of the offending content: columns by Kathleen Stock, whose writing is up for a National Press Award this week; by Julie Bindel, a lifelong campaigner against violence against women; and by Debbie Hayton, who is transgender. Apparently the GDI equates “gender-critical” beliefs, or maintaining that biological sex differences exist, with “disinformation” — despite the fact that those beliefs are specifically protected in British law and held by the majority of the population.

 

The verdicts of “ratings agencies” such as the GDI, within the complex machinery that serves online ads, are a little-understood mechanism for controlling the media conversation. In UnHerd’s case, the GDI verdict means that we only received between 2% and 6% of the ad revenue normally expected for an audience of our size. Meanwhile, neatly demonstrating the arbitrariness and subjectivity of these judgements, Newsguard, a rival ratings agency, gives UnHerd a 92.5% trust rating, just ahead of the New York Times at 87.5%.

So, what are these “ratings agencies” that could be the difference between life and death for a media company? How does their influence work? And who funds them? The answers are concerning and raise serious questions about the freedom of the press and the viability of a functioning democracy in the internet age.

***

Disinformation only really became a discussion point in response to the Trump victory in 2016, and was then supercharged during the Covid era: Google Trends data shows that worldwide searches for the term quadrupled between June and December 2016, and had increased by more than 30 times by 2022. In response to the supposed crisis, corporations, technology companies and governments all had to show they were taking some form of action. This created a marketplace for enterprising start-ups and not-for-profits to claim a specialism in detecting disinformation. Today, there are hundreds of organisations who make this claim, providing all sorts of “fact-checking” services, including powerful ratings agencies such as GDI and Newsguard. These companies act as invisible gatekeepers within the vast machinery of online advertising.

How this works is relatively straightforward: in UnHerd’s case, we contract with an advertising agency, which relies on a popular tech platform called “Grapeshot”, founded in the UK and since acquired by Larry Ellison’s Oracle, to automatically select appropriate websites for particular campaigns. Grapeshot in turn automatically uses the “Global Disinformation Index” to provide a feed of data about “brand safety” — and if GDI gives a website a poor score, very few ads will be served.


The Global Disinformation Index was founded in the UK in 2018, with the stated objective of disrupting the business model of online disinformation by starving offending publications of funding. Alongside George Soros’s Open Society Foundation, the GDI receives money from the UK government (via the FCDO), the European Union, the German Foreign Office and a body called Disinfo Cloud, which was created and funded by the US State Department.

Perhaps unsurprisingly, its two founders emerged from the upper echelons of “respectable” society. First, there is Clare Melford, whose biography published by the World Economic Forum states that she had previously “led the transition of the European Council on Foreign Relations from being part of George Soros’s Open Society Foundation to independent status”. She set up the GDI with Daniel Rogers, who worked “in the US intelligence community” before founding a company called “Terbium Labs”, which used AI and machine learning to scour the internet for illicit use of sensitive data and which he later sold, handsomely, to Deloitte.

Together, they have spearheaded a carefully intellectualised definitional creep as to what counts as “disinformation”. Back when it was first set up in 2018, they defined the term on their website as “deliberately false content, designed to deceive”. Within these strict parameters, you can see how it might have appeared useful to have dedicated fact-checkers identifying the most egregious offenders and calling them out. But they have since broadened the definition to encompass anything that deploys an “adversarial narrative” — stories that may be factually true, but pit people against each other by attacking an individual, an institution or “the science”.

GDI founder Clare Melford explained in an interview at the LSE in 2021 how this expanded definition was more “useful”, as it allowed them to go beyond fact-checking to targeting anything on the internet that they deem “harmful” or “divisive”:

“A lot of disinformation is not just whether something is true or false — it escapes from the limits of fact-checking. Something can be factually accurate but still extremely harmful… [GDI] leads you to a more useful definition of disinformation… It’s not saying something is or is not disinformation, but it is saying that content on this site or this particular article is content that is anti-immigrant, content that is anti-women, content that is antisemitic…”

Higher-traffic websites are rated by humans, she explains, but most are rated automatically by AI. “We actually instantiate our definition of disinformation — the adversarial narrative topics — within the technology,” explains Melford. “Each adversarial narrative is given its own machine-learning classifier, which then allows us to search for content that matches that narrative at scale… misogyny, Islamophobia, anti-Semitism, anti-black content, climate change denial, etc.”

Melford’s team and algorithms are, in effect, trained to identify and defund any content she finds offensive — not disinformation. Her personal bugbears are somewhat predictable: content supporting the January 6 “insurrection”, the pernicious influence of “white men in Silicon Valley”, and anything that might undermine the global response to the “existential challenge of climate change”.

The difficulty, however, is that most of these issues are highly contentious and require robust, uncensored discussion to find solutions. Challenges to scientific orthodoxy are particularly important, as the multiple failures of the official response to Covid-19 amply demonstrated. Indeed, one of the examples of GDI’s work that Melford highlighted in her LSE talk was an article about the Delta variant of Covid-19. “This is a Spanish language site talking about how a third of deaths in the United Kingdom from the Delta variant are amongst those people who are vaccinated, which is clearly untrue,” says Melford. “It is Chipotle that has been caught next to this ad unwittingly, and unfortunately for them have funded this highly dangerous disinformation about vaccines.”

The statistic being reported comes from a June 2021 Public Health England report into Covid variants, which sets out the 42 known deaths from the Delta variant between January and June: 23 were unvaccinated, 7 had received one shot and 12 were fully vaccinated. In other words, 29% of the deaths were among the fully vaccinated — around a third — and a further 17% among the partially vaccinated, making 45% vaccinated in total. To complicate matters further, Melford misread the Spanish headline: it actually claimed two thirds, or 66%, which is wrong in the other direction.
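The arithmetic is easy to verify. A minimal sketch, using only the death counts quoted from the PHE report above (the group labels are mine):

```python
# Deaths from the Delta variant, Jan-Jun 2021, per the PHE figures cited above
deaths = {"unvaccinated": 23, "one_dose": 7, "fully_vaccinated": 12}

total = sum(deaths.values())  # 42 known deaths

# Share of deaths in each vaccination group, as a rounded percentage
shares = {group: round(100 * n / total) for group, n in deaths.items()}
print(shares)  # {'unvaccinated': 55, 'one_dose': 17, 'fully_vaccinated': 29}

# Deaths among anyone vaccinated at all (one dose or fully vaccinated)
any_vaccinated = round(100 * (deaths["one_dose"] + deaths["fully_vaccinated"]) / total)
print(any_vaccinated)  # 45
```

So "a third" (29%) fully vaccinated is roughly right, and counting partial vaccination brings the figure to 45% — but neither number approaches the headline's claimed two thirds.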

Examples like this are far from rare. The GDI still hosts an uncorrected 2020 blog about the “evolution of the Wuhan lab conspiracy theory” surrounding Covid-19’s origins, which concludes that “cutting off ads to these fringe sites and their outer networks is the first action needed”. This is despite the fact that Facebook and other tech companies long ago corrected similar policies and conceded that it was a legitimate hypothesis that should never have been censored.

***

In the US, a number of media organisations have started to take action against GDI’s partisan activism, prompted by a GDI report in 2022 that listed the 10 most dangerous sites in America. To many, it looked simply like a list of the country’s most-read conservative websites. It even included RealClearPolitics, a well-respected news aggregator whose polling numbers are among the most quoted in the country. The “least risk of disinformation” list was, predictably enough, populated by sites with a liberal inclination.

In recent months, a number of American websites have launched legal challenges against GDI’s labelling system, which they claim infringes upon their First Amendment rights. In December, The Daily Wire and The Federalist teamed up with the attorney general of Texas to sue the State Department for funding GDI and Newsguard. A separate initiative to prevent the Defense Department from using any advertiser that uses Newsguard, GDI or similar entities has been successful, and is now part of federal law.

But GDI is a British company and, on this side of the Atlantic, the Conservative Government continues to fund it. A written question from MP Philip Davies revealed that £2.6 million had been given in the period up to last year, and that there is still “frequent contact” between the GDI and the FCDO “Counter Disinformation and Media Development” unit.

***

Yesterday, I was invited to give evidence to the House of Lords Communications and Digital Committee, during which I outlined the extent of the threat that self-appointed ratings agencies such as the Global Disinformation Index pose to the free media. The reality, as I told Parliament, is that GDI is merely the tip of the iceberg. At a time when the news media is so distrusted and faces a near-broken business model, the role of government should be to prevent, not encourage, and most certainly not fund, consolidations of monopoly power around certain ideological viewpoints.

But this isn’t simply a matter for the media. Both companies and those in the advertising sector also need to act: it cannot be good marketing for brands to target only half the population. Last year, Oracle announced it was cutting ties with GDI on free speech grounds, but as we discovered, it seems they are still collaborating via the Grapeshot platform: is Larry Ellison aware of this?

At its heart, the disinformation panic is becoming a textbook example of how a “solution” can do more harm than the problem it is designed to address. Educated campaigners such as Clare Melford may think they are doing the world a service, but in fact they are acting as intensifying agents, lending legitimacy to a conspiratorial world view in which governments and corporations are in cahoots to censor political expression. Unless something is done to stop them, they will continue to sow paranoia and distrust — and hasten us towards an increasingly radicalised and divided society.

***

This article has been updated to reference the fact that Clare Melford misread the headline about deaths from the Delta variant, and it actually claimed 2/3 not 1/3.


Freddie Sayers is the Editor-in-Chief & CEO of UnHerd. He was previously Editor-in-Chief of YouGov, and founder of PoliticsHome.
