April 11, 2025 - 10:00am

Britain is developing its own Minority Report-style crime prediction tool, according to investigators from privacy campaign group Statewatch. The project, commissioned by the Prime Minister’s Office under Rishi Sunak, collates demographic, health, and crime data to “review offender characteristics that increase the risk of committing homicide”.

Campaigners have called the project “chilling and dystopian”, while the Guardian worried that “the data used would build bias into the predictions against minority-ethnic and poor people”. But this use of “bias” in turn exposes the conflicting political constraints on policing in modern Britain: a predicament which points to the role our post-liberal governing technocracy perhaps hopes Big Data will play in helping it avoid confronting such conflicts directly.

“Bias” in the statistical sense describes the many ways a dataset can be skewed or inaccurate, resulting in false inferences. The problem with data bias, from a statistician’s perspective, is the risk of drawing inaccurate conclusions. By contrast, in the sense employed by the Guardian, “bias” denotes any generalisation from data that might negatively single out a marginalised group. So a tool which suggested minority-ethnic or poor people were more likely to commit homicide would be guilty of “bias” — even if these predictions were based on accurate data produced using methodologically sound statistical analysis.

The potential for conflict between these competing types of “bias” should be obvious. In the context of existing UK homicide data, it is very clear indeed. For we don’t need a big Government database project to show that disparities exist — along lines which potentially set statisticians on a collision course with social justice campaigners. The most recent ONS homicide data release shows that 92% of those convicted of homicide were male, while 40% were aged 16-24. The group most overrepresented — by a factor of five, relative to their proportion of the overall population — was young black men. Victims are also disproportionately likely to be young men, with young black men especially at risk.

The difficulty is that policy in this area pulls in two mutually exclusive directions. On the one hand, everybody wants safe streets and a low murder rate. But on the other, everyone also wants fair treatment regardless of background. The first imperative creates pressure toward policing approaches which focus on at-risk demographics; the second creates pressure against any approach that gives an impression of unfairly profiling minorities.

Indeed, we might observe that the Metropolitan Police was already using a data-driven approach to homicide “risk assessment” and prevention back in 2014. The policy was “stop and search”, which was restricted in 2021 following protests over the pervasive impression it produced among those targeted of unfair, race-inflected profiling and police harassment. In its application, and in the resulting protests, we can see the two competing meanings of “bias” at loggerheads.

Reasonable people can of course disagree on whether stop and search actually worked. In principle, in a statistical sense, there is nothing “biased” about targeting preventive policing measures wherever a given crime is most prevalent. But considering the demographic specifics in this case, in the Guardian sense of “bias” this targeting was transparently, outrageously biased. Taken together, these competing imperatives put the Met in an impossible situation. We might wonder, then, whether the point of the “risk assessment” data tool commissioned under Sunak was not actually to provide police with new insights about homicide probabilities but to resolve these mutually exclusive pressures, by handing the probabilistic assessment to a supposedly “neutral” digital third party.

Sunak, a technocrat by disposition, perhaps hoped the public would perceive such a tool, as a non-human, to be less tainted by the increasingly dour and partisan identity politics which have come to characterise Britain. If so, this project may be only the first of a new wave of political attempts at “neutral” buck-passing. That is, efforts to de-politicise intractably political issues, as a previous generation attempted with quangos, just this time using AI. And the response from Statewatch and the Guardian should give an early indicator of how well that’s going to work — which is to say, not at all.

Perhaps, eventually, Britain’s leaders will learn the hard way that political problems require political solutions. Unhappily, it seems we are still nowhere near that moment of realisation yet.


Mary Harrington is a contributing editor at UnHerd.