How did the system let him through? (TOLGA AKMEN/AFP via Getty Images)


October 19, 2021

There was a feeling of inevitability surrounding reports that Ali Harbi Ali, the man alleged to have murdered Sir David Amess MP, was referred to the Prevent programme some years ago.

Prevent is a much-mocked government counter-terrorism strategy, which looks out for warning signs that someone is at risk of being radicalised into some form of violent extremism. About 6,000 people are referred to it each year. 

Every high-profile murder or act of terrorism appears to be followed by a story like this. The Manchester Arena suicide bomber was known to the security services. Lee Rigby’s killers were known to the security services. At least one of the London Bridge attackers was known to the security services. It’s not just in cases of terrorism: the murderer of Alice Gross had a criminal record for sexual assault. Sarah Everard’s killer had a history of indecent exposure.

There is a further sense of inevitability over the fact that, three days after Sir David’s murder, Priti Patel, the home secretary, declared that Prevent is “under review” to ensure that it is “fit for purpose”. This is, in fact, an ongoing review that was launched in 2019 and is due to report in December, rather than a response to the atrocity. Yet the announcement was clearly intended to suggest that the murder represented a failure on the part of the surveillance and deradicalisation system.

But is it? The sense of inevitability is real: these stories crop up a lot. But the truth is they will keep cropping up, almost every time there is a terror attack, and almost no matter what we do with Prevent or whatever succeeds it. Prevent may be failing — though I’ll happily defer judgment until the publication of the review — but even if it were successful, when terrorist attacks happen, it is likely that the perpetrators would be on its books, or known to the security services.

And more than that: as callous as it sounds, that may be for the best. To give a sense of why that is, it’s helpful to look at another system for spotting future risks: cancer screening.

You screen for prostate cancer with a blood test, which looks for levels of something called “prostate-specific antigen”, or PSA. If you’re between 50 and 69 years old, the “normal” level of PSA is given as below three nanograms per millilitre (ng/ml). If your test comes back higher than that, you are flagged as at risk of having prostate cancer.

But according to the NHS, about 15% of men who do have prostate cancer actually have “normal” levels of PSA: that is, below 3ng/ml. If you run a screening test like this, you’ll falsely reassure roughly one cancer sufferer in seven that they are cancer-free.

There’s a straightforward solution to that, of course. Instead of putting your cut-off at 3ng/ml, you could make it 2.5ng/ml. Then you’d miss fewer cancers.

You can probably guess the outcome, though. If you move the cut-off lower, you’ll miss fewer real cancers, but you’ll scare more men unnecessarily. If you move the cut-off higher, you’ll get fewer false positives — you won’t tell as many men that they have cancer when they don’t — but you’ll get more false negatives: you’ll miss more real cancers.
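To make that trade-off concrete, here is a minimal sketch in Python. The PSA distributions are invented purely for illustration (log-normal curves, roughly tuned so that about 15% of the notional cancer group sits below 3ng/ml); they bear no relation to real clinical data.

```python
# Toy illustration of the screening cut-off trade-off.
# The distributions are made up for illustration; they are not real clinical data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical PSA levels (ng/ml): 10,000 men without cancer, 1,000 with cancer.
no_cancer = rng.lognormal(mean=0.2, sigma=0.6, size=10_000)
cancer = rng.lognormal(mean=1.8, sigma=0.7, size=1_000)

for cutoff in (3.0, 2.5):
    false_positives = (no_cancer >= cutoff).sum()  # healthy men flagged as "at risk"
    false_negatives = (cancer < cutoff).sum()      # real cancers reassured as "normal"
    print(f"cut-off {cutoff} ng/ml: {false_positives} false alarms, "
          f"{false_negatives} missed cancers")
```

Lowering the cut-off from 3.0 to 2.5 shrinks the second number and inflates the first; no choice of cut-off makes both go away.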

You might think that it’s a pretty straightforward decision. Telling people that they have cancer when they don’t is inconvenient and alarming; telling people that they don’t have cancer when they do might kill them. So you err on the side of caution.

But that’s not how it works. False positives on cancer screenings can literally kill. People can end up having unnecessary surgeries, X-rays, chemo or radiotherapy. Complications are sufficiently common that the NHS says that the benefits of prostate cancer screening do not outweigh the risks.

The cost of getting it wrong is real with terrorism, too. The process is somewhat different, though. When it comes to the risk of someone being radicalised, there isn’t, unlike a blood test, a number that you can read off. That doesn’t mean there can’t be one: it could be that the people who are passed on to Prevent are given a quasi-objective score. We do exactly that for, say, suicidality risk or autism or happiness, or any one of a thousand psychological constructs. You tick boxes on a questionnaire about someone’s isolation, their anger, their ideology, and if the score on the questionnaire adds up to more than 40, or whatever, you declare them a terrorism risk. That is precisely what goes on, in a less obvious and open way, with things like AI parole decisions.
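For the sake of argument, such a checklist might look something like this sketch. The items, the weights and the threshold of 40 are all invented for illustration; they are not Prevent’s actual criteria.

```python
# A hypothetical box-ticking risk score of the kind described above.
# The items, weights and threshold are invented for illustration;
# they are not Prevent's actual criteria.
RISK_ITEMS = {
    "social_isolation": 10,
    "expressed_grievance": 15,
    "contact_with_extremist_material": 20,
    "talk_of_violence": 25,
}

def risk_score(ticked_boxes: set[str]) -> int:
    """Add up the weight of every box that has been ticked."""
    return sum(weight for item, weight in RISK_ITEMS.items() if item in ticked_boxes)

score = risk_score({"social_isolation", "expressed_grievance"})
print(score, "flagged as a terrorism risk" if score > 40 else "not flagged")
```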

But as it happens, there isn’t an explicit number. The Prevent guidance says: “There is no fixed profile of a terrorist, so there is no defined threshold to determine whether an individual is at risk of being drawn into terrorism.” So there’s no nice straightforward “Terrorism risk: 13.6” readout.

Nonetheless, the same process is going on. A person comes into contact with the counter-terrorism services. Of the 6,000 or so who are referred to Prevent each year, about 500 are deemed “vulnerable” to radicalisation and are passed on to a follow-on programme called “Channel”.

Referral to Channel is not based on an objective score but on a subjective feeling (or “expert judgment”): people will be flagged up if the panel judging them feels sufficiently strongly that they’re a risk. They might not explicitly say “Terrorism risk: 13.6”, but still, there is a threshold of risk over which someone is considered a threat.

And just like the PSA levels in the blood, that assessment will be imperfect. If you see some young man saying something disturbing online, is it harmless anger and braggadocio, or is he a terrorist in the making? You can raise your implicit threshold and avoid harassing innocent people, at the cost of an increased risk of missing a genuine terrorist; or you can lower it and correctly identify more terrorists, at the cost of labelling a lot of harmless people as potential terrorists.

It might seem, like the cancer test in reverse, that there’s an asymmetry here: a false positive annoys people; a false negative kills people. But also like the cancer test, it’s not as simple as that. The false positives will be a lot more common than the false negatives — simply put, there are more mouthy non-terrorists than actual terrorists in the world. As I said before, about 6,000 people are referred to Prevent every year, and 500 or so are judged to be of sufficiently high risk to be passed on to Channel. But there have been only four actual terror attacks in the last two years.
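A back-of-the-envelope calculation with those figures makes the imbalance plain. Even on the wildly generous assumption (mine, for the sake of the sketch) that every future attacker is among the 500 passed to Channel, almost everyone flagged will never commit an attack.

```python
# Back-of-envelope arithmetic with the figures quoted above:
# ~500 people a year passed to Channel, ~2 attacks a year (4 over two years).
# Assuming, very generously, that every attacker is among the 500.
channel_per_year = 500
attacks_per_year = 2

false_alarm_share = (channel_per_year - attacks_per_year) / channel_per_year
print(f"Even then, {false_alarm_share:.1%} of those flagged never go on to attack.")
```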

If you lower your threshold, raise the alarm on more borderline cases, then you will waste more police time, put more innocent people under needless scrutiny and stigmatise more communities. (It seems inevitable that a lowered threshold will mean more young Muslim men, in particular, being picked up by the security services.) That may be a price worth paying — but, let’s be clear, it will be a price that you pay. If you take in everybody suspicious for questioning — every misogynist loser on incel subreddits, every angry racist or radical Islamist on dark-web chatrooms — you may prevent one or two more attacks, but you will undoubtedly fill up your jails and enrage the populace.

This doesn’t mean there’s nothing you can do. Moving your threshold is zero-sum. But you can change your test: if instead of testing for PSA you looked for some other marker, you might be able to tell whether someone had cancer with more accuracy. In the case of counter-terrorism, you could do things like increasing police funding for surveillance, rather than simply being more strict about your criteria — although of course that would mean less money elsewhere.

An alternative suggestion might be to abandon “expert judgment” and introduce something like the explicit algorithm I described above: human judgment is famously terrible at predicting complex things such as how likely a criminal is to reoffend, and algorithms consistently outperform us, as the psychologist Paul Meehl demonstrated way back in 1954. They beat humans at predicting the price of wine, how long a cancer patient will live, who will win a football match, how likely a business is to succeed, and dozens of other things. It is likely that some fairly simple algorithm could do significantly better than the best experts at predicting who is a terror risk, as well.

But it could only ever be a partial improvement. Humans are stubbornly hard to predict. Part of the reason algorithms can outperform human judgment in those fields is that human judgment is consistently terrible: we are very often wrong about who will reoffend, who will live and die, who will win a football match. It is not that algorithmic prediction is great; the future is still hard to know. However good we make our systems for detecting terrorists, they will never be very good. So terrible things will always happen, and when they do, we will assume our systems are too lax and need to be tightened.

It is tempting to think like that in the wake of an atrocity such as David Amess’s murder: to think that we ought to lower our thresholds of what counts as a risk. Perhaps it’s even true. But just as it is an unavoidable fact of reality that missing fewer real threats means raising far more false alarms, so a world in which Sir David’s murderer was caught ahead of time could be a grimly authoritarian one.


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.
