In Alex Garland’s recent sci-fi TV series Devs, Silicon Valley engineers have built a quantum computer that they think proves determinism. It allows them to know the position of all the particles in the universe at any given point, and from there, project backwards and forwards in time, seeing into the past and making pinpoint-accurate forecasts about the future.
Garland’s protagonist, Lily Chan, isn’t impressed. “They’re having a tech nerd’s wettest dream,” she says at one point. “The one that reduces everything to nothing — nothing but code”. To them, “everything is unpackable and packable; reverse-engineerable; predictable”.
It would be a spoiler to tell you how it all ends up, but Chan is hardly alone in criticising the sometimes-Messianic pronouncements of tech gurus. Indeed, her lines might as well have been written by the entrepreneur and business writer Margaret Heffernan, whose book Uncharted provides a robust critique of what she calls our “addiction to prediction”.
Our fervent desire to know and chart the future — and our exaggerated view of our ability to do so — forces us, she says, into a straitjacket whenever some authoritative-sounding source makes a prediction: the future’s laid out, we know what’ll happen — it’s been forecast. Only by kicking this habit, she argues, “do we stop being spectators and become creative participants in our own future”.
That’s something of a lofty goal, but as we’ll see, the consequences of misunderstanding predictions can be far more immediate. In pandemics, it can end up killing thousands of people.
Heffernan does get to pandemic disease in the latter part of her book, but before that, she provides some cautionary tales that are useful to readers way beyond her targeted “business book” audience. Take, for instance, the 2013 prediction by researchers at the Oxford Martin School that “by 2035, 35% of jobs will have been taken by machines”. As Heffernan notes, this was impossibly specific: in exactly this many years, exactly this percentage of jobs will be done by robots. When you think about it, such specificity is absurd, but it didn’t half grab the media’s attention, playing on people’s quite reasonable fears about the coming age of automation. The resulting media discussion, Heffernan says, “projected inevitability onto what was no more than a hypothesis”.
There are subtler manifestations of the prediction addiction. In science, for example, researchers — and I include myself in this — often deploy the word “predict” in a way that doesn’t comport with its everyday usage. Variable X predicts variable Y, they say, even though both were measured at exactly the same time. What they mean is that, if you didn’t know anything about Y, you would have some information about it if you knew X. But this “prediction” can be very weak: usually just “a bit better than chance” rather than “with a strong degree of accuracy”. By the time this translates to the public, often via hyped press releases, it’s frequently been imbued with a great deal more certainty than is warranted by the data.
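To see just how weak that kind of “prediction” can be, here’s a small, hypothetical simulation (the numbers are mine, not Heffernan’s): a variable X that correlates with Y at r ≈ 0.2, a fairly typical effect size in this literature, accounts for only about 4% of the variance in Y.

```python
import random

# Hypothetical illustration: build Y as mostly noise, with only a small
# contribution from X, so that the true correlation is about r = 0.2.
random.seed(1)
n = 100_000
r = 0.2
xs, ys = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    y = r * x + (1 - r ** 2) ** 0.5 * random.gauss(0, 1)
    xs.append(x)
    ys.append(y)

def corr(a, b):
    """Pearson correlation coefficient of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

r_hat = corr(xs, ys)
print(f"X 'predicts' Y: r = {r_hat:.2f}, variance explained = {r_hat ** 2:.1%}")
```

In other words, knowing X tells you a little about Y, but roughly 96% of the variation remains unexplained; calling that “prediction” stretches the everyday meaning of the word a long way.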
We can see why this is, of course: science should be about predicting the world, the better to help us change or improve it. But the sheer prevalence of the p-word, often used in weaselly ways to boost the perceived importance of one’s research findings, is evidence that Heffernan is on the right track. The incentives push scientists towards making pronouncements about predictions, even if that’s not what any normal person would call their results.
As well as sins of specificity, Heffernan critiques the opposite tendency: over-generalising. Mere labels are surprisingly powerful: the so-called jingle fallacy is where we assume that two things are similar just because they have the same name (as opposed to the jangle fallacy, where two similar things are assumed to be different because they have different names). Heffernan argues that the British military committed this fallacy by over-generalising lessons they’d learned during the insurgency in Northern Ireland in their predictions about the course of the very different insurgency in Iraq.
But Heffernan overeggs things by stating baldly at one point that “the future is unknowable”. For sure, we’re rather far off the Devs quantum-computer level, but some degree of prediction is still possible — not to mention highly desirable (think of forecasting devastating health complaints like heart attacks, for example, or oil prices as the world economy fluctuates).
Heffernan does seem to agree with this, because she gives advice on how to improve predictions without falling prey to the sort of faux-specific pseudoscience or misleading generalisations we’ve just encountered. Her formula is essentially this: use humility. After all, it’s the over-certainty in our predictive abilities that’s the real problem she’s addressing. She specifically praises the approach taken by the psychologist Philip Tetlock and his Good Judgment Project, whose “superforecasters” make predictions in terms of probabilities, a format that encourages them to consider uncertainty and, more importantly, allows them to be held to account: their accuracy is judged, months or years after each forecast, with a so-called Brier score.
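The Brier score that holds the superforecasters to account is simple enough to sketch. For yes/no questions it is just the average squared gap between the probability you gave and what actually happened; here is a minimal Python version (the example forecasts are invented for illustration):

```python
def brier_score(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 1 if the event
    happened, 0 if it didn't. Lower is better: 0.0 is perfect, and
    always hedging at 50% earns exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A confident, well-calibrated forecaster scores near zero...
print(round(brier_score([0.9, 0.8, 0.1], [1, 1, 0]), 3))  # prints 0.02
# ...an uninformative one, saying 50% every time, scores 0.25...
print(round(brier_score([0.5, 0.5, 0.5], [1, 1, 0]), 3))  # prints 0.25
# ...and confident wrongness is punished hardest.
print(round(brier_score([0.1, 0.2, 0.9], [1, 1, 0]), 3))  # prints 0.753
```

This is the accountability Heffernan admires: months or years later, there is a number, and the forecaster cannot retreat into vagueness.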
All of which brings us to a topic where the clamour for predictions is like nothing we’ve seen before: Covid-19. Heffernan’s book was written before the first glimmers of the epidemic, but nonetheless she comes out looking wise and, appropriately enough, makes some eerily good predictions. At one point she interviews the chief executive of the Coalition for Epidemic Preparedness Innovation, which funds research into new vaccines. “[We] feel”, he told Heffernan, well before the end of 2019, “like the world has put us on notice that we have to deal with beta corona viruses… because they have pandemic potential.” Brrr.
Heffernan was also spot-on to praise the superforecasters, at least one of whom, Tom Liptay, has been able to out-predict a panel of 30 disease experts (that is, he has achieved better Brier scores) on the Covid-19 case and death numbers that are rolling in as the coronavirus sweeps through the US. Think about that for a second: if the US government had relied on this one superforecaster, instead of experts from Harvard and Johns Hopkins, it would have had an overall better idea of how the virus would spread. It’s hard to say what Liptay’s secret is, but his long experience of calibrating predictions and having them harshly tested against reality — in the way Heffernan recommends — can’t have hurt.
It could be argued that “prediction addiction” during the pandemic has cost us many thousands of lives. What else can we call the UK Government’s stated belief that there was a precise best time for the implementation of specific lockdown policies — a belief that caused it to delay the full lockdown to avoid public “fatigue,” thus allowing the coronavirus to run rampant for at least nine days — but an over-certain prediction?
The false certainty about our ability to predict something as complex as human behaviour — at least in a world that doesn’t contain science-fiction devices like the Devs machine — certainly now looks tragic. And writing as a psychology researcher, I’d say it’s even more drastically misconceived than that. It’s simply impossible to read the research that’s published in behavioural-science journals — generally small-scale, unrepresentative, and uncertain — and think that we’re in a place to say ‘science tells us that right now is the moment to advise against public gatherings, and in one week’s time, it’ll be right to do the same for workplaces…’ and so on.
Aggressive action appears to have been required, but the UK Government’s belief that it could accurately predict society’s response to an unknown and terrifying virus seems, at least in part, to have held them back from it. If Heffernan’s overall message of “embrace uncertainty” seems at all trite, one only needs to look at our current predicament to see how badly it was needed.