
Our dangerous addiction to prediction

When it comes to forecasting, what we need more than anything is humility

Will our future be dystopian? Credit: DANIEL LEAL-OLIVAS/AFP via Getty Images



May 27, 2020

In Alex Garland’s recent sci-fi TV series Devs, Silicon Valley engineers have built a quantum computer that they think proves determinism. It allows them to know the position of all the particles in the universe at any given point, and from there, project backwards and forwards in time, seeing into the past and making pinpoint-accurate forecasts about the future.

Garland’s protagonist, Lily Chan, isn’t impressed. “They’re having a tech nerd’s wettest dream,” she says at one point. “The one that reduces everything to nothing — nothing but code”. To them, “everything is unpackable and packable; reverse-engineerable; predictable”.

It would be a spoiler to tell you how it all ends up, but Chan is hardly alone in criticising the sometimes-Messianic pronouncements of tech gurus. Indeed, her lines might as well have been written by the entrepreneur and business writer Margaret Heffernan, whose book Uncharted provides a robust critique of what she calls our “addiction to prediction”.

Our fervent desire to know and chart the future — and our exaggerated view of our ability to do so — forces us, she says, into a straitjacket whenever some authoritative-sounding source makes a prediction: the future’s laid out, we know what’ll happen — it’s been forecast. Only by kicking this habit, she argues, “do we stop being spectators and become creative participants in our own future”.

That’s something of a lofty goal, but as we’ll see, the consequences of misunderstanding predictions can be far more immediate. In pandemics, it can end up killing thousands of people.

Heffernan does get to pandemic disease in the latter part of her book, but before that, she provides some cautionary tales that are useful to readers way beyond her targeted “business book” audience. Take, for instance, the 2013 prediction by researchers at the Oxford Martin School that “by 2035, 35% of jobs will have been taken by machines”. As Heffernan notes, this was an impossibly specific quantity: exactly this number of years in the future, exactly this percentage of jobs will be done by robots. When you think about it, such specificity is absurd, but it didn’t half grab the media’s attention, playing on people’s quite reasonable fears about the coming age of automation. The resulting media discussion, Heffernan says, “projected inevitability onto what was no more than a hypothesis”.

There are subtler manifestations of the prediction addiction. In science, for example, researchers — and I include myself in this — often deploy the word “predict” in a way that doesn’t comport with its everyday usage. Variable X predicts variable Y, they say, even though both were measured at exactly the same time. What they mean is that, if you didn’t know anything about Y, you would have some information about it if you knew X. But this “prediction” can be very weak: usually just “a bit better than chance” rather than “with a strong degree of accuracy”. By the time this translates to the public, often via hyped press releases, it’s frequently been imbued with a great deal more certainty than is warranted by the data.
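To put rough numbers on how weak that kind of "prediction" can be, consider a correlation of r = 0.2 between X and Y (a figure chosen purely for illustration, though it is in the range of many published psychology effects). A quick simulation shows that knowing whether X is above average lets you guess whether Y is above average only slightly better than a coin flip:

```python
import random

random.seed(0)
r = 0.2          # illustrative correlation, typical of published psychology effects
n = 100_000

# Generate pairs (x, y) whose true correlation is r.
pairs = []
for _ in range(n):
    x = random.gauss(0, 1)
    y = r * x + (1 - r ** 2) ** 0.5 * random.gauss(0, 1)
    pairs.append((x, y))

# "X predicts Y": guess that y is above average whenever x is.
hits = sum((x > 0) == (y > 0) for x, y in pairs)
print(f"accuracy: {hits / n:.1%}")  # a bit better than the 50% of pure chance
```

Squaring the correlation tells the same story: r = 0.2 means X accounts for just 4% of the variance in Y, yet a press release can truthfully describe X as a "significant predictor".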

We can see why this is, of course: science should be about predicting the world, the better to help us change or improve it. But the sheer prevalence of the p-word, often used in weaselly ways to boost the perceived importance of one’s research findings, is evidence that Heffernan is on the right track. The incentives push scientists towards making pronouncements about predictions, even if that’s not what any normal person would call their results.

As well as sins of specificity, Heffernan also critiques an opposite tendency: towards over-generalising. Mere labels are surprisingly powerful: the so-called jingle fallacy is where we assume that two things are similar just because they have the same name (as opposed to the jangle fallacy, where two similar things are assumed to be different because they have different names). Heffernan argues that the British military committed this fallacy by over-generalising lessons they’d learned during the insurgency in Northern Ireland in their predictions about the course of the very different insurgency in Iraq.

But Heffernan overeggs things by stating baldly at one point that “the future is unknowable”. For sure, we’re rather far off the Devs quantum-computer level, but some degree of prediction is still possible — not to mention highly desirable (think about the prediction of devastating health complaints like heart attacks, for example, or the prediction of oil prices as the world economy fluctuates).

Heffernan does seem to agree with this, because she gives advice on how to improve predictions without falling prey to the sort of faux-specific pseudoscience or misleading generalisations we’ve just encountered. Her formula is essentially this: use humility. After all, it’s the over-certainty in our predictive abilities that’s the real problem she’s addressing. She specifically praises the approach taken by the psychologist Philip Tetlock and his Good Judgment Project, where “superforecasters” make predictions in terms of probabilities, which encourages them to consider uncertainty and, more importantly, allows them to be held to account: their accuracy is judged with a so-called Brier score months or years after they make their forecasts.
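For readers curious about the mechanics: a Brier score is simply the mean squared difference between the probabilities a forecaster assigned and what actually happened (coded 1 if the event occurred, 0 if it didn’t). A minimal sketch:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities (0 to 1)
    and outcomes (1 if the event happened, 0 if it didn't).
    Lower is better: 0 is perfect, and 0.25 is what you earn
    by hedging at fifty-fifty on everything."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who says 50% about everything scores 0.25:
print(brier_score([0.5, 0.5, 0.5], [1, 0, 1]))  # 0.25
# A confident, well-calibrated forecaster scores much lower:
print(round(brier_score([0.9, 0.1, 0.8], [1, 0, 1]), 3))  # 0.02
```

The scoring rule is what does the disciplining: vague, unaccountable pronouncements earn no credit, while over-confident wrong calls are punished quadratically.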

All of which brings us to a topic where the clamour for predictions is like nothing we’ve seen before: Covid-19. Heffernan’s book was written before the first glimmers of the epidemic, but nonetheless she comes out looking wise and, appropriately enough, makes some eerily good predictions. At one point she interviews the chief executive of the Coalition for Epidemic Preparedness Innovation, which funds research into new vaccines. “[We] feel”, he told Heffernan, well before the end of 2019, “like the world has put us on notice that we have to deal with beta corona viruses… because they have pandemic potential.” Brrr.

Heffernan was also spot-on to praise the superforecasters, at least one of whom, Tom Liptay, has been able to out-predict a panel of 30 disease experts (that is, he has achieved better Brier scores) on the Covid-19 case and death numbers that are rolling in as the coronavirus sweeps through the US. Think about that for a second: if the US government had relied on this one superforecaster, instead of experts from Harvard and Johns Hopkins, they’d have had an overall better idea of how the virus would spread. It’s hard to say what the secret is, but his long experience of calibrating predictions and having them harshly tested against reality — in the way Heffernan recommends — can’t have hurt Liptay’s skills.

It could be argued that “prediction addiction” during the pandemic has cost us many thousands of lives. What else can we call the UK Government’s stated belief that there was a precise best time for the implementation of specific lockdown policies — a belief that caused it to delay the full lockdown to avoid public “fatigue”, thus allowing the coronavirus to run rampant for at least nine days — but an over-certain prediction?

The false certainty about our ability to predict something as complex as human behaviour — at least in a world that doesn’t contain science-fiction devices like the Devs machine — certainly now looks tragic. And writing as a psychology researcher, it seems even more drastically misconceived. It’s simply impossible to read the research that’s published in behavioural-science journals — generally small-scale, unrepresentative, and uncertain — and think that we’re in a place to say ‘science tells us that right now is the moment to advise against public gatherings, and in one week’s time, it’ll be right to do the same for workplaces…’ and so on.

Aggressive action appears to have been required, but the UK Government’s belief that it could accurately predict society’s response to an unknown and terrifying virus seems, at least in part, to have held them back from it. If Heffernan’s overall message of “embrace uncertainty” seems at all trite, one only needs to look at our current predicament to see how badly it was needed.


Stuart Ritchie is a psychologist and a Lecturer in the Social, Genetic and Developmental Psychiatry Centre at King’s College London


7 Comments
Dave Coulson
3 years ago

An ok article, until you contradicted yourself, and started down the rather stupid path of “predicting” what would have happened (cost 1000’s of lives!!!) IF we had done things differently.

Exactly how did you reach that conclusion? Did you use a model? A guess? An inappropriate comparison to a different country? The phrasing is such that it sounds like a given, rather than the completely uninformed guess that it is.

You do not seem to get your own argument that predictions are difficult and uncertain and should only be used with care. Or did you just want to criticise government policy, and back-solve an article to put in front of the criticism so you look less biased?

Dougie Undersub
3 years ago

I’d certainly agree that we need to treat predictions with caution and recognise them for what they are. Economic forecasts are notoriously inaccurate and yet, whenever the OBR comes out with a new one, it gives rise to a noisy political debate about the alleged economic mismanagement by the Government in which the prediction is regarded as guaranteed to be 100% accurate. Madness.

naillik48
3 years ago

Predictive models are only as good as their assumptions. Unfortunately, it seems the Prime Minister and his close advisors were perhaps unaware of this when panicked by the highly dubious, non-peer-reviewed Imperial College model. Prof Ferguson is an attention seeker with a dismal record: his foot-and-mouth predictions caused a needless and hugely expensive catastrophe based on faulty assumptions. This behaviour has been rewarded with continuing research funding rather than being called to account, and so he has been allowed to continue indulging himself.
That weekend of political panic will have economic consequences that will reverberate for a generation.
And all from a dodgy prediction – but surely the Government process that allowed him to contribute, given his appalling record, is massively culpable.

JR Stoker
3 years ago
Reply to naillik48

True enough, vis-a-vis Professor Ferguson. But which experts should we use to select our experts? (We know the one not to use now, at least)

Perdu En France
3 years ago

The writer doesn’t seem to realise this, but he’s actually dealing with religion. Because what else is religion but an attempt to insert some certainty into an uncertain future? The priest says if you pray to his god, your prayers will be answered. Or, if you don’t abide by the rules of the religion, you will be damned and go to hell. People look for reassurance. It’s why they read horoscopes in papers. So rather than a god, they put their faith in the high priests of computer models. There’s little difference. Still mumbo-jumbo.

Amy Beange
3 years ago

It isn’t the same at all. Computer model predictions are statements about the future based on how things went in the past which is difficult given the variables involved. Religious predictions about the future are based on the knowledge of a being who is in control of the future who has deigned to reveal his plans to his creatures. If one has good reasons for positing the existence of such a being and the veracity of his revelation then trusting him when he says such and such will happen is perfectly reasonable.

johntshea2
3 years ago

An interesting article and another book for my towering To-Be-Read pile! I’ve long suspected predictions tell us more about the state of mind and general disposition of the predictor than anything else. For example, when economists predict whether and how fast an economy may recover after the Coronavirus they are really telling us whether they are optimists or pessimists.