Could you predict Brexit? It's in everyone's interests that our forecasts are more accurate

Credit: Jack Taylor / Getty


November 21, 2018   6 mins

At the moment, Britain is teetering on the edge of an enormous change, and we all want to know how it’s going to pan out. Will we leave the EU? Will May’s deal get the votes in the Commons? Will we crash out without a deal?

Predictions are hard, especially about the future, as Niels Bohr may have said. But that doesn’t stop people making them. May’s deal will get through, say some. She doesn’t have the votes, say others. Of course, if we do crash out, then the crystal ball gets even murkier. We might have to stockpile medicines. Energy prices might skyrocket. We might run out of Mars Bars.

Niels Bohr was right. Predictions are hard, and people are spectacularly bad at making them. But there are some simple things you can do to get better at making them, and so mitigate the consequences of whatever the future holds. I attended a workshop last week, with a bunch of senior and important people from various major banks and intergovernmental agencies, to learn how to do it better.

It was run by an organisation called the Good Judgment Project. The GJP was set up by Philip Tetlock who, in 1984, had noticed something. The young psychologist had joined an expert committee set up by the National Academy of Sciences to help prevent nuclear war. The experts were divided. Conservatives wanted to maintain a tough line against the Soviet Union; liberals thought that line was strengthening Kremlin hardliners.

When, a few months later, Mikhail Gorbachev took office and started opening the USSR up, Tetlock was surprised to see that everyone thought this showed they’d been right all along. The conservatives thought the tough line had pushed the Kremlin into action; the liberals thought it would have happened anyway, and that the hard line had just slowed things down. People all thought they were right, whatever had actually happened.

So Tetlock set up an experiment to see how good experts were at predicting the future. First, he had to tie them to falsifiable predictions. Pundits had, and have, a tendency to give vague answers that don’t really pin them to anything: “Food shortages could be likely”, and so on. Tetlock gave them specific questions with clear dates: “Will the yen be higher than it is now against the deutschmark in one month’s time?” He then asked the pundits to give percentage values for how likely that was: there is a 75% chance that this will happen, a 60% chance, etc.

He had nearly 300 pundits make an average of 100 predictions each, and then saw how well they did over the coming months. If your 60% predictions came in 60% of the time, and your 35% predictions came in 35% of the time, you were “well calibrated”. You were also given points for being confident and right – 90% guesses got a higher score than 50% ones – but punished for being confident and wrong.
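The scoring Tetlock used was based on Brier scores – the mean squared difference between your stated probabilities and what actually happened. A minimal sketch, with invented forecasts, shows how confidence cuts both ways:

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.
    0.0 is a perfect score; always saying 50% earns exactly 0.25."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Each pair is (stated probability, what actually happened: 1 or 0).
confident_and_right = [(0.9, 1), (0.9, 1), (0.1, 0)]
confident_and_wrong = [(0.9, 0), (0.9, 0), (0.1, 1)]

print(round(brier_score(confident_and_right), 3))  # 0.01 -- confident and right pays off
print(round(brier_score(confident_and_wrong), 3))  # 0.81 -- confident and wrong is punished hard
```

Lower is better, and the penalty for being confidently wrong grows much faster than the reward for being confidently right – which is exactly the asymmetry described above.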

What he found was that pundits’ calibration, on average, was no better than random guessing – or, as Tetlock described it, “than a dart-throwing chimpanzee”. But some were significantly better. What predicted who was better was not whether they were conservative or liberal, or even, particularly, their expertise in the field, but how they approached the problem. People who assumed the world was simple and had simple solutions did badly; people who thought it was complex, who realised they could be wrong, and who learnt from mistakes did better. It also helped if you were good with numbers, and good at spotting patterns, as in IQ tests.

Tetlock would later set up the GJP, which runs tests to find the best forecasters and enters them in forecasting competitions against teams from intelligence agencies and universities. Teams of these “superforecasters” outperformed the best of the rest by a huge margin, even CIA analysts and academic experts.

The workshop that I attended couldn’t make everyone superforecasters. There is some inherent talent involved in being the best. But one of the superforecasters who ran it pointed out that while not everyone can play at Carnegie Hall, almost everyone can learn to play the violin. Practising the right skills can make anyone better.

This is a useful thing to do. Being a better forecaster, if you’re a major financial institution, lets you make better bets with your money, and so turn it into more money; if you’re a national security agency or an intergovernmental organisation, it lets you direct resources towards things that are likely to happen, and keep the world safer.

Some of the things you could do were obvious: invoking the “wisdom of the crowd”, for instance. The “ask the audience” option on Who Wants to Be a Millionaire comfortably outperforms the Phone A Friend option; more generally, if you ask 1,000 people to guess something, any one of them will probably be wrong, but on average they’ll all be wrong in different directions, so the average guess is usually close. And if you’re in a big group, you should take the median of everyone’s estimates, let people revise their predictions in light of it, and then take the median again. Doing so gives you a real, measurable improvement in forecasting.
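The crowd effect is easy to simulate. In this sketch the task and numbers are entirely made up: the true answer is 100, each of 1,000 people guesses with a lot of individual noise, and the median guess lands far closer than a typical individual does.

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is reproducible
true_value = 100.0

# 1,000 noisy individual guesses: each person is off, but in a
# different direction, so the errors largely cancel out.
guesses = [true_value + random.gauss(0, 20) for _ in range(1000)]

crowd_estimate = statistics.median(guesses)
typical_individual_error = statistics.median(abs(g - true_value) for g in guesses)

print(round(crowd_estimate, 1))            # lands very close to 100
print(round(typical_individual_error, 1))  # a typical individual is far off
```

The same logic is why the second round of median-taking helps: it lets individuals correct themselves without letting any one loud voice replace the aggregate.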

Another is to use the sort of precise, numerical terms we talked about above. The superforecasters were highly critical of things like Sadiq Khan’s claim that no-deal Brexit is “more likely than ever” or Sir Vince Cable’s that the same was “highly improbable”. These don’t really tie you to anything: more likely than ever? How likely was it before? How likely is “highly improbable”? Is it a 5% chance? 10? Instead, write down a real percentage, and then revisit it afterwards, and see how it did. Otherwise you’ll find a way to convince yourself you were right all along, and you’ll never improve.

A third is to use the “outside view” as well as the “inside view”. For instance, say you’re at a wedding, and someone asks you how likely it is that the couple’s marriage will last until one of them dies. You look at them, they seem very happy, and you know them to be well-matched, so you say “90%”. That’s the “inside view”: looking at the specifics of the situation in front of you.

But taking the “outside view” involves looking at the whole class of similar events. So you might first think “About 40% of British marriages end in divorce,” and use the inside view to adjust your estimate from there, by taking into account other details that might change it from the base rate – social class, educational level, how long they stared into each other’s eyes when making their vows.
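As a worked sketch of that two-step reasoning – only the 40% base rate comes from the article; the size of the inside-view nudge is invented for illustration:

```python
# Outside view first: start from the base rate for the whole class
# of events (the article's rough figure for British marriages).
p_divorce_base_rate = 0.40
p_marriage_lasts = 1 - p_divorce_base_rate   # 0.60: the starting point

# Then nudge with the inside view. How big a nudge is a judgment
# call -- this 0.10 is made up, standing in for "they seem happy
# and well-matched".
inside_view_adjustment = 0.10
p_estimate = p_marriage_lasts + inside_view_adjustment

print(round(p_estimate, 2))  # anchored well below the gut-feel 90%
```

The point of the ordering is anchoring: starting from the base rate and adjusting tends to produce better-calibrated numbers than starting from the happy couple in front of you.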

Fourth, you might break down the problem into smaller parts. How likely is it that we’ll have a Labour-led government before June next year? Well, the Government feels like it’s teetering on the brink at the moment, so it seems pretty likely; you might say 60%.

But if you break it down, Parliament would have to vote for a new election, which would probably mean at least some Tories voting for an election they could well lose, and then it would mean Labour forming the largest party. If you think about how likely each of those things is individually – I’d say an election before June next year is at best 50% likely, and that Labour’s odds of winning aren’t much better than 50% either – and then multiply them out, you get 25%. That’s called a “Fermi estimate”, and again, using this has been shown to improve accuracy.
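The arithmetic of that Fermi estimate, using the article's own rough guesses (these are not real forecasts):

```python
# Gut feel said 60%. Break the question into steps and multiply instead.
p_early_election = 0.50    # Parliament votes for an election in time
p_labour_largest = 0.50    # Labour then emerges as the largest party

p_labour_government = p_early_election * p_labour_largest
print(p_labour_government)  # 0.25 -- the Fermi estimate
```

The decomposition only works if each step is genuinely required and the estimates are roughly independent – but even a crude breakdown like this reliably deflates an over-confident gut number.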

There are a few other things that help – I won’t list them all, but one is, if you have a group, to try to make sure that it’s got diverse viewpoints, so it’s not just teachers or financial analysts (or just liberals or just conservatives), but a mix of many types. You also ought to make sure that the loudest or highest-status person in the group doesn’t dominate, so that the “wisdom of the crowd” doesn’t get drowned out.

We weren’t a bunch of superforecasters. We were just ordinary people, albeit some of us (not me) from some quite important financial, academic and governmental organisations. But we had a stab at it, and put best-guess estimates on some things, and then shared them and readjusted them as we were supposed to. And, after doing all that, in answer to the question “Will a Brexit withdrawal agreement obtain the consent of the European Parliament by 1 April 2019?” – a specific and falsifiable version of the question “Will Britain agree a Brexit deal?” – we figured that there was a 60% chance.

That seems scary enough to me – that there’s a 40% chance of either a no-deal collapse or another referendum. And when we did it, our guess roughly matched that of the superforecasters. They wouldn’t tell us exactly what their numbers were, because it’s all privileged information, but I’m told we weren’t far off their guess.

That was over a week ago, though. Since then, the deal has been unveiled, Cabinet ministers have resigned, the ERG has unsuccessfully launched a leadership challenge, and the Democratic Unionist Party have all but ended the deal with the Tories that gives May her majority. Again, I don’t know exactly what the superforecasters’ numbers are, but that 60%-ish figure that we came up with now looks deeply optimistic.

The three main options – leaving with a deal, leaving without a deal, and not leaving at all, by 1 April 2019 – are all, I understand, roughly equal in probability. The Good Judgment Project is, verifiably, the best forecasting outfit in the world, but even for them it is, pretty much, a complete toss-up.

So, yes. Predictions are hard, especially about the future. You can find ways of making it easier. But it looks like when it comes to Brexit, we really are all in the dark.

Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.

