The 2017 general election was a surprise – it was a surprise when it was called, and it was a surprise when the result came in.
After a similarly surprising result in 2015, many a commentator lamented that they had given too much attention to polls and not enough to people. As a pollster, that grated, since talking to people is the very definition of what polling is. For all the bells and whistles we dress it up in, at its heart an opinion poll is just asking a representative sample of 1,000 or so people how they will vote.
In an ideal world, then, an opinion poll is a guard against groupthink – a way of checking your assumptions against actual data. The problem was that after the polling failure of 2015, journalists and politicians viewed such data with scepticism (and given many of the polls turned out to be wrong, they were quite right to do so).
It’s a reasonable response – no one wants to make the same mistake twice – but it meant that in 2017, opinion polling couldn’t act as a corrective. Where once upon a time an unusual polling result might have made people second-guess their existing predictions, now it was easy to dismiss as the “polls being wrong again”.
This tendency was also visible in the Brexit referendum. You will often hear that the polls were wrong, and indeed many of them were, but not all of them. Crucially, the commentariat paid far more attention to those polls they thought were correct (showing big Remain leads) than to those they thought were wrong (showing a tight race). A form of groupthink blinded them to the fact that almost half of the polls during the campaign actually put Leave ahead.
In 2017, pollsters adopted various different methods to try to correct the errors of 2015. Inevitably, not all these worked, and the majority of polling companies ended up overstating the Conservative lead, in some cases by a long way.
However, some polls got it right – most notably the YouGov MRP model and Survation’s poll – but since they were seen as outliers, they were not believed. When YouGov first published the MRP model it was actively mocked: the Conservative party’s strategist Jim Messina tweeted that he had “Spent the day laughing at yet another stupid poll from YouGov”. And people scoffed at the absurdity of Labour winning Kensington or Canterbury. In the event, of course, these turned out to be correct.
It is easy to be wise with the benefit of hindsight. But back in the spring of 2017 it was easy to see why commentators doubted those polls that showed a tight race. After all, whenever polls had got it wrong in the past they had overstated Labour support, not understated it. Even we pollsters were full of doubt. While we do everything we can to check methodological changes, there is no way of running a “test election”. The only way pollsters can really tell if a methodological innovation works is to try it – live.
Saying the 2017 election was a shock result, however, isn’t simply commentators making excuses. It was a genuinely unusual election that overturned much of our past understanding of how campaigns work. Normally there is relatively little change in support during a campaign; yet in 2017, Labour turned their position around and significantly increased their support.
Normally the specific policies announced in general election campaigns make little difference – they have been largely pre-announced and carefully prepared, and are either already priced into public support, or cancel each other out. The manifestos in 2017 contained policies that people hadn’t yet seen and that, in the case of the Conservative manifesto, were unpopular enough to have a genuine impact.
The 2017 election was another reminder that polls can sometimes be wrong – but also that we should be careful not to discard them entirely. Even those polls that got the scale of the Tory lead wrong were providing useful information: Labour increasing their support, Theresa May’s popularity falling, perceptions of Jeremy Corbyn improving. The levels may have been off, but the direction of travel was unquestionably correct.
Take away polling and you’re not left with other, superior methods of measuring public opinion; you are left with guesswork, instinct and your own biases. So yes, be sceptical of polls, but don’t fall into the trap of only being sceptical of the polls that challenge your existing beliefs, while assuming those that agree with your preconceptions must be correct.