Unlike vague statements made by commentators, we made testable predictions
It’s starting to feel like the end of the beginning of the pandemic, with news that a vaccine for Covid-19 has been approved in the UK and that two are set to be authorised imminently in the US. And yet, until very recently, many commentators, politicians and experts were incredulous that such a turn of events would be possible so soon — in their minds, the idea that a vaccine would be approved in less than 18 months — let alone in under a year since its design — was completely fanciful.
At the same time, I had been telling friends in May that vaccines would most likely be authorised for emergency use by the end of the year, and in July, a McKinsey report came to the same conclusion. In August, I explained in detail why I believed a rapid timeline was very likely, with the most likely outcome being that a vaccine would be approved and distributed in enough doses to vaccinate 25 million people in the US around February 2021. My friend and fellow forecaster Jonathon Kitson predicted in October that a vaccine would be approved in the UK in December and rolled out in January.
These weren’t random guesses. There were very good reasons to be optimistic even back in the spring. There was the observation that most patients were able to clear the virus from their body, suggesting that this immune response could be primed using a vaccine. There was the slow mutation rate of the virus, which made it more likely that a working vaccine would protect us against multiple strains. There was the fact that funding was at an unprecedented level, that the virus’s genome had been sequenced and shared publicly incredibly swiftly, and that trials were running multiple phases in parallel. There were already vaccines for other coronaviruses developed for use in animals, and there were more vaccines in development for this single disease than ever before.
But there were also uncertainties: would the disease remain pervasive, allowing scientists to test whether a vaccine would protect the participants in clinical trials from the disease? And how safe and effective would the frontrunner vaccines turn out to be, given some of them used new technology?
As the summer went on, these uncertainties narrowed down: the disease did remain pervasive in countries where trials were ongoing, and the results from phase one and two trials were fairly promising. So it’s not surprising that many forecasters updated their forecasts according to new developments.
You might ask, however, why those predictions changed quite so much. In May this year, superforecasters gave a vaccine being approved and distributed at large scale by April 2021 only a 5% chance, on average. By September, that forecast had risen dramatically to 70%, and by December it had leapt to 98%.
Shouldn’t superforecasters have predicted that those crucial but uncertain developments — the prevalence of the disease and the results from trials — would probably occur? And if they didn’t, does that make their forecasts useless?
Research indeed suggests that forecasters who make detailed initial forecasts and then update incrementally in response to new information are more reliable than those who make large swings in their predictions. So we shouldn’t simply take the average of the predictions that superforecasters gave; we should scrutinise how each forecast was made: how did they compile the evidence and justify their reasoning? How did they quantify their uncertainty?
I have my own hunches about why most forecasts on vaccine timelines were so pessimistic. It seems likely to me that they were swayed by statements from experts, who were trying to manage expectations, and that they were anchored to the average timelines of vaccine development in “peacetime”, which were inappropriate during a pandemic. But in my view, these biases and challenges are all the more reason to follow the principles of superforecasting.
Unlike the vague statements made by political commentators, predictions made by superforecasters are testable because they are formalised. They are thorough, quantitative predictions about whether events will occur. They are forecasts that can be scored on accuracy, enabling an outsider to identify which specific forecasters made reliable predictions early on. In sum, they help us separate the signal from the noise.
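To make "scored on accuracy" concrete, here is a minimal sketch of the Brier score, a standard way of grading probabilistic forecasts (the article does not name a specific scoring rule, so this is an illustration rather than the method any particular forecasting platform uses; the example numbers below are hypothetical, loosely echoing the 5%–70%–98% trajectory mentioned above):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilistic forecasts (0..1)
    and binary outcomes (1 = event happened, 0 = it did not).
    Lower is better; always saying 50% scores 0.25."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster who gave 5%, then 70%, then 98% to an
# event that did in fact occur (outcome = 1 at each scoring point):
score = brier_score([0.05, 0.70, 0.98], [1, 1, 1])
print(round(score, 3))
```

A vague commentator's "it might happen soon" cannot be scored this way at all, which is precisely the point: a formalised probability can be checked against what actually happened.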