A recent study has criticised the reliability of Covid-19 projections made by SPI-M, a subgroup of the UK’s Scientific Advisory Group for Emergencies (SAGE), suggesting that some forecasts were so inaccurate as to be ineffective for planning and decision-making purposes.
The paper, published in the journal Global Epidemiology, examines two key failures in SAGE’s predictive modelling: one in July 2021 during the Delta wave, and another in December of that year with the emergence of the Omicron variant. In both cases the forecasts, widely relied upon by policymakers, were either too vague or significantly off target.
Ahead of the so-called “Freedom Day” in July 2021, SAGE forecast that daily hospitalisations could range from 100 to 10,000, and warned cases would “almost certainly remain extremely high for the rest of the summer”. Instead, hospitalisations peaked at about 1,000 per day, roughly ten times below the upper bound, while cases began to decline shortly after restrictions were lifted, diverging sharply from the predictions, the study found.
Meanwhile, in December 2021, SAGE warned that under “Plan B” restrictions — which the Government ultimately maintained — daily deaths could peak between 600 and 6,000. The eventual peak was just 202, falling far below the lower bound of the prediction.
The study’s authors attribute these failures to SAGE’s over-reliance on mechanistic modelling, which simulates disease dynamics based on theoretical assumptions. While mechanistic models are useful in assessing intervention impacts, they depend heavily on high-quality data, which was often unavailable or inconsistent during the crisis.
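By way of illustration, a mechanistic model of the kind described is typically a compartmental simulation such as SIR (susceptible-infected-recovered). The sketch below shows the general idea only; it is not SAGE's model, and the parameter values are arbitrary assumptions.

```python
# Minimal SIR (Susceptible-Infected-Recovered) sketch -- an illustration of
# the kind of mechanistic model discussed above, NOT SAGE's actual model.
# All parameter values here are arbitrary assumptions for demonstration.

def sir_step(s, i, r, beta, gamma, dt=1.0):
    """Advance the SIR compartments by one time step (simple Euler update)."""
    new_infections = beta * s * i
    new_recoveries = gamma * i
    s -= new_infections * dt
    i += (new_infections - new_recoveries) * dt
    r += new_recoveries * dt
    return s, i, r

# Fractions of the population in each compartment.
s, i, r = 0.99, 0.01, 0.0
beta, gamma = 0.3, 0.1   # transmission and recovery rates (assumed values)

for day in range(120):
    s, i, r = sir_step(s, i, r, beta, gamma)

print(f"After 120 days: S={s:.3f}, I={i:.3f}, R={r:.3f}")
```

The data-quality point follows directly: the projections hinge entirely on the transmission and recovery rates, which must be estimated from surveillance data. When those estimates are poor, the plausible parameter range, and hence the projected range, balloons.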
In contrast, the authors cite the South African Covid-19 Modelling Consortium (SACMC), which adopted a more flexible and diverse approach, using simpler descriptive models alongside mechanistic ones. Despite SACMC being far less resourced than SAGE, the authors claim it delivered significantly more accurate projections, particularly during the Omicron wave.
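A descriptive model, by contrast, fits the observed trend directly rather than simulating transmission. The hypothetical sketch below fits a log-linear trend to a week of admissions data and extrapolates; the figures are invented, and this is not SACMC's actual method.

```python
import math

# Hypothetical descriptive model: fit a log-linear trend to recent daily
# admissions and extrapolate a week ahead. The figures are invented for
# illustration; this is not SACMC's actual method.
admissions = [310, 340, 355, 390, 420, 445, 480]  # last 7 days (made up)

logs = [math.log(a) for a in admissions]
xs = range(len(logs))
x_mean = sum(xs) / len(logs)
y_mean = sum(logs) / len(logs)
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, logs)) / \
        sum((x - x_mean) ** 2 for x in xs)  # least-squares growth rate (log scale)

projection = admissions[-1] * math.exp(slope * 7)  # extrapolate 7 days ahead
print(f"Fitted daily growth rate: {slope:.3f} (log scale)")
print(f"Projected admissions in 7 days: {projection:.0f}")
```

The trade-off is that such a model says nothing about the effect of interventions, but it has very few parameters to get wrong.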
Over the course of the pandemic, the UK experienced fluctuating infection rates, hospitalisations, and deaths, with notable peaks during both the first and second (Alpha) waves. At the height of the third (Delta) wave in mid-2021, the country saw daily case numbers exceeding 40,000, with hospital admissions rising in some areas. By the time of the Omicron variant’s surge in late 2021, infections soared once again, yet hospital admissions and death rates remained much lower than in the first two waves.
The findings highlight shortcomings that impacted not only UK policymaking but also global public health strategies, given Imperial College London’s influence as the World Health Organization’s sole collaborating centre for infectious disease modelling. “Had SAGE adopted a methodologically pluralistic approach, many of these errors could have been avoided,” the study argues.
In other surprises… tomorrow is Christmas…
And water is wet.
Sun expected to rise in the East.
Are you sure?
Do you have any data from ONS and Imperial College modelling to support that?
Models. Garbage in. Garbage out
“All models are wrong, some are useful.” But the trick is knowing which ones.
The problem with epidemiology* is that it includes a lot of physicists who weren’t good enough to pursue a career in research, but whose ability to solve partial differential equations looks like magic to some of their colleagues in their adopted subject. Prof Ferguson (than whom I have greater medical qualifications, namely GCSE Biology) being a particularly egregious example.
* see also: Climate Science
The “climate change models” article is ready to go for printing in 2030.
It was worse than that. Even the Imperial College coding of their model produced unintended random results (no doubt due to some dreaded undefined variables). So it was more like “anything in, garbage out”.
The more complex a model is, the more ways it can go wrong. A missing variable here, a slightly too low estimate there, an assumption that doesn’t quite hold in all circumstances. Such small errors add up in a complex system.

A computer model is a lot like an amplifier. An amplifier takes a small signal at a certain frequency and multiplies it to make it louder, but it’s just a black box. It amplifies whatever is there, both the signal and the noise. This is why if you put a microphone next to its own output speaker, you get that horrible screeching sound. That sound is the sum total of tiny background noises in the environment and electrical interference within the devices, amplified in the output, fed back into the input, and amplified again in an endless cycle limited only by the overall power of the system. You eventually just get to the maximum volume the system can handle, and it stays at that maximum until the device fails, which may be immediately or take a very long time, depending on how robust the design is.

A computer model does more complex calculations than simple multiplication, but it is subject to the same rule: whatever is put in, be it good data or nonsense, gets output at the other end. Further, the complex calculations aren’t as intuitive as a simple amplifier, so instead of screeching you get something that could just be wrong in any number of ways. Keep putting the output from a computer model back in as the new input, and the errors and noise will likewise multiply. Many computer models do hundreds or thousands of these iterations to produce their results. It can be hard to know how much of what you’re observing in the output is useful data and how much is just errors and extraneous information that has been run through a series of calculations and spat back out in some other form.

Even when a model is verified by empirical data, the possibility exists that this is coincidental rather than an indication of an accurate model, and even if the model is accurate for a present set of conditions, it might not be under a different set. On the whole, maybe these models are helping us understand our world and better predict future possibilities, or maybe we’re just chasing programming phantoms and boxing our own shadows. I personally question basically everything we’ve ‘learned’ from computer modeling since the technique was invented.
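To make the compounding point concrete, here is a toy calculation (all figures are arbitrary): a daily growth rate mis-estimated by about one per cent, iterated over three months, more than doubles the projection.

```python
# Toy illustration of error compounding in an iterative model.
# All figures are arbitrary; this is not any real epidemic model.

true_rate = 1.05    # true daily growth factor
model_rate = 1.06   # model's estimate, off by roughly 1%

cases_true = cases_model = 1000.0
for day in range(90):           # iterate day by day, as a model would
    cases_true *= true_rate
    cases_model *= model_rate

print(f"Actual after 90 days:    {cases_true:,.0f}")
print(f"Projected after 90 days: {cases_model:,.0f}")
print(f"Overestimate factor:     {cases_model / cases_true:.1f}x")
```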
The data going into the Imperial College model was shaky, but not totally garbage. During a public health emergency you have to work with whatever data you can lay your hands on. The results of the model, though, were definitely garbage. I was a sub-contractor at WHO during the pandemic, working on a related project. I needed to understand the Imperial College model and I was appalled by what I saw. The coding was so amateurish. And the Imperial College team acted with the sort of air of entitlement that would have got them booted off the project, had it been run on commercial lines.
Bias in bias out for the most part.
What about the cost to taxpayers and society of this garbage?
Many trillions.
Not just the Covid farce but also the climate emergency and net zero.
Data in, corrupt algorithm, garbage out.
Is anyone shocked to hear that Covid, while a real and serious disease in some demographics, was wildly overestimated as a threat? It was understandable at first, when it was largely an unknown quantity, but as the truth became obvious, drunk-with-power authorities refused to back away from their enjoyable control of everything.
“The authorities” in this case largely being an alliance of journalists (BBC) and activists (e.g. Independent SAGE). They swept all before them in a climate of mass hysteria.
In other developments, the sun rose in the East this morning. This is not to belittle this story, which is necessary and should be shoved down the throats of the millions of Covidians who terrorized everyone else.
Safetyism and fear porn trumped common sense and logic, not to mention the demonisation of proven remedies. And. Nothing. Else. Will. Happen.
Ain’t that the sad truth….
I am not sure it was just that.
It was done on purpose to brainwash people into obeying authority unconditionally.
Plus it made many already-rich, well-connected people even richer.
“Imperial College London’s influence as the World Health Organization’s sole collaborating centre for infectious disease modelling”.
Staggering that the WHO relied entirely on models produced by the man with the worst track record in the business of pandemic forecasting (foot and mouth, swine flu, bird flu, SARS, MERS), Prof. Neil Ferguson. Not just wrong, but wrong by a country mile.
The man has a great deal to answer for, and not just breaking lockdown rules he was instrumental in imposing for a bit of rumpy-pumpy. The useless UK Covid inquiry will hold no-one to account, but ‘lessons will be learned’ for future pandemics, for which management will be handed over to, God help us… the WHO.
Credit where it’s due: Bojo refused to be bullied into increasing restrictions by Vallance and Whitty on both occasions.
Yes, people forget about the good side of Boris, and that Labour was pushing for stricter and longer restrictions so public sector parasites could enjoy their extended vacations.
When travelling to a HiFi show in Munich in May 2022, we had to wear FFP2 masks as a minimum, and my mate was nearly fined 80 euros for having just a paper one.
I believe the modellers were physicists?
Physicists cannot do statistics since they are used to having fantastically accurate mechanistic equations already in place.
With soft data only the simplest models with the fewest parameters are supportable.
You’re thinking of the Imperial College modellers. That team was headed up by Neil Ferguson, who is a physicist and certainly not an epidemiologist. He’s also an inept computer programmer (again, no relevant qualifications) with a history of pandemic projections that could best be described as “not even wrong.”
The source code for the IC model is available on GitHub for anyone who wants a laugh (or a cry) at just how utterly useless the “experts” at the centre of these decisions really are.
Remember, SARS-CoV-2 is a seasonal respiratory virus, but IC decided not to model seasonality and thus projected huge numbers of infections in July. This was dutifully heeded by the government and used to continue most lockdown policies over the summer.
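For what it's worth, seasonality can in principle be represented by letting the transmission rate vary over the year, as in this generic sketch; the functional form and values here are assumptions for illustration, not taken from the IC code.

```python
import math

# One common way to represent seasonality in a transmission model: let the
# transmission rate beta vary sinusoidally over the year. The baseline and
# amplitude below are assumed for illustration, not taken from the IC model.

def seasonal_beta(day_of_year, beta0=0.3, amplitude=0.2, peak_day=15):
    """Transmission rate with a winter peak (day 15 ~= mid-January)."""
    phase = 2 * math.pi * (day_of_year - peak_day) / 365
    return beta0 * (1 + amplitude * math.cos(phase))

print(f"Mid-January beta: {seasonal_beta(15):.3f}")   # near the peak
print(f"Mid-July beta:    {seasonal_beta(196):.3f}")  # near the trough
```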
You may be referring to the person who took the lead on much of the modelling. Interestingly, a search shows only one paper published in physics and 400+ papers since then in epidemiology.
Prof Ferguson admitted in an interview with the FT that he moved away from physics because he was never going to be seriously good at it.
So maybe it is a blessing in disguise that he was not modelling nuclear weapons or nuclear reactors?
“I believe the modellers were physicists?” It was even worse here in Scotland. Nicola Sturgeon took advice from an Edinburgh Uni professor, Devi Sridhar, who is an anthropologist (but ticked lots of woke boxes and said just what Sturgeon wanted to hear). So not surprising that outcomes were even worse in Scotland than in England.
Modelling is meant to be a starting point, not an end point used to make broad policy decisions. We are suffering from a lot of terrible decisions made on the basis of modelling.
“Had SAGE adopted a methodologically pluralistic approach, many of these errors could have been avoided,” the study argues.
“The failure to adapt and use diverse methods during a pandemic is not just a missed opportunity,” the researchers write. “It is a risk we can’t afford to repeat in future global health crises.”
Bit of ‘horse bolted and stable doors’ syndrome here.
I’m not sure about SAGE modelling, but here’s my diary entry for 24 January 2020, just a week before the first confirmed case in the UK, which seems to indicate that the ‘headless chickens’ methodology was closer to the UK’s approach to what was to become the Covid-19 pandemic:
“In a tweet on 24 January 2020 that will go down as one of the classic understatements of the current era, Richard Horton, Editor of the Lancet (106,200 followers), accused the media of “escalating anxiety by talking of a ‘killer virus’. In truth, from what we currently know, 2019-nCoV has moderate transmissibility and relatively low pathogenicity. There is no reason to foster panic with exaggerated language.”
On 10th January 2020 I emailed friends in China from my home in England:
‘Great news about my ticket for 27 January.
Not sure if you can get this link so here’s the start of an article about a new virus:
“That Mystery Disease Outbreak in China Could Be Caused by a Newly Discovered Virus
“China believes a mysterious pneumonia outbreak that struck 59 people is caused by a new strain of virus from the same family as SARS, which killed hundreds of people more than a decade ago.
Lead scientist Xu Jianguo told the official Xinhua news agency that experts had “preliminarily determined” a new type of coronavirus was behind the outbreak, first confirmed on December 31 in Wuhan, a central Chinese city with a population of more than 11 million.
It initially sparked fears of a resurgence of highly contagious Severe Acute Respiratory Syndrome (SARS), and prompted authorities in Hong Kong – badly hit by SARS in 2002-2003 – to take precautions, including stepping up the disinfection of trains and airplanes, and checks of passengers. (It coincided with the Spring Festival holiday, when it’s said one half of China travels to visit the other half.)
China has since ruled out a fresh outbreak of SARS, which killed 349 people in mainland China and another 299 in Hong Kong.
“A total of 15 positive results of the new type of coronavirus (the disease it causes was later named Covid-19) had been detected” in the lab, through tests on infected blood samples and throat swabs, Xu said.
The World Health Organization (WHO) confirmed the preliminary discovery of a new coronavirus in a statement.
“Further investigations are also required to determine the source, modes of transmission, extent of infection and countermeasures implemented,” said Gauden Galea, the WHO Representative to China”.
I concluded:
“There’s more of the article but this is enough to give you the information you need.
So unless it’s an absolute emergency keep away from hospitals and crowded places”.
Clearly very important lessons, particularly given what we know now about the many costs. That said, would one, as a policy decider, lean on the side of caution or take the chance the risk was overstated? And remember, this was without the benefit of hindsight. The quality of advice is obviously critical, but it did not and could not determine all policy decisions.
A couple of the authors’ comments on predictions don’t clarify whether the modellers had taken sufficient account of the impact of the vaccine. Certainly the December 2020 modelling had not, but it became fairly clear the vaccine was making quite a difference to the serious hospitalisation rate, if not the overall infection rate, as it was rolled out that winter. And it was the former that drove lockdowns.
The CDC reports that, as of December this year, only 20% of adults had been vaccinated against the 2024/25 variant.
I’m not a doctor, but surely such a low rate of vaccination is going to lead to an explosion in COVID numbers.
I didn’t realise the number of Covid-zombies was that high!
The irony of the mRNA jab is that not only is it highly dangerous, but those who take it are rendered more susceptible to Covid, not less.
No one is stopping you having vaccines.
What people like me object to is being forced to participate in untested and ineffective medical procedures.
You surely know by now that covid pseudo vaccines do not stop you getting covid and do not stop transmission.
They have terrible side effects for many people (about one in 800), especially the young, who are NOT susceptible to covid.
So in the absence of good data it was impossible to give precise answers. Duh! Meanwhile the author underlines how the eventual peak was ten times below the upper bound (shock! horror!) without noticing that it was also ten times above the lower bound. Were the South Africans closer to the mark because they had better methods, or because they made a guess, got lucky, and were cherry-picked by the article?
For anything like this, can we please get a link to the article, and at least some mention of the authors? In this field there are so many people with axes to grind that articles are worthless unless you can evaluate the biases of those who wrote them.
There is a link to the article in the second paragraph. Here it is: https://www.sciencedirect.com/science/article/pii/S2590113324000439
The truth, Rasmus, is that the authorities in the UK and US panicked because of the Imperial College Ferguson paper, which was total BS (as were all his previous predictions, which led to panicked reactions). Had the powers that be in the UK and US actually studied what was coming out of China and followed the Swedish lead of Anders Tegnell, we would all have been a lot better off. In the meantime, the behavioural pathologies generated by the hysterical responses persist. What is also perhaps surprising is that the more credentialed the individual, the more likely they were to simply accept what the authorities were saying rather than truly investigate for themselves and uncover the real facts. You are a typical example of just such a person.
Absolutely – they panicked, and way overreacted, as though the world had never seen a pandemic before. As an initial reaction it was perhaps understandable, but what was unconscionable was that they didn’t course correct after a few weeks, but instead got caught up in the momentum of their initial panic and blew away money hand over fist for the best part of two years.
And what is worse, in my view, is that none of the people in authority (e.g. Fauci, Collins, Whitty and Vallance) have the gumption to admit they were wrong. These were small men basking in the limelight and climbing up the greasy pole, as measured by knighthoods and lordships, election to the Royal Society (for what, God knows) in the case of Whitty and Vallance, and prizes, recognition and adulation from the elite class and fully captured learned societies such as the US National Academy of Sciences.
There was a definite element of self-fulfilling prophecy.
The real-life experiment of the Diamond Princess cruise ship demonstrated a worst-case scenario that was nothing like as bad as any of the modelling, long before the first lockdown. Why was real-world evidence ignored in favour of a garbage modeller (Ferguson) with a decades-long track record of garbage models leading to costly, panicked policy errors?
Did the report put probabilities against the projections, I wonder? And did they explain their statistical methodologies (classical, Bayesian, Gaussian, etc.) and the whole host of assumptions about the frameworks they operated under?
To my eyes, an estimate whose upper and lower bounds span an order of magnitude or more, with no probabilities attached, is meaningless, and would not give me any confidence in the quality of the estimate.
And this is the point: given that the bulk of the decision-makers in Cabinet wouldn’t know the meaning of a Bayesian study from BJ’s ample backside, the final interpretation would fall back on the bun fight in the background between numbers of techies, before senior figures like Whitty et al. decide between them what they are hearing and take their preferred interpretation to the political decision-makers, who would then have pretty much no choice but to accept what they are being told. My objection here is not to the multiple voices in the background at all, but to the fact that the emphasis turns on process, which can then push interpretation towards a consensus between colleagues who know they have to work together for a long time in a civil service capacity. My other objection is: exactly what have the political figures contributed to the final decisions, except to rubber-stamp them?
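To illustrate the earlier point about attaching probabilities: given an ensemble of model runs, a forecaster can report quantiles rather than bare bounds. A minimal sketch with synthetic numbers follows; none of this reflects SAGE's actual outputs.

```python
import math
import random
import statistics

# Sketch: report quantiles of an ensemble of model runs instead of bare
# upper/lower bounds. The "runs" are synthetic, for illustration only.
random.seed(0)
runs = [random.lognormvariate(math.log(1000), 0.8) for _ in range(10_000)]

q = statistics.quantiles(runs, n=20)  # cut points at 5% steps
print(f"Median projection: {statistics.median(runs):,.0f}")
print(f"90% interval:      {q[0]:,.0f} to {q[-1]:,.0f}")  # 5th to 95th percentile
print(f"Raw min/max:       {min(runs):,.0f} to {max(runs):,.0f}")
```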
Of course the decision makers have a choice…that’s their job, to decide if the experts are probably talking total bol**cks!
After the Bay of Pigs disaster, Kennedy reputedly said: “All my life I have known better than to trust the experts. Why the **** did I do it now?”
Thank god he didn’t trust them during the Cuban Missile Crisis.
It’s called leadership…and we don’t have anyone in charge with that quality.
It’s not a competition (S. Africa v. UK, like a test match). Each organization is tasked with doing the best job it can. Many of them fell short.
Also, medical staff never had a consistent definition of “death by Covid”, so all of the data is inherently suspect. Here in the US many people working in hospitals came right out and said that all deaths with Covid were listed as deaths from Covid.
And also, as you suggest, the actual issue was how these statistics were (mis)used to support the “punishment theory” of public health: painful interventions will end when people stop suffering. Some of the same experts have been itching for the next pandemic ever since. Their biases are plain to see.
They didn’t even manage a consistent definition of Covid death in the UK, never mind trying to compare it with other nations. I know at one point they slashed about 20,000 deaths off the total after deciding that a single positive Covid test at any point was probably not a very reliable indicator, instead limiting the definition to deaths within something like 4 weeks of a positive test.
I do remember the story in the US of an individual who died in a motorcycle accident being recorded as a Covid fatality due to having tested positive. When challenged, it was justified on the grounds that Covid may have impaired his ability to ride the bike.