A recent study has criticised the reliability of Covid-19 projections made by SPI-M, a subgroup of the UK’s Scientific Advisory Group for Emergencies (SAGE), suggesting that some forecasts were so inaccurate as to be ineffective for planning and decision-making purposes.
The paper, published in the journal Global Epidemiology, examines two key failures in SAGE’s predictive modelling: one in July 2021 during the Delta wave, and another in December of that year with the emergence of the Omicron variant. In both cases the forecasts, widely relied upon by policymakers, were either too vague or significantly off target.
Ahead of the so-called “Freedom Day” in July 2021, SAGE forecast that daily hospitalisations could range from 100 to 10,000, and warned cases would “almost certainly remain extremely high for the rest of the summer”. Instead, hospitalisations peaked at about 1,000 per day, a tenth of the forecast’s upper bound, while cases began to decline shortly after restrictions were lifted, diverging sharply from the predictions, the study found.
Meanwhile, in December 2021, SAGE warned that under “Plan B” restrictions — which the Government ultimately maintained — daily deaths could peak between 600 and 6,000. The eventual peak was just 202, falling far below the lower bound of the prediction.
The study’s authors attribute these failures to SAGE’s over-reliance on mechanistic modelling, which simulates disease dynamics based on theoretical assumptions. While mechanistic models are useful in assessing intervention impacts, they depend heavily on high-quality data, which was often unavailable or inconsistent during the crisis.
In contrast, the authors cite the South African Covid-19 Modelling Consortium (SACMC), which adopted a more flexible and diverse approach, using simpler descriptive models alongside mechanistic ones. The authors claim that, despite being far less resourced than SAGE, SACMC delivered significantly more accurate projections, particularly during the Omicron wave.
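To illustrate the distinction in broad terms only (the paper’s actual models are not reproduced here, and every number and name below is hypothetical), a “descriptive” projection can be as simple as extrapolating the recent growth rate of observed admissions, with no assumptions about transmission mechanisms:

```python
import numpy as np

def descriptive_projection(daily_admissions, horizon=14):
    """Illustrative 'descriptive' projection: fit a straight line to
    log(admissions) and extrapolate the recent exponential trend.
    No transmission mechanism is assumed, unlike a mechanistic model."""
    y = np.log(np.asarray(daily_admissions, dtype=float))
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)   # slope = daily log-growth rate
    future_x = np.arange(len(y), len(y) + horizon)
    return np.exp(intercept + slope * future_x)

# Hypothetical recent daily admissions, growing at roughly 4-5% a day.
recent = [520, 540, 575, 600, 635, 660, 700]
print(descriptive_projection(recent, horizon=7).round())
```

A mechanistic model would instead simulate susceptible, infected and recovered compartments explicitly, which is more informative about interventions but only as good as its assumed parameters.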
Over the course of the pandemic, the UK experienced fluctuating infection rates, hospitalisations, and deaths, with notable peaks during both the first and second (Alpha) waves. At the height of the third (Delta) wave in mid-2021, the country saw daily case numbers exceeding 40,000, with hospital admissions rising in some areas. By the time of the Omicron variant’s surge in late 2021, infections soared once again, yet hospital admissions and death rates remained much lower than in the first two waves.
The findings highlight shortcomings that impacted not only UK policymaking but also global public health strategies, given Imperial College London’s influence as the World Health Organization’s sole collaborating centre for infectious disease modelling. “Had SAGE adopted a methodologically pluralistic approach, many of these errors could have been avoided,” the study argues.
So in the absence of good data it was impossible to give precise answers. Duh! Meanwhile the author underlines how the eventual result was ten times fewer deaths than the upper bound (shock! horror!) without noticing that it was also ten times higher than the lower bound. Were the South Africans closer to the mark because they had better methods, or because they made a guess, got lucky, and were cherry-picked by the article?
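For what it’s worth, assuming the poster is referring to the hospitalisation forecast quoted in the article (100 to 10,000 admissions per day, with an eventual peak of roughly 1,000), the arithmetic is:

```latex
\frac{10{,}000}{1{,}000} = 10, \qquad \frac{1{,}000}{100} = 10, \qquad \sqrt{100 \times 10{,}000} = 1{,}000
```

That is, the observed peak sat at the geometric midpoint of the forecast band, so “ten times below the top” and “ten times above the bottom” are the same statement.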
For anything like this, can we please get a link to the article, and at least some mention of the authors? In this field there are so many people with axes to grind that articles are worthless unless you can evaluate the biases of those who wrote them.
There is a link to the article in the second paragraph. Here it is. https://www.sciencedirect.com/science/article/pii/S2590113324000439.
Sorry, missed that. Having skimmed it (no time for more) this is clearly a serious contribution to the scientific discussion and deserves attention.
The truth Rasmus is that the authorities in the UK and US panicked because of the Imperial College Ferguson paper which was total BS (as were all his previous predictions, which led to panicked reactions). Had the powers that be in the UK and US actually studied what was coming out of China and followed the Swedish lead under Anders Tegnell, we would all have been a lot better off. In the meantime the behavioral pathologies generated by the hysterical responses persist. What is also perhaps surprising is that the more credentialed the individual, the more likely they were to simply accept what the authorities were saying rather than truly investigating for themselves and uncovering the real facts. You are a typical example of just such a person.
Absolutely – they panicked, and way overreacted, as though the world had never seen a pandemic before. As an initial reaction it was perhaps understandable, but what was unconscionable was that they didn’t course correct after a few weeks, but instead got caught up in the momentum of their initial panic and blew away money hand over fist for the best part of two years.
And what is worse, in my view, is that none of the people in authority (e.g. Fauci, Collins, Whitty and Vallance) have the gumption to admit they were wrong; i.e. these were small men basking in the limelight and climbing up the greasy pole (as measured by knighthoods and lordships, as well as election to the Royal Society (for what, god knows) in the case of Whitty and Vallance, and prizes, recognition and adulation by the elite class and fully captured learned societies such as the US National Academy of Sciences).
There was a definite element of self-fulfilling prophecy.
The real-life experiment of the Diamond Princess cruise ship demonstrated a worst-case scenario that was nothing like as bad as any of the modelling, long before the first lockdown. Why was real-world evidence ignored in favour of a garbage modeller (Ferguson) with a decades-long track record of garbage models leading to costly panicked policy errors?
Well, the truth about Tegnell is that he just followed his preconceived plan without taking account of COVID-specific facts or additional risks. And the truth about Sweden is that (unlike, for instance, Denmark) the government completely abdicated responsibility and left their health bureaucracy in charge. If you prefer Tegnell’s canned response to the one that developed in the UK – or most of the rest of Europe – that is just fine. Just do not use Sweden as the poster boy for ‘investigating for yourself and uncovering the real facts’.
Rasmus, has anybody ever told you that a little knowledge is dangerous? First, there were no COVID-specific facts that were different from any other influenza-like illness. Second, and most importantly, if you look at excess deaths over the entire course of the pandemic (not the 1st year) you will find that Sweden did better than anybody else, including all its Scandinavian neighbors. It is excess deaths over the entire course of the pandemic that counts. Third, Tegnell’s response wasn’t canned but based on actual data rather than panic: namely the data from the Diamond Princess (a worst-case scenario) and from Wuhan. Tegnell immediately realized that there was a huge difference in risk, increasing exponentially with age above 70 and multiple co-morbidities. Once you knew that, Tegnell’s approach was the only one to follow. In the meantime everybody else, panicked by Ferguson et al at Imperial, decided to do things that were known to be completely useless, destroying children’s lives and the economy in the process. All I can say is that you need to learn to think laterally and see the entire picture, before parroting the status quo of the so-called experts who were anything but.
Did the report put probabilities against the periodicities, I wonder? And did they explain their statistical methodologies, like classical, or Bayesian or Gaussian etc., and the whole host of assumptions about the frameworks they operated under?
To my eyes, an estimate with an upper bound and a lower bound that straddles an order of magnitude without probabilities is meaningless, and would not give me any confidence in the quality of the estimate.
And this is the point: given that the bulk of the decision makers in Cabinet wouldn’t know, for example, the meaning of a Bayesian study from BJ’s ample backside, the final interpretation would then fall back on the bun fight in the background between numbers of techies, before the gang of senior figures like Whitty et al make a decision between them on what they are hearing and take their preferred interpretation to the political decision makers, who would then have pretty much no choice but to accept what they are being told. My objection here is not to the multiple voices in the background at all, but to the fact that the emphasis turns on process, which can then push interpretation towards a consensus between colleagues who know they have to work together for a long time in a civil service capacity. My other objection is: exactly what have the political figures contributed to the final decisions, except to rubber-stamp them?
Of course the decision makers have a choice…that’s their job, to decide if the experts are probably talking total bol**cks!
After the Bay of Pigs disaster, Kennedy reputedly said, “All my life I have known better than to trust the experts; why the **** did I do it now?”
Thank god he didn’t trust them during the Cuban Missile Crisis.
It’s called leadership…and we don’t have anyone in charge with that quality.
We may agree on more than you would think. It is absolutely the politicians who need to decide, after having demanded and got all the necessary estimates. Not because they understand better, but because they are (presumably) better trained in taking decisions under high uncertainty. The problem is absolutely not that the scientists push a consensus interpretation – having one opinionated group decide to override the rest (be it Ferguson or the Barringtoners) would be a disaster. Rather, scientists are trained to seek understanding, and to trust whatever understanding they have till something better comes along. Which is fine if you can wait ten years till it all shakes out, but when you need decisions right now with insufficient data it leads to overconfidence. Which was very much in evidence also from Tegnell and the Barringtoners.
Funnily enough it was Sweden and Denmark that convinced me that scientists should not have the final decision. In both countries the scientific consensus was that you should follow the influenza plan and there was no need to change anything. The Danish Tegnell pronounced at an early stage that there was no reason to worry: COVID would never spread to Europe, if it did there would not be many cases, and if there were, very few people would die. Just business as usual. The difference was that in Sweden the government abdicated responsibility wholesale and left it to Tegnell to follow his preconceived ideas. In Denmark the government stepped in and closed borders, locked things down and (regrettably) killed off all the mink and the Danish fur industry. Not all these decisions were right, but at least the Danes recognised that the downside risk was too high for a do-nothing policy. Which is what politicians are for.
Clearly Rasmus you have very little insight into scientific experts, the majority of whom are anything but. Perhaps that’s why 99% of published research by so-called scientists is nothing more than me-too research, turning the handle. The facts are very simple. Tegnell didn’t panic and he got it right, and that’s why Sweden had the lowest excess death rate of any 1st world country over the entire course of the pandemic.
Is it not wonderful how Tegnell and Johan Strauss manage to understand everything perfectly, so much superior to the 99% of the world’s scientists who are only producing rubbish.
Well that’s just perhaps because Tegnell got it right. Why don’t you at least admit, now that the pandemic has passed and the results are in, that Tegnell, as well as Bhattacharya, Gupta and Kulldorff of the Great Barrington Declaration, actually got it right? And while you set yourself up as a scientist, you clearly have little insight into the scientific process and what constitutes original versus me-too research. The truth is that major contributions are only made by a very, very small number of scientists. For example, if one wants to look at bibliometrics, just look at the percentage of scientists who have an h-index (the largest number h such that h of their papers are cited h or more times) in excess of 100, i.e. scientists with truly major impact. You will find that they constitute less than 0.1% of the total.
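As an aside for readers unfamiliar with the metric, the h-index mentioned here is easy to compute; this is a generic sketch, not tied to any particular bibliometric database:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have h or more citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```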
It’s not a competition; S. Africa v. UK, like a test match. Each organization is tasked with doing the best job they can. Many of them fell short.
Also, the medical staffs never had a consistent definition of “death by Covid” so all of the data is inherently suspect. Here in the US many people working in the hospitals came right out and said that all deaths with Covid were listed as deaths from Covid.
And also, as you suggest, the actual issue was how these statistics were (mis)used to support the “punishment theory” of public health: painful interventions will end when people stop suffering. Some of the same experts have been itching for the next pandemic ever since. Their biases are plain to see.
They didn’t even manage a consistent definition of Covid death in the UK, never mind trying to compare it with other nations. I know at one point they slashed about 20,000 deaths off after deciding that a single failed Covid test ever was probably not very reliable, instead limiting it to something like 4 weeks.
I do remember the story in the US of an individual who died in a motorcycle accident being recorded as a covid fatality due to having failed a test. When challenged it was justified due to that fact it may have impaired his ability to ride a bike.
And I remember the story of an individual having died in a traffic accident being counted as a fatality in an adverse vaccine events database – which recorded *all* adverse events post-vaccination. Both sides do it. When you do not have the time for careful analysis of each individual patient journal these approximations happen.
First, when did I say anything about vaccines? The only comment I’ve made here (and pretty much anywhere) is my disagreement with mandates. I think you’re straw-manning my position here.
Secondly, the point is actually fair; however, the people using a positive detection of SARS-CoV-2 as a cause of death are the authorities (scientific, medical and government institutions and bureaucracies). Those who use the same rationale for vaccine deaths are called conspiracy theorists and crackpots. They are both wrong, but one gets a pass and the other receives ridicule.
Finally, they are not approximations; they are assumptions.
There is a difference here. COVID is known to kill people in fairly large numbers – look at Diamond Princess or early Italy if you doubt it. That being so, a rough best estimate of how many died may be the best we can do, and is not going to falsify the situation completely. Vaccination is *not* known to kill people, except in very small numbers, if at all, so you need more reliable evidence to prove that it does. Here the error of lumping in unrelated deaths is much more serious – it moves you from ‘benign and much better than the disease’ to ‘terribly dangerous, to be avoided at all costs’. Which, of course, is what people like Strauss want us to think.
You are again talking total BS. No fatal traffic accidents were attributed to an adverse covid vaccine event, except perhaps in the British Sun. But there were many cases in the US of deaths from fatal traffic accidents being labeled as Covid deaths because the individual tested positive for Covid. This was because hospitals were financially incentivized to do so.
Not so. A ‘death by covid’ is a death caused, to a large extent at least, by COVID infection. Full clarity. The problem is how to estimate that number from available data, when you do not have the luxury of waiting a year for complete data and ten years for scientific post-analysis.
That is pure gaslighting. You don’t estimate that number, it comes from the cause of death shown on the death certificate. Covid deaths were both ill-defined and inconsistently applied (death by Covid, from Covid, with Covid?… within 28 days of a failed PCR ‘test’ which in itself was not designed to detect Covid?). Full clarity, my a**e!
But I’m sure you know this. Maybe you have some other agenda in consistently defending the indefensible?
The cause of death shown on the death certificate is itself an estimate. It takes at least some investigation to be sure exactly what happened. I do seem to remember that they counted the cases where COVID was registered as the main or an important contributory cause of death, but you need to at least test to know if the virus was present, and you need some thought to determine whether the patient would have survived in the absence of the virus or not. That is why the best estimate of the real number is not necessarily what appears on the certificates.
But then, if you claim that the COVID test was not designed to detect COVID, what are we discussing here? Do you think the entire virus was a made-up conspiracy?
Do you know anything about death certificates? Perhaps read Carl Heneghan of the Oxford Centre for Evidence-Based Medicine to learn something. Stop defending massive mistakes and errors of judgement. It does no good. The idea is to learn from one’s mistakes. The US and UK royally screwed up. This was obvious very early on.
Do you know something about death certificates? If you do, how about telling us, instead of just giving reading lists and insults? You know, participating in the debate? I might even learn something from you.
Given that you set yourself up as a scientific and medical expert but are anything but, there is no need to engage with you. You have been on the wrong side of the facts for the last 5 years. And that is a fact. If you want to know something about death certificates just go over and read Carl Heneghan and Tom Jefferson’s Substack. As leaders of the Oxford University unit of Evidence-Based Medicine, they do know about these things. Or you can just go to their website, which I’m sure you are capable of googling.
Unlike you, I do not set myself up as any kind of expert. I claim nothing more than having worked ‘in science’, much like a waiter works ‘in the restaurant business’. For the rest I stand and fall by my arguments. If they make sense, fine. If they do not, tell me why they are wrong. Or admit either that you cannot, or that you are too lazy to actually discuss anything.
You do know that you are talking total BS. Perhaps best to stop commenting about things you know nothing about. There is nothing more dangerous than a little knowledge masquerading as expertise.
Weasel words
Best we can do, I am afraid. What one would really want to know is whether the person would have died anyway if he had not caught COVID. But that is unknowable, even in principle.
Other surprises… tomorrow is Christmas…
And water is wet.
Sun expected to rise in the East.
Are you sure?
Do you have any data from ONS and Imperial College modelling to support that?
Models. Garbage in. Garbage out
“All models are wrong, some are useful.” But the trick is knowing which ones.
The problem with epidemiology* is that it includes a lot of physicists who weren’t good enough to pursue a career in research, but whose ability to solve partial differential equations looks like magic to some of their colleagues in their adopted subject. Prof Ferguson (than whom I have greater medical qualifications, namely GCSE Biology) being a particularly egregious example.
* see also: Climate Science
Climate change models article ready to go for 2030 printing
It was worse than that. Even the Imperial College coding of their model produced unintended random results (no doubt due to some dreaded undefined variables). So it was more like “anything in, garbage out”.
The more complex a model is, the more ways it can go wrong. A missing variable here, a slightly too low estimate there, an assumption that doesn’t quite hold in all circumstances. Such small errors add up in a complex system.

A computer model is a lot like an amplifier. An amplifier takes a small signal at a certain frequency and multiplies it to make it louder, but it’s just a black box. It amplifies whatever is there, both the signal and the noise. This is why if you put a microphone next to its own output speaker, you get that horrible screeching sound. That sound is the sum total of tiny background noises in the environment and electrical interference within the devices, amplified in the output, where it then goes into the input and gets amplified again, and the process repeats in an endless cycle limited only by the overall power of the system. You eventually just get to the maximum volume the system can handle and it stays at that maximum until the device fails, which may be immediately or take a very long time, depending on how robust the design is.

A computer model does more complex calculations than simple multiplication, but it is subject to the same rule: whatever is put in, be it good data or nonsense, gets output at the other end. Further, the complex calculations aren’t as intuitive as a simple amplifier, so instead of screeching you get something that could just be wrong in any number of ways. Keep putting the output from a computer model back in as the new input, and the errors and noise will likewise multiply. Many computer models do hundreds or thousands of these iterations to produce their results. It can be hard to know how much of what you’re observing in the output is useful data and how much is just errors and extraneous information that has been run through a series of calculations and spat back out in some other form.

Even when a model is verified by empirical data, the possibility exists that this is coincidental rather than an indication of an accurate model, and even if the model is accurate for a present set of conditions, it might not be under a different set. On the whole, maybe these models are helping us understand our world and better predict future possibilities, or maybe we’re just chasing programming phantoms and boxing our own shadows. I personally question basically everything we’ve ‘learned’ from computer modeling since the technique was invented.
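To make the compounding point concrete, here is a toy sketch (purely illustrative, not any real epidemic model): a projection that multiplies yesterday’s output by an estimated growth factor, so a small error in that one parameter compounds across the iterations.

```python
# Toy illustration of how a small parameter error compounds when a model's
# output is fed back in as its next input. Not a real epidemic model.
true_growth = 1.05       # "true" daily growth factor
assumed_growth = 1.07    # modeller's estimate, only 2% too high
cases_true = cases_model = 1000.0

for day in range(60):
    cases_true *= true_growth
    cases_model *= assumed_growth

print(f"after 60 days: true ~{cases_true:,.0f}, model ~{cases_model:,.0f}, "
      f"overestimate x{cases_model / cases_true:.1f}")
# The 2% daily error compounds to roughly a 3x overestimate after 60 days.
```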
I completely agree. I have coded and worked with complex computer models for half a century. “Right” answer for wrong reason is an ever present problem, over and above verifying the coding. Always use with caution. And have as many “sanity checks” as possible.
As with many things, I know just enough to know how much I don’t know and how much uncertainty there is, so I approach the topic with what I consider a healthy amount of skepticism. I think computer modeling has value, certainly, but should never be viewed as equal to measured empirical scientific data.
Amplifier does not sound like a good comparison. They *always* amplify whatever comes in, and computer models are not that bad. Sometimes they might actually filter out the noise – if you are lucky. The problem with models would rather be that they are too flexible, prone to overfitting, and dependent on uncheckable assumptions. And the more complex the model, the better you can fit it – to reality? Or to whatever you want? As an example, the Danish Ministry of Fisheries once produced a useful model of the North Sea ecology that showed that the kind of fish taken by Danish fishermen could bear much bigger fishing quotas. Only someone checked the details and found out that there were enough purely guessed and uncheckable parameters in the model that you could get any result you wanted just by tweaking them.
As for those Ferguson models, they may well have had their limitations, but surely they are better than nothing. The discussion to have is what methods *in general* would have improved the result (as indeed this article does). Not “models are weak, I hate these policies, therefore we should have followed my intuition instead”, which is what a lot of debaters are doing.
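A minimal sketch of the overfitting point above, using synthetic data rather than anything from the fisheries model: give a model one free parameter per data point and it will reproduce the data perfectly while saying nothing reliable about anything outside it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy observations of a process that is really just flat plus noise.
x = np.linspace(0, 1, 10)
y = 5 + rng.normal(0, 0.5, size=x.size)

simple = np.polyfit(x, y, 1)     # 2 parameters
flexible = np.polyfit(x, y, 9)   # 10 parameters: one per data point

# Both "fit" the data, but ask each model about a point just outside it...
x_new = 1.2
print("simple model:  ", np.polyval(simple, x_new))
print("flexible model:", np.polyval(flexible, x_new))
# The flexible model chases the noise exactly and typically extrapolates
# wildly; the simple one stays near the underlying level of ~5.
```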
The amplifier analogy is due to my background in the field. To someone who has taken a few classes in electronics engineering or telecommunications theory, there are a lot of things, from the purely simple, like an amplifier, to the most complex, that work by doing something to a series of inputs and getting outputs from it. To some extent, all computing is a series of black boxes.
An amplifier is a simple black box that does one thing. A computer model is also a black box. It is more complicated and so can indeed filter out noise and compensate for likely errors. However, its complexity and its various error checking and noise cancellation can also make it hard to distinguish what constitutes an error. A screeching amplifier is easy to notice. A model whose prediction is wrong is much harder to notice in real time. The outcome being predicted may be something that hasn’t happened yet, and as mentioned before, just getting an accurate result one time isn’t validation. Science requires repeatability. A model must get accurate results many times with many different input variables and then be verified against real-world outcomes. That is the only valid scientific approach.
Humans, being impatient and wanting our answers now, are apt to latch onto methods to predict the future before they’re fully verified and tested according to need. We NEED a model to predict COVID deaths and here is the one we have, therefore we will use it because it’s all we have. There is logic in such a concession to pragmatism, but it isn’t science, and shouldn’t be held as such. Things should be honestly argued. If you want people to believe a COVID model with dire predictions of death totals and take extreme measures to prevent it, be honest and say there’s a chance the model could be wrong and things might not be that bad. If you can’t convince people, that’s the cost of a free society, my friend.

You consistently seem to advocate, Rasmus, for some objective truth which can be known, or at least reasonably guessed at, based on the opinions of experts and the best minds on the matter, and for the idea that when the popular view and this view come into conflict, the ‘correct’ view must always prevail. There is nothing inherently wrong with this view, so long as you make the argument honestly. You want the right decision made regardless of what the ‘mob’ believes. It’s a noble goal, but I’m not sure it’s so easy to reach as to pick a handful of trained scientists in a given field, put them in a room and task them with coming up with a policy.

Moreover, you seem too keen to dismiss the ‘mob’ as a passive force that can be safely ignored indefinitely without consequence, but the evidence suggests very strongly the opposite: that it is perilous to fail to consider popular opinion as well as expert opinion on any given issue. The need to make informed rational decisions for the collective good must be reconciled with the irrational nature of humanity in general. Any formulation of human government must come to some not quite satisfactory reconciliation of the irreconcilable.
Models are not better than nothing. What you need is careful observation. The numbers and predictions that Ferguson put out were based upon the values of various parameters that he put in. While his modelling may have been complex, and unnecessarily so, everything can easily be described by simple SIR-type models, which are exceedingly simple. That’s sufficient to get all the insight one needs under various clearly stated assumptions. As it was, Ferguson’s track record of hysterical predictions is not simply disastrous from an academic perspective but from a real-life perspective. His predictions on mad cow disease were off by many orders of magnitude and led to the completely unnecessary and barbaric culling of millions of cows (and perhaps recall that all life is precious and sacred). Ferguson’s predictions regarding Covid were also off by several orders of magnitude and led to hysterical and panicked reactions in virtually all of the Western world (with the exception of Sweden), with disastrous consequences and massive collateral damage from which it will be hard to recover.
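For reference, the SIR model alluded to here really is just three coupled equations; below is a minimal Euler-step sketch with made-up parameter values, showing how strongly the projected peak hinges on the assumed reproduction number R0 = beta/gamma.

```python
def sir(beta, gamma, s0=0.999, i0=0.001, days=200, dt=0.1):
    """Basic SIR model integrated with Euler steps.
    beta: transmission rate, gamma: recovery rate (R0 = beta/gamma).
    Returns the peak infected fraction of the population."""
    s, i, r = s0, i0, 0.0
    peak = i
    for _ in range(int(days / dt)):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
        peak = max(peak, i)
    return peak

# Everything depends on the guessed parameters: R0 = 1.5 vs R0 = 3.0
print(f"peak infected, R0=1.5: {sir(beta=0.15, gamma=0.10):.1%}")
print(f"peak infected, R0=3.0: {sir(beta=0.30, gamma=0.10):.1%}")
```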
The data going into the Imperial College model was shaky, but not totally garbage. During a public health emergency you have to work with whatever data you can lay your hands on. The results of the model, though, were definitely garbage. I was a sub-contractor at WHO during the pandemic, working on a related project. I needed to understand the Imperial College model and I was appalled by what I saw. The coding was so amateurish. And the Imperial College team acted with the sort of air of entitlement that would have got them booted off the project, had it been run on commercial lines.
Interesting – and I have no reason to doubt you. So, what would a better model have produced? What did your project produce at the time, that could have been used for decision-making?
Actually that may not be quite as heavy a condemnation as you would think. A lot of academic software is effectively in-house prototypes: quicker to write, but hacky and deficient in documentation and maintainability. In part this is mitigated by the fact that academic coders and domain experts tend to be the same person (which is better for reliable content than for coding technology) and that the first users are experts too. We were once asked to take over a program that worked perfectly on the systems on which it had been tested (and which had led to publications), but where applying it to different systems required you to go into the FORTRAN source code and tweak various hardwired constants each time. We were pretty appalled too – and saw little scope for getting it to something releasable – but the original results that the program had been written to produce were not in doubt.
Bias in bias out for the most part.
What about the cost to taxpayers and society of this garbage?
Many trillions.
Not just covid farce but climate emergency and net zero.
Data in, corrupt algorithm, garbage out.
“Had SAGE adopted a methodologically pluralistic approach, many of these errors could have been avoided,” the study argues.
“The failure to adapt and use diverse methods during a pandemic is not just a missed opportunity,” the researchers write. “It is a risk we can’t afford to repeat in future global health crises.”
Bit of ‘horse bolted and stable doors’ syndrome here.
I’m not sure about SAGE modelling, but here’s my diary entry for 24 January 2020, just a week before the first confirmed case in the UK, which seems to indicate that the ‘headless chickens’ methodology was closer to the UK’s approach to what was to become the Covid-19 pandemic:
“In a tweet on 24 January 2020, that will go down as one of the classic understatements of the current era, Richard Horton, Editor of the Lancet with 106,200 followers accused the media of “escalating anxiety by talking of a ‘killer virus’. In truth, from what we currently know, 2019-nCoV has moderate transmissibility and relatively low pathogenicity. There is no reason to foster panic with exaggerated language.”
January 24th 2020 was just seven days before the first confirmed Covid case in the UK.
On 10th January 2020 I emailed friends in China from my home in England:
‘Great news about my ticket for 27 January.
Not sure if you can get this link so here’s the start of an article about a new virus:
“That Mystery Disease Outbreak in China Could Be Caused by a Newly Discovered Virus
“China believes a mysterious pneumonia outbreak that struck 59 people is caused by a new strain of virus from the same family as SARS, which killed hundreds of people more than a decade ago.
Lead scientist Xu Jianguo told the official Xinhua news agency that experts had “preliminarily determined” a new type of coronavirus was behind the outbreak, first confirmed on December 31 in Wuhan, a central Chinese city with a population of more than 11 million.
It initially sparked fears of a resurgence of highly contagious Severe Acute Respiratory Syndrome (SARS), and prompted authorities in Hong Kong – badly hit by SARS in 2002-2003 – to take precautions, including stepping up the disinfection of trains and airplanes, and checks of passengers. (It coincided with the Spring Festival holiday, when it’s said one half of China travels to visit the other half.)
China has since ruled out a fresh outbreak of SARS, which killed 349 people in mainland China and another 299 in Hong Kong.
“A total of 15 positive results of the new type of coronavirus (later to be named Covid-19) had been detected” in the lab, through tests on infected blood samples and throat swabs, Xu said.
The World Health Organization (WHO) confirmed the preliminary discovery of a new coronavirus in a statement.
“Further investigations are also required to determine the source, modes of transmission, extent of infection and countermeasures implemented,” said Gauden Galea, the WHO Representative to China”.
I concluded:
“There’s more of the article but this is enough to give you the information you need.
So unless it’s an absolute emergency keep away from hospitals and crowded places”.
In other developments, the sun rose in the East this morning. This is not to belittle this story, which is necessary and should be shoved down the throats of the millions of Covidians who terrorized everyone else.
Safetyism and fear porn trumped common sense and logic, not to mention the demonization of proven remedies. And. Nothing. Else. Will. Happen.
Ain’t that the sad truth….
I am not sure it was just that.
It was done on purpose to brainwash people to obey authority unconditionally.
Plus it made many already rich, well connected people even richer.
I guess that is the question here: is it just that the “system” is not working as well as it should, or is there really a cabal of conspirators pulling all the strings with the explicit intention to brainwash people?
I am pretty sure it’s the first. We live in a very complicated society where people have many different incentives, etc. And in most cases our society works well. We have greater levels of health, wealth and safety than any generation before us. But sometimes things go wrong, as in this case.
But I don’t think there is a group of conspirators waking up every morning thinking about new ways to brainwash people. If someone already had the power to fake a worldwide pandemic they’d surely think of something better to do with their power.
Whilst I agree with you that it’s unlikely that there is a cabal of people bent on explicitly brainwashing the population, I think that most members of the “elite” became aware of the power that lockdown concentrated into their hands. It wasn’t designed as such, but once they opened Pandora’s Box they were seduced by it. Power is the seductress here, not design.
I find it quite unbelievable that such a complex and radical agenda could have been played out across numerous countries WITHOUT it having been planned in detail and managed closely.
Proven remedies like horse dewormer and the ingestion of diluted bleach?
For me the problem during Covid was that the other side, with all their conspiracy theories and miracle cures, was so far away from any kind of reason that I felt the government line was still the only thing I really had to go on.
… both of which were significant misrepresentations of what was actually said.
I believe the modellers were physicists?
Physicists cannot do statistics since they are used to having fantastically accurate mechanistic equations already in place.
With soft data only the simplest models with the fewest parameters are supportable.
You’re thinking of the Imperial College modellers. That team was headed up by Neil Ferguson, who is a physicist and certainly not an epidemiologist. He’s also an inept computer programmer (again, no relevant qualifications) with a history of pandemic projections that could be best described as “not even wrong.”
The source code for the IC model is available on GitHub for anyone who wants a laugh (or cry) at just how utterly useless the “experts” at the centre of these decisions really are.
Remember, SARS-CoV-2 is a seasonal respiratory virus, but IC decided not to model seasonality and thus projected huge numbers of infections in July. This was dutifully observed by the government and used to continue most lockdown policies over the summer.
Linked here is a code review of the infamous Imperial Model.
https://dailysceptic.org/2020/05/06/code-review-of-fergusons-model
Described as “SimCity but without the graphics”, it’s clearly a pile of steaming hot garbage that’s based on guessed variables and self-fulfilling assumptions.
For all its pretensions at being a sophisticated model, it might as well have been a fancy spreadsheet, but that doesn’t sound grand enough for a so-called professor.
You may be referring to the person who took the lead on much of the modelling. Interestingly a search shows only one paper published in physics and 400+ papers since then in epidemiology.
Prof Ferguson admitted in an interview with the FT that he moved away from physics because he was never going to be seriously good at physics.
So maybe it is a blessing in disguise that he was not modelling nuclear weapons or nuclear reactors?
“I believe the modellers were physicists?” It was even worse here in Scotland. Nicola Sturgeon took advice from an Edinburgh Uni prof., Devi Sridhar, who is an anthropologist (but ticked lots of woke boxes and said just what Sturgeon wanted to hear). So not surprising that outcomes were even worse in Scotland than in England.
You are quite right in everything you say about the bogus science that Nicoliar Sturgeon invoked in order to control the population of Scotland.
Yes, it really was illegal to travel between Edinburgh and Glasgow for a few months, as they were in different “zones” of her invention.
Credit where it’s due: Bojo refused to be bullied into increasing restrictions by Vallance and Whitty on both occasions.
Yes, people forget about the good side of Boris, and that Labour was pushing for stricter and longer restrictions so public sector parasites could enjoy their extended vacations.
When travelling to a HiFi show in Munich in May 2022 we had to wear FFP2 masks as a minimum, and my mate was nearly fined 80 euros for having just a paper one.
The CDC reports that, as of December this year, only 20% of adults had been vaccinated against the 2024/2025 variant.
I’m not a doctor, but surely such a low rate of vaccination is going to lead to an explosion in COVID numbers.
I didn’t realise the number of Covid-zombies was that high!
The irony of the mRNA jab is that not only is it highly dangerous, but those who take it are rendered more susceptible to Covid, not less.
No one is stopping you having vaccines.
What people like me object to is being forced to participate in untested and ineffective medical procedures.
You surely know by now that covid pseudo vaccines do not stop you getting covid and do not stop transmissions.
They have terrible side effects for many people (about one in 800), especially the young, who are NOT susceptible to covid.
Is anyone shocked to hear that Covid, while a real and serious disease in some demographics, was wildly overestimated as a threat? It was understandable at first, when it was largely an unknown quantity, but as the truth became obvious, drunk-with-power authorities refused to back away from their enjoyable control of everything.
“The authorities” in this case largely being an alliance of journalists (BBC) and activists (e.g. Independent SAGE). They swept all before them in a climate of mass hysteria.
“Imperial College London’s influence as the World Health Organization’s sole collaborating centre for infectious disease modelling”.
Staggering that the WHO relied entirely on models produced by the man with the worst track record in the business of pandemic forecasting (foot and mouth, swine flu, bird flu, SARS, MERS), Prof. Neil Ferguson. Not just wrong, but wrong by a country mile.
The man has a great deal to answer for, and not just breaking the lockdown rules he was instrumental in imposing for a bit of rumpy-pumpy. The useless UK Covid inquiry will hold no-one to account, but ‘lessons will be learned’ for future pandemics, for which management will be handed over to, God help us, the WHO.
You only have to look at the number of major corporations funding Imperial to understand why they knew they could rely on it to provide the necessary data that would inspire the most fear. That Ferguson had previous in this area made no difference, for they knew they could rely on the short memories of the vast majority and the compliance of the corporate MSM.
An incredible amount of building work across west London has been undertaken by Imperial in recent years which makes one realise that this college is a huge money making concern.
If I wanted ideologically inspired misrepresentation of data, I can’t think of a better place to get it.
ICL, along with the LSE, has been a hotbed of dangerous radicalism for decades.
Modelling is meant to be a starting point, not an end point used to make broad policy decisions. We are suffering from a lot of terrible decisions made on the basis of modelling.
It’s true. But I am not quite sure what the alternative to modelling is.
The problem here was bad modelling. I do not know how else we should make policy decisions. We need something as a basis for policy decisions. What else do we have: gut feelings, reading tea leaves?
What else do we have? A few basic principles of morality should be built in, like not preventing people from visiting relatives in hospitals or care homes, or not making it illegal to attend the funeral of a loved one if there were already another three people there, including a priest or minister.
Clearly very important lessons, particularly given what we know now about the many costs. That said, would one as a policy decider lean on the side of caution, or take the chance that the risk was overstated? And remember, this was without the benefit of hindsight. The quality of advice is obviously critical, but it did not and could not determine all policy decisions.
A couple of the author’s comments on predictions don’t clarify whether the modellers had taken sufficient account of the impact of the vaccine. Certainly the Dec 20 modelling had not, but it became fairly clear the vaccine was making quite a difference to the serious hospitalisation rate, if not the overall infection rate, as it was rolled out that winter. And it was the former that drove lockdowns.
It makes sense to highlight the mistakes of the past but the more important question is how we move forward. How would we do better if the next pandemic strikes? How do we deal with any other public health or policy issue from drunk driving to smoking to air pollution to microplastics to power generation to pesticides to food additives?
It can’t be that we all just “do our own research” on X and then everyone does as they please. No, we need strong institutions. The solution clearly has to be to make our institutions better rather than just moan about them. So how do we move forward from this?
What we’re seeing is that the internet is emerging as a genuine source of institutional criticism. It’s not everyone on X, but it turns out that there is a breadth of sufficient non-aligned ‘smart’ eyes looking at what the institutions say and do. They often have technical skills – eg in maths, statistics, engineering, logistics, technology etc – but not the subject-matter focus of the institution. And, it turns out, that because of their technical experience they are able to ask good questions, to do good independent analysis, and to raise pertinent issues that the institutions should consider or address.
The reaction of the institutions – the academic cathedral – so far has been to try to shut out and demean these lay observers. But since this plays out in open public discussion, it ends up with obvious consequences when the lay observers turn out to have been asking the right questions and the institutions have doubled down instead of having listened. It’s these conversations that are making the institutions look like censorious fools playing for politics, not adjusting to valid criticisms, and thereby decreasing institutional trust.
Now, with Covid, I happen to believe it was a horrible situation because ultimately everything was going to be a guess. Likelihoods and action plans had to be estimated on very little firm data. Hindsight, now, might say there was a ‘right’ answer, but at the time there was a huge amount of uncertainty and, at the outset, a need to find anything that might have a beneficial effect. Some of those guesses would fail (hydroxychloroquine), or be excessive (hospital ships), or work better in theory than practice (masks). There was no prior experience to draw upon. Some of the guesses got lucky, and as health services learnt, those guesses got reinforced as being ‘right’. But you had to make a lot of guesses to find those that worked. The best institutions learnt and adapted. The worst, which may include the Covid forecasters and fringe commentators, doubled down on something already shown to be wrong.
Well duh!
If this was the private sector, this whole lot would have been sacked for gross incompetence. Instead, they get gongs and then are allowed to mess up again and again.
Good luck DOGE, I hope you wipe the lot out.
The Covid decisions were predicated on the opinions of a few dedicated advisors who doubled down and pitched worst case scenarios made even worse by poor forecast models that nobody questioned. It was obvious that Covid had low morbidity in age groups less than 50. But we ploughed on with doom laden forecasts generated by Imperial College modelled by academics whose previous efforts were vastly out of line.
It should be remembered that Boris Johnson went against ALL the establishment opinion including the Labour Party, BBC and SAGE when he lifted lockdowns entirely.
Do we have the same scenario with CO2 emissions? A case of Groupthink?
The Left’s public health wing early on recognized the disease’s potential to boost Progressive government’s role in not just encouraging a healthy populace but in essentially managing, if not outright mandating, it – precisely in line with the Left’s Daddy-Government fantasies. This goes a long way in explaining the persistent exaggeration of the virus’s threat to everyone, not just certain well-defined and limited demographic groups. Within six months of Covid’s identification, it was turned into a naked power grab, which people quickly recognized and rejected – which was why social networks had to be scrubbed of all dissent. https://www.wsj.com/articles/covid-worsened-america-rage-virus-for-which-theres-no-vaccine-lockdown-vaccine-mandates-ron-desantis-stanford-masking-2670cd39?st=yyPHRk&reflink=desktopwebshare_permalink
From what I remember, SAGE’s response to their wild overprediction of hospitalizations from the Omicron variant was that they were tasked with providing scenarios that “supported particular policy options”.
Yes, policy-based evidence-making.
DUH!!!!!
Everything about Covid was wildly wrong or flagrantly bogus, from the “New Normal” onwards. The statistics were blatantly rigged.