March 23, 2021   8 mins

Seven million doses of a safe, life-saving vaccine are sitting in warehouses in the US. The Food and Drug Administration (FDA) has not issued a licence for the use of the Oxford-AstraZeneca vaccine, despite about 15 million people having been given it in the UK with no detectable increase in risk. Mercifully, the US has agreed to send those doses to Canada and Mexico, so they won’t go to waste, but people who could have been vaccinated by now have not been, and some of them will probably die as a result.

There’s more. The Moderna vaccine was ready to go last January, within a couple of days of the SARS-CoV-2 viral genome being sequenced. But it was not approved in the US until December, after months of testing and regulatory caution. If it had been approved instantly, many hundreds of thousands of lives would have been saved. It could all have been over a long time ago.

But, on the other hand, the history of vaccines is not entirely unblemished. In 1955, a batch of polio vaccine caused thousands of cases of polio, hundreds of cases of paralysis, and 10 children’s deaths. If people had been more cautious then, those lives would have been saved – and perhaps people would be less nervous about vaccines generally. Certainly the incident raised vaccine fears.

Which of these is the lesson we need to draw? Are we too cautious, too risk averse; or is it wise to be risk averse when it comes to vaccines, to avoid driving public panic with rare but real disasters?

A year ago today, on 23 March 2020, we went into lockdown. In the past 12 months, we’ve come to understand a lot of technical concepts – infection fatality rates, vaccine efficacy rates, exponential curves. But the most significant thing is that we have all had to become utilitarians. We all now think in terms of QALYs, utils, the greatest good for the greatest number.

As soon as lockdown began, there were fights over whether it did more harm than good. Both advocates and sceptics of lockdown argued in the same terms. The economic damage would undoubtedly ruin livelihoods; but would it do more harm, as measured in quality-adjusted life years (QALYs), than letting the virus run unchecked?

The argument over lockdown was a version of the famous trolley problem: if an out-of-control train is hurtling down a track towards five rail workers, should you divert it onto another track, where only one person is working? To hugely oversimplify, the utilitarian answer is yes: you save five lives for the price of one.

But not all ethicists are utilitarians. To again oversimplify, deontologists – Immanuel Kant is the best-known example – say you should follow certain rules: some things are always wrong, whatever the consequences. It is generally (though not universally) accepted that Kant would not pull the lever. He would say it is wrong to act to kill the one person, even though it would end up saving four lives on net.

The great revelation of Covid-19 is that we are all consequentialists, or all claim to be. With lockdown, the argument wasn’t usually over whether it was simply wrong to kill people with lockdown but permissible to let them die from Covid. It was whether lockdown saved more QALYs than it cost. We’re not arguing whether it’s wrong to kill one to save five – we’re arguing about the number of people on each track.

Last week, several countries suspended use of the Oxford-AstraZeneca vaccine because of fears over blood clots. I think it was a terrible decision.

For that matter, so does the European Medicines Agency. It says that whether or not the vaccine causes some small number of clots (which it may do, in a subset of people), the correct decision is to carry on vaccinating people with it as quickly as possible. “[T]he vaccine’s proven efficacy in preventing hospitalisation and death from COVID-19 outweighs the extremely small likelihood of developing DIC or CVST,” it said in its statement – DIC and CVST being disseminated intravascular coagulation and cerebral venous sinus thrombosis, two rare clotting conditions. (It’s worth noting that the US trial has now announced its results, which were very positive, and had zero cases of CVST in its 21,000 participants.)

Again: this is stark utilitarian language. If we pause the delivery of the Oxford/AZ vaccine, we will prevent X deaths from blood clots. But it will also lead to Y deaths from Covid because fewer people are vaccinated. If X is smaller than Y, then vaccinating is a good idea. We’re not arguing over whether it is wrong to kill a small number by giving them a vaccine, even if it saves lives; we’re arguing over how many we would save. Again, we’re not arguing about whether we should pull the lever, we’re arguing about how many people are on the track.

(Of course, X could well be zero, and I think it’s very possible that it is. Certainly, X is much, much smaller than Y. But the principle is unchanged.)
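For the sake of concreteness, the X-versus-Y reasoning above is just a comparison of expected deaths under each choice. Here is a toy sketch of that calculation – every number in it is invented purely for illustration, not a real estimate of either figure:

```python
# Toy version of the X-vs-Y comparison in the text.
# X: deaths from vaccine-linked clots if the rollout continues.
# Y: extra Covid deaths caused by pausing the rollout.
# Both values below are made up for illustration only.

def better_to_continue(x: float, y: float) -> bool:
    """Continue the rollout iff pausing would cost more lives (y)
    than it would save (x)."""
    return x < y

X = 5       # hypothetical clot deaths avoided by pausing
Y = 2000    # hypothetical Covid deaths caused by pausing

print(better_to_continue(X, Y))  # prints True: keep vaccinating
```

The point of the article, of course, is that this bare comparison is only the first-order sum: the real consequences (publicity effects, hesitancy, and so on) don’t fit into two numbers.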

Utilitarianism is absolutely how we should be thinking. For one thing, it’s not clear to me that the distinction between killing someone through action and allowing them to die through inaction is even coherent. If I am driving, and a child runs out in front of me, and I don’t press the brake – is that an act, or an omission? Or, for that matter, is stopping the vaccine rollout the act, or is continuing the vaccine rollout?

The philosopher Jonathan Bennett argues that something should count as an “action” if, of all the bodily movements I could have taken, only a small fraction would lead to this outcome. So pushing a car down a hill towards a cliff would count, because I could have done millions of things – tap-danced, written a sonnet, played volleyball – and almost none of them would have made it roll off a cliff. But not stopping a car when it is already rolling towards a cliff would not, because if I’d tap-danced near it, the car would have gone over anyway; only a small subset of actions, like putting a big rock in front of it, would have stopped it. Maybe that works as a distinction, but I don’t know if it answers my “not pressing the brake” question.

What’s interesting, though, is that true utilitarianism is extremely complex. In the pandemic, mercifully, it’s often been pretty straightforward to work out which of two actions is better. Lockdown almost certainly saved more lives than it cost, because 1) loads and loads of people were dying and 2) if we hadn’t locked down, the economy would have taken a huge hit anyway (as Sweden’s did, almost as much as its neighbour Denmark’s, despite the lack of a lockdown) because of all the people too scared to go to the shops or the office while a deadly pandemic was raging. Certainly it’s not the case that countries which controlled the virus better had worse economic outcomes.

Similarly, rolling out the Ox/AZ vaccine despite some small risk of a small number of clotting deaths is almost certainly a good idea because 1) the vaccine will save so many lives and 2) it turns out that stopping the rollout scared people anyway. It’s not a finely balanced situation. The utilitarian calculus is fairly straightforward.

But this is by no means always the case. Eliezer Yudkowsky is an AI theorist and arch-utilitarian, who argues that a sufficiently enormous number of people getting dust specks in their eye is worse than one person being tortured for 50 years. When I asked him a rather stupid question about utilitarianism, he said that “most people” (he did not say “especially you, Tom Chivers” but it was, I felt, heavily implied) were not “smart enough” to operate on a utilitarian basis, because the real consequences of any given action are so complex and unknowable.

Here’s what I mean by that. If I leave the house and cross the road to go to the shop, I will slightly change the flow of the traffic. Some cars that would have gone through a green light will now have to stop at the red. That small change will slowly affect the entire road network, like the butterfly flapping its wings and causing a thunderstorm. Eventually it will mean that cars which would have been in fatal crashes will now not, while cars that wouldn’t have been now will. Some people will live who would otherwise have died, and vice versa. I can’t possibly predict the outcomes of even a tiny, inconsequential act like going to the shop.

So instead of trying to predict the outcomes of each action in real time, we ought to behave as quasi-Kantians – to establish rules to follow, and then follow those rules: if we try to compute the best outcomes on the fly, it’ll work out worse, because we’re not very good at it. (For one thing, you might end up sitting in your house, alone, unable to go to the shop in case you kill thousands of strangers.)

For instance, go back to X and Y above. If X is smaller than Y – if fewer people will die from blood clots than would be saved by the vaccine – then we might say that the vaccine is a good idea.

But those aren’t the only numbers involved. What happens if a few people die from blood clots, and that bad publicity scares others into not taking the vaccine? Does that mean we should stop the rollout? Or, as it in fact turns out, would stopping the rollout for minor fears scare more people? Those stark X and Y figures are only the surface froth: computing the real “consequences” of any action is phenomenally difficult.

In non-pandemic times, we seem to have a simple rule for this: when a vaccine is linked to some negative impact, we suspend its use and investigate it. Concerns (baseless, as it turned out) about thiomersal, a mercury-based preservative, in the hepatitis B vaccine led US regulators to suspend its use in 1999. Australian concerns over febrile convulsions in children led regulators there to suspend the use of the influenza vaccine in under-fives in 2010.

Is this the right rule? Not all vaccine fears are baseless: remember the polio vaccine deaths above? But that was an old form of vaccine, made by inactivating a live virus; evidently the virus had not been fully inactivated. Modern mRNA, protein subunit or viral vector vaccines are not biologically capable of causing the infection they are meant to protect against, and the process for inactivating viruses when we do use them in vaccines nowadays has been vastly improved. Besides, for a vaccine to have reached the public, it must have been through Phase III testing in tens of thousands of people – hundreds or thousands of person-years of testing.

And following that rule has led to some pretty negative outcomes. Suspending the Ox/AZ vaccine, as we saw above, does not seem to have boosted public confidence – rather the opposite. Similarly, the suspensions of the US hepatitis vaccine and the Australian flu vaccine seem to have been associated in both cases with a dramatic reduction in vaccine uptake. The public doesn’t hear “We’re just checking this vaccine is safe, to reassure you;” they hear “This vaccine is dangerous, and we’re taking it offline.”

In the wider world of medicine, regulatory institutions such as the MHRA, EMA or FDA have a rule in place that says something like “You need to do years of incredibly rigorous trials on new drugs, to prove safety and efficacy to some P<0.001 level, before we will roll it out to the public.” That’s because even if speeding up the process would likely do more good than harm, it only takes a few thalidomide-type disasters to damage public confidence.

That makes sense – even in the pandemic, there were lots of very confident people telling us that some drug or another worked; sometimes they were right, and it was tocilizumab, but sometimes they were wrong, and it was hydroxychloroquine. But this need for perfect data has been far too widely applied, and led us to avoid much lower-cost and easily reversible interventions, such as masks, border closures or a ban on mass gatherings like Cheltenham Festival, for far too long.

We’re all utilitarians now, as I say. The pandemic has forced us to be. But the tricky bit will be learning when to be the short-term utilitarian, working out the pros and cons of the individual case, and when to be the quasi-Kantian rule-follower.

Obviously pandemic time differs from peacetime, but I think we should learn to be a bit less rigid in some of our rules. The panicked need to suspend vaccines at the first sign of danger, for instance, can be relaxed somewhat. Modern vaccines are amazingly safe. In any case, when we shut down vaccine programmes out of an overabundance of caution, in the hope of reassuring the public, it seems to have the exact opposite effect.

I’m not saying we should have immediately rolled out the Moderna vaccine the moment it was available, or that we should have carried on giving a lethal polio vaccine to children in order to prevent vaccine hesitancy. But it is ridiculous that Americans still can’t take the Oxford/AstraZeneca vaccine, that it is sitting unused in warehouses even as it is already saving thousands of lives elsewhere; that whole nations have suspended its use in a doomed attempt to reassure the public of something they didn’t need reassuring about. Perhaps, after a year of this pandemic, we finally ought to realise that when it comes to vaccines, the real danger is in being too cautious.

Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.