A couple of things happened in the past week that will have been of huge interest to a lot of people, for obvious reasons. One, a study on the impact of the antimalarial drug hydroxychloroquine on Covid-19 was retracted. Two, another study, looking at the effect of the Ebola drug remdesivir on the virus, was shown to contain an extremely basic statistical error.
Those two drugs and their potential for treating Covid-19 have been in the news a lot recently. Hydroxychloroquine in particular has received a lot of attention, because one Donald J Trump has apparently been taking it. The US president had tweeted in March about a different study, headed by the French scientist and Asterix character lookalike Didier Raoult, stating (capitalisation his):
“HYDROXYCHLOROQUINE & AZITHROMYCIN, taken together, have a real chance to be one of the biggest game changers in the history of medicine … Hopefully they will BOTH (H works better with A, International Journal of Antimicrobial Agents) be put in use IMMEDIATELY. PEOPLE ARE DYING, MOVE FAST, and GOD BLESS EVERYONE!”
That study, by the way, contained the same basic statistical error as the remdesivir one. That error, for those interested: they divided people into two groups, one that had taken the drug and one control group. Then they looked at whether people’s conditions had improved after a certain time.
But for some reason, both studies took people who’d died out of their results: one in the hydroxychloroquine study (and three people who ended up in the ICU); seven in the remdesivir one. “Dying” is pretty obviously “not improving” — but because those deaths were removed (and because the deaths fell disproportionately among the treatment group rather than the control), it made the drugs’ results look better than they were.
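To see why dropping the deaths flatters the treatment, here is a minimal sketch with made-up numbers (nothing below comes from the actual trials):

```python
# Hypothetical figures, for illustration only -- not the real trial data.
# Two arms, 20 patients each. In both arms 10 patients improve, but four
# patients in the treatment arm die.

def improvement_rate(improved, died, total, exclude_deaths):
    """Share of patients counted as 'improved'."""
    denominator = total - died if exclude_deaths else total
    return improved / denominator

# Counting everyone, both arms improve at the same rate:
fair_treat = improvement_rate(10, 4, 20, exclude_deaths=False)   # 10/20 = 0.500
control    = improvement_rate(10, 0, 20, exclude_deaths=False)   # 10/20 = 0.500

# Quietly remove the four deaths from the denominator, and the
# treatment arm suddenly looks better:
biased_treat = improvement_rate(10, 4, 20, exclude_deaths=True)  # 10/16 = 0.625

print(fair_treat, control, biased_treat)
```

With deaths kept in, the drug does nothing; drop them, and it appears to beat the control by 12.5 percentage points, despite killing a fifth of the patients who took it.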
That wasn’t the only problem with the two. They were also tiny (remdesivir study, 61 subjects; hydroxychloroquine study, 42 subjects, 16 of whom were controls) and, in the case of the latter, not randomised — the control group was some people in another hospital, plus some patients who’d refused to be given the drug. This is all on top of an earlier remdesivir study that seems to have changed what it was looking for halfway through, a tried and tested way of making unsuccessful drug trials look successful.
We’re living in a strange and frightening time, and we’re relying heavily on medical science to save us from it. Vaccine trials, drug trials; antibody testing to discover how many people have had it and therefore how deadly it is; studies on whether children are likely to get or pass on the disease, to see if it’s safe to reopen schools. We desperately need good science done quickly.
And science has changed as a result. Studies are coming out faster. “Preprint” papers, studies which have been carried out and posted by the authors to an open-access server such as arXiv, PsyArXiv or medRxiv, have been around for years, and were becoming more popular even before Covid-19. They’re a good idea, in many ways: they allow other scientists to examine studies and check them for problems ahead of official publication, in a way that the standard peer-review system doesn’t.
But they are, as that implies, not peer-reviewed. Peer review is certainly far from perfect — the remdesivir study I mentioned in the first paragraph was peer-reviewed and the statistical error still got through; and there are major systemic problems with it which are too complex to go into here. But it does act as a basic sense check. Preprints are a useful system, but they can’t replace peer review entirely.
I’m used to seeing press releases and expert comments appear in my inbox about new scientific papers. Usually, the bulk of them are embargoed: big red letters at the top of the email saying “please do not publish before 08:00 BST 24 May” or whatever. Now, the large majority of them are preprints, available immediately. Everything is happening faster.
That is fine, and as it should be — and as a science writer, it’s hard to resist. I love a preprint as much as the next deadline-pressed journalist with no institutional journal access. (See?) But, in the end, as XKCD reminds us, an un-peer-reviewed study could also be described as “a PDF”. Some contain a vital contribution to scientific knowledge; some are worse-than-useless garbage. (That is also true of peer-reviewed studies, I should admit; but slightly less so.) It requires some skill to tell which is which, or who to trust.
Since peer-reviewed papers are being rushed through, too, it means that there is a lot more noise in the system than there usually is; a lot more garbage. That’s not anyone’s fault, it’s inherent in trying to do things quickly. (“Cheap, fast, good. Choose two.”)
And over the last few months, we’ve seen an awful lot of things catch people’s attention: the apparent link between a country’s Covid-19 outcomes and BCG vaccination; the use of plasma from recovered patients’ blood as a treatment; the idea that smoking offers protection (or very much doesn’t). Zinc, vitamin D. Many, perhaps most, of these will turn out to be nonsense.
You might — and some do — say: so what? People are dying in hospitals; who cares whether the remdesivir trials are imperfect? Let’s throw the drugs at people and see what happens. We’re in a burning building; we might as well jump holding this bedsheet as a parachute.
But we’re not in a burning building. Covid-19 is terrible and frightening, but it’s not a death sentence for most people; in London, it seems that 17% of people have had the disease, which works out as an infection fatality rate of about 0.5% — one person in every 200 dying. It could be higher in reality; it’s a complicated thing to measure. But that’s probably not too far off. Most people who get it will get better.
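As a rough back-of-envelope check of that 0.5% figure (the population and death counts below are illustrative assumptions, not figures from the article):

```python
# Back-of-envelope infection fatality rate (IFR) for London.
# Both inputs are rough assumptions for illustration.
london_population = 9_000_000
prevalence = 0.17                       # share estimated to have had the disease
estimated_infections = london_population * prevalence  # ~1.53 million

covid_deaths = 7_600                    # assumed London death toll at the time
ifr = covid_deaths / estimated_infections
print(f"{ifr:.2%}")                     # roughly 0.5%, i.e. about 1 in 200
```

The point of the calculation is the shape, not the exact inputs: a 0.5% IFR means the overwhelming majority of infected people recover, which is why “throw anything at it” is not a free bet.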
And drugs are not harmless. According to the FDA, remdesivir’s side effects include liver damage; hydroxychloroquine comes with a long list of possible unwanted outcomes. Some drugs actively worsen people’s chances of survival. If we start giving these drugs to people, there’s a strong possibility that we’ll end up killing a few of them.
More than that: there’s a thing that the psychologist and serial debunker of bad science Nick Brown calls “scientific hysteresis”. Hysteresis is when something gets pushed out of shape by a force and then doesn’t fully return to its original shape when the force is removed.
The idea of scientific hysteresis is that when a scientific idea — vaccines cause autism, say, or power-posing makes you confident — gains traction, it changes the shape of public opinion. Now, more people believe that thing than previously did.
If those ideas are debunked, as both those examples essentially have been, there is no reason to believe them any more: the original reason has been removed. But often, public opinion does not snap back; a deformation lingers. People still believe that vaccines cause autism or that standing with your legs apart will get you a pay rise; some of them might not believe it as strongly, but there’s a lingering “wasn’t there…” feeling. This phenomenon, under the name “canonisation of false facts”, appears to be real, and it’s a genuine problem in psychology and other disciplines where many purported facts are being overturned by the replication crisis.
This is the danger when we in the media, or the public, or any presidents who happen to be passing, leap on studies into hydroxychloroquine or remdesivir or convalescent plasma and hold them up as saviours. Most of them probably won’t be, but there will still be thousands of people who think they are, and pressure on doctors and regulators to provide them.
I can’t see a way of getting science done quickly without lots of these sorts of mistakes being made; I suppose the only thing I can do is call for the media to be extra careful about reporting on interesting new treatments, and to really stress the uncertainties and possible mistakes.
If you want a salutary lesson in all this: last week, a huge and much better study — 96,000 patients, almost 15,000 of whom received hydroxychloroquine or chloroquine — was published in the Lancet. It wasn’t a full randomised controlled trial, but it used the 81,000 subjects who received other treatments as a control.
(For the record, that means there were pre-existing differences between the intervention group and the control, making the results harder to be confident in, although the researchers did their best to account for those differences statistically.)
The study found that the subjects given the drug did no better than those who weren’t; in fact, if I’m reading the study right, they were several times more likely to suffer from cardiac arrhythmia and, more importantly, about 30% more likely to die.
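For a sense of what “about 30% more likely to die” means in absolute terms, here is a toy calculation; the baseline mortality rate below is a hypothetical figure, not one taken from the Lancet study:

```python
# Translating a relative risk into absolute terms.
# The baseline mortality is a made-up illustrative figure.
relative_risk = 1.3                 # "about 30% more likely to die"
control_mortality = 0.09            # hypothetical: 9% of control patients die
treated_mortality = control_mortality * relative_risk

print(f"{treated_mortality:.1%}")   # 11.7%: roughly 2.7 extra deaths per 100 treated
```

The relative figure sounds abstract; the absolute version is the one that matters at the bedside.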
Fast science is happening. It has to happen. But the rest of us (especially those of us who make science public and well-known) need to be extra careful to make it clear that fast science is not the same thing as good science, because sometimes people will die.
ADDENDUM: I said a few posts ago that I was going to try to make more falsifiable predictions, but I haven’t managed it since, so I’m going to get back on the horse.
- By 20 May 2021, at least one systematic review published in a top-50 impact factor journal, looking at randomised controlled trials of remdesivir for Covid-19, will have found some mortality benefit (55% confidence)
- By 20 May 2021, no systematic review of randomised controlled trials of hydroxychloroquine published in a top-50 impact factor journal will have found any mortality benefit (60% confidence)
I gather from a superforecaster friend that the time you put into forecasting is one of the strongest predictors for how well you do, so having bashed these out in two minutes, I expect them to be dreadful. Hence the low confidence.
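For what it’s worth, the standard way to score probabilistic predictions like these once they resolve is the Brier score: the mean squared difference between the forecast probability and the outcome (1 if it happened, 0 if not). Lower is better, and always guessing 50% scores 0.25. A minimal sketch, using the two predictions above and a supposed pair of outcomes:

```python
# Brier score: mean squared error between forecast probabilities and
# binary outcomes (1 = happened, 0 = didn't). Lower is better.

def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# The two predictions above, supposing (hypothetically) both come true:
forecasts = [0.55, 0.60]
outcomes = [1, 1]
print(brier_score(forecasts, outcomes))  # about 0.181 -- barely beats a coin flip
```

Which is the arithmetic behind the hedging: forecasts pitched near 50% can’t score very badly, but they can’t score very well either.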