A couple of weeks ago, the US FDA gave the Ebola drug remdesivir an emergency use authorisation for treating Covid-19. That was on the strength of a trial which found that patients got better 31% faster on the drug than on a placebo. Sounds good, right? Well: perhaps not.
Imagine you’re doing a study. Say you want to find out whether teaching outdoors in all weather improves children’s school results. You take 60 children and randomly assign them to two classes: “outside”, and “inside”, the control group. You decide you’re going to look at their GCSE results at the end and see who does better.
At the end of the year, they’re pretty similar. You’re disappointed. But you’ve also measured a bunch of other things: percentage of homework assignments handed in; teacher feedback on behaviour; attention span, whatever. You go through all those, and you notice that on one measure — say, pupils’ reported satisfaction with class — the outside class does noticeably better.
The effect is “statistically significant”: that is, you’d expect to see a result that big by chance less than one time in 20 (written as “p<0.05”). So you hand in your report to the Department for Education saying “children enjoy class more if they have lessons outside”. Is that right?
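That one-in-20 figure is easy to see in a toy simulation (this is an illustration of what p<0.05 means in general, not the remdesivir data): run thousands of fake trials in which the “treatment” does nothing at all, and roughly 5% of them will still clear the significance bar by chance.

```python
import math
import random

random.seed(0)

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the standard normal."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def fake_trial(n=30):
    """One trial where the treatment does nothing: both groups are
    drawn from the same distribution, so any difference is chance."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # z-test on the difference of means (known variance of 1 per group)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return two_sided_p(z)

trials = 10_000
false_positives = sum(fake_trial() < 0.05 for _ in range(trials))
rate = false_positives / trials
print(rate)  # close to 0.05: about one "significant" fluke in 20
```

The sample sizes and distributions here are made up purely for illustration; the point is that the 5% false-positive rate is baked into the p<0.05 threshold itself.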
Well: you don’t know. You divided your children up at random, but the classes could be different. If one happened to have more of the bright kids, then you might find that one did better at GCSE results. If one of them had the goodie-goodies, you might find it did better on behaviour. And if one of them happened to have more cheerful types, you might find they enjoyed the class more. And it would have nothing to do with whether you taught them outside or inside.
What’s more, crucially, the more things you look at, the more likely you are to hit that one-in-20 coincidence and get a “statistically significant” result from chance alone. Picking out and reporting whichever outcome happened to come up positive, rather than the one you set out to measure, is called “outcome switching”, and the underlying problem is best illustrated by this XKCD comic.
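The same toy simulation shows how fast this multiplies (again, invented numbers, nothing to do with the actual trial): if a study with no real effect measures 20 independent outcomes, the chance that at least one of them comes out “significant” is about 1 − 0.95²⁰ ≈ 64%.

```python
import math
import random

random.seed(1)

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the standard normal."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def null_outcome(n=30):
    """One measured outcome in a study where nothing is really going on."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return two_sided_p(z)

studies = 2_000
outcomes_per_study = 20
# A study "finds something" if ANY of its 20 null outcomes hits p < 0.05
hits = sum(
    any(null_outcome() < 0.05 for _ in range(outcomes_per_study))
    for _ in range(studies)
)
frac = hits / studies
print(frac)  # roughly 1 - 0.95**20, i.e. around 0.64
```

In other words, a researcher who quietly measures 20 things and reports the one that “worked” will stumble on a false positive most of the time.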
According to the Oxford Centre for Evidence-Based Medicine, exactly this sort of outcome switching has gone on with remdesivir. Initially, the study was going to look at mortality and the percentage of patients who needed ventilators. But 13 days before the study’s release, they changed it and started looking at how long it took patients to recover. They also added 27 more outcomes; of those, they reported only time to recovery and one other, “treatment-related improvements”. There was no improvement on the two outcomes they were initially planning to use, including mortality.
The remdesivir paper is not yet available, so we can’t assess it. Maybe they had very good reasons for switching outcomes. And they report a very significant result — p<0.001, not just p<0.05.
Not only does the lack of scrutiny risk wasting resources but there are also the side effects to consider.
Spot-on. If you look for more than 20 outcomes, by chance one of them will give you the results you want…
$4,300 per dose? Hydroxychloroquine costs 63 cents a tablet, actually works, and is being mass-produced and used all over the world. So, the choice is obvious! Remdesivir!
Oh, wait…
By the way, it’s not really an “Ebola drug” since it’s pretty useless against that too.
Except it doesn’t work.
https://www.bmj.com/content…
This article is so very useful. I did think reducing treatment time by a few days was not that significant, but I had not thought that the effect on mortality had probably been deliberately left out. This needs to be explained to the WHO and others who in good faith waste public money.
I was aware that drug companies did not always behave ethically, sometimes buying up unfavourable research results. In the middle of a worldwide pandemic, I am saddened that this can happen.