March 10, 2021 - 11:36am

I’ve been assessing medical tests for thirty years, and Covid testing is no different: it is always a game of probabilities. Judged by those numbers, the current mass testing regime for children in schools simply doesn’t pass.

A test’s fitness for purpose is a combination of its sensitivity (the percentage of infected cases it correctly identifies) and its specificity (the percentage of uninfected people it correctly reports as negative). But fitness also depends on how, where and when the test is used.

Different studies of the UK Innova lateral flow test (the test being used in schools) have variously reported its sensitivity as 78%, 58%, 40% and even 3%. The higher 78% and 58% figures come from using the test in people with symptoms; the lower 40% and 3% figures come from mass testing of people without symptoms, as is being done in schools. (And none of these studies has assessed how well the test detects infection in children.) So although the test can pick up people who have the infection, it will miss quite a few. There is therefore a risk that disinhibition after a negative test could actually exacerbate case numbers, if children incorrectly conclude that they are safe and the rules no longer apply.

But more concerning are the false positives, which relate to the specificity. The original Government studies found that only around 3 in 1,000 people were getting false positives, and this dropped to 1 in 1,000 in the Liverpool study. That doesn’t sound like a lot, right?

But consider the problem from the perspective of a pupil who has just got a positive test result. The reasonable question for them (and their parents) to ask is “what are the chances that this is a false positive?” Given that a positive test result means the pupil, their family and their school bubble will have to isolate for 10 days, a high false positive probability is a real problem.

The answer to that pupil’s question depends on the prevalence of the disease and the accuracy of the test — let’s consider three scenarios.

PROBABILITY OF FALSE POSITIVES IN THREE SCENARIOS
(per 1,000,000 pupils tested; figures assume 50% sensitivity and a 1-in-1,000 false positive rate)

Scenario A (1 in 100 infected):     5,000 true positives vs 990 false positives (about 1 in 6 positives is false)
Scenario B (1 in 1,000 infected):   500 true positives vs 999 false positives (about 2 in 3 positives are false)
Scenario C (1 in 10,000 infected):  50 true positives vs 1,000 false positives (about 20 in 21 positives are false)

Data: Jon Deeks

Where 1 in 100 pupils have the infection (Scenario A), testing a million would find 5,000 cases (the test detects around half of the 10,000 who are infected) but also produce 990 false positives. This ratio of true to false positives is quite favourable: 5 out of every 6 people with positive results would actually have Covid-19 infection, so the probability that the pupil genuinely has the infection is over 80%.

However, the picture becomes less favourable as the infection becomes rarer: if only 1 in 1,000 pupils were infected (Scenario B) we would detect 500 cases but get 999 false positives. The ratio of true to false positives is now unfavourable – one true result for every two false results.

If only 1 in 10,000 had the infection (Scenario C), there would be one true result for every 20 false results. Why would anybody consent to a test where the chances that a positive result is wrong are so much higher than the chances that it is right? This isn’t the fault of the test; it is a consequence of applying it in a low prevalence setting. Using any test, even one with an incredibly high specificity, will lead to more false than true positive results when the disease becomes rare.
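For readers who want to check the arithmetic, here is a minimal sketch of the three scenarios, assuming the figures implied above: 50% sensitivity and a 1-in-1,000 false positive rate (99.9% specificity).

```python
# The three scenarios above, assuming the figures they imply:
# 50% sensitivity and a 1-in-1,000 false positive rate (99.9% specificity).
SENSITIVITY = 0.50
SPECIFICITY = 0.999
TESTED = 1_000_000

for label, prevalence in [("A", 1 / 100), ("B", 1 / 1_000), ("C", 1 / 10_000)]:
    infected = TESTED * prevalence
    true_positives = infected * SENSITIVITY
    false_positives = (TESTED - infected) * (1 - SPECIFICITY)
    ppv = true_positives / (true_positives + false_positives)
    print(f"Scenario {label}: {true_positives:,.0f} true vs {false_positives:,.0f} "
          f"false positives; a positive is genuine {ppv:.0%} of the time")
```

Running this reproduces the figures above: 84%, 33% and 5% for Scenarios A, B and C respectively.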

Test-and-trace have been publishing weekly figures for lateral flow tests done in schools. The prevalence of positive results has dropped — at the beginning of the term it was 0.3%, but last week’s data (up to the 24th Feb) was down to 0.07% (1 in 1500) based on observing only 189 positive results from 288,958 tests. This incredibly low positivity rate is a concern. Clearly it indicates that cases must be rare, but it also raises concerns that the test may be missing more cases in children than it has in adults. It seems very likely we are in the zone where false positives may considerably outnumber true positives.
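As a quick check on those published figures:

```python
# Reported school LFT figures for the week to 24 Feb (from the text).
positives, tests = 189, 288_958
positivity = positives / tests
print(f"Positivity: {positivity:.2%}, about 1 in {1 / positivity:,.0f}")
# Even if every one of the 189 positives were genuine, detected infection
# would be about 1 in 1,500 -- between Scenarios B and C above. If some
# are false positives, true positives are rarer still.
```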

The solution is simple: children and teachers who get positive lateral flow test results should all get confirmatory PCR tests that determine whether or not they need to continue to isolate — including if the test was done at school.

If the specificity of the PCR test is the same as that of the LFT (a conservative assumption), an uninfected pupil would have to get a false positive on both tests, a 1-in-1,000 chance twice over, so only about one in a million would now get a false positive result, whether prevalence is 1 in 100 or 1 in 10,000. This all but removes the false positive problem.
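The arithmetic behind that figure, as a sketch that assumes the two tests’ errors occur independently:

```python
# Chance that an uninfected pupil falsely tests positive on both the LFT and
# a confirmatory PCR, assuming each has a 1-in-1,000 false positive rate and
# that their errors occur independently (an assumption, not a given).
lft_false_positive_rate = 1 / 1_000
pcr_false_positive_rate = 1 / 1_000  # conservative: PCR is likely better
both_wrong = lft_false_positive_rate * pcr_false_positive_rate
print(f"False positive after confirmation: 1 in {1 / both_wrong:,.0f}")  # 1 in 1,000,000
```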

But would this measure risk more genuinely infectious pupils falling through the net? Recent data suggest that PCR may miss 10% of the more infectious cases. Based on this figure, and assuming a prevalence of infection of 1 in 1,000, confirming with PCR would spare 20 families and bubbles from being wrongly isolated for 10 days for every additional Covid infection missed. (If confirmation were done using two PCR tests rather than one, the missed cases would fall to 1 for every 200 false positives avoided.)
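A sketch of that trade-off, using the same illustrative figures as before (1 in 1,000 prevalence, 50% LFT sensitivity, 1-in-1,000 false positive rates, PCR missing 10% of infectious cases) and assuming a pupil stays in isolation if any confirmatory PCR is positive:

```python
# Trade-off of PCR confirmation, per million pupils tested.
# Decision rule assumed: a pupil keeps isolating only if at least one
# confirmatory PCR is positive.
TESTED = 1_000_000
prevalence = 1 / 1_000
lft_sensitivity = 0.50
false_positive_rate = 1 / 1_000
pcr_miss_rate = 0.10  # PCR may miss ~10% of the more infectious cases

lft_true_pos = TESTED * prevalence * lft_sensitivity             # about 500
lft_false_pos = TESTED * (1 - prevalence) * false_positive_rate  # about 999

for n_pcr in (1, 2):
    missed = lft_true_pos * pcr_miss_rate ** n_pcr                 # every PCR misses a true case
    cleared = lft_false_pos * (1 - false_positive_rate) ** n_pcr   # every PCR correctly negative
    print(f"{n_pcr} PCR test(s): ~{cleared:.0f} bubbles spared per "
          f"{missed:.0f} infections missed (about {cleared / missed:.0f} to 1)")
```

This gives roughly 20 bubbles spared per missed infection with one confirmatory PCR, and roughly 200 with two.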

This is a policy judgement, but without proper confirmatory testing the Government is asking us to consent to children being tested in a way where (a) negative tests probably only about halve the chances of having Covid-19, so do not effectively make schools safe, and (b) positive test results carry somewhere between a 70% and 95% chance of families and bubbles being unnecessarily put into isolation for 10 days. It is not an attractive proposal.
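Both figures in (a) and (b) follow from the same Bayesian arithmetic; a sketch, assuming roughly 50% sensitivity, 99.9% specificity, and prevalence between 1 in 1,500 and 1 in 10,000:

```python
# Checking claims (a) and (b) above, under assumed figures: roughly 50%
# sensitivity, 99.9% specificity, prevalence 1 in 1,500 to 1 in 10,000.
sens, spec = 0.50, 0.999

def p_infected_given_negative(prev):
    """Bayes: chance a pupil with a negative test is infected anyway."""
    return prev * (1 - sens) / (prev * (1 - sens) + (1 - prev) * spec)

def p_false_given_positive(prev):
    """Bayes: chance a positive result is a false positive."""
    return (1 - prev) * (1 - spec) / ((1 - prev) * (1 - spec) + prev * sens)

for prev in (1 / 1_500, 1 / 10_000):
    print(f"Prevalence 1 in {1 / prev:,.0f}: a negative cuts risk from "
          f"{prev:.3%} to {p_infected_given_negative(prev):.3%} (about half); "
          f"a positive is false {p_false_given_positive(prev):.0%} of the time")
```

At the upper end of that prevalence range a positive is false about 75% of the time; at the lower end, about 95%.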

Jon Deeks is Professor of Biostatistics in the Institute of Applied Health Research at the University of Birmingham.

Jon leads the international Cochrane COVID-19 Diagnostic Test Accuracy Reviews team summarising the evidence of the accuracy of tests for Covid-19; he is a member of the Royal Statistical Society (RSS) Covid-19 taskforce steering group; co-chair of the RSS Diagnostic Test Advisory Group; a consultant adviser to the WHO Essential Diagnostic List; and Chief Statistical Advisor to the BMJ.

