October 26, 2020

Here’s an ethical dilemma for you: You have tested positive for Covid-19, so obviously you’re going to stay home for a fortnight to avoid infecting others. But what do you do about people you spent time with recently? Do you tell them to go get a test in case you infected them? Or do you give their details to NHS Test and Trace, who will contact them and tell them somebody (they won’t say who) may have infected them, and they should stay home for a fortnight?

If they don’t, they may be fined £1,000 — like the woman in Cumbria who got a call from Test and Trace telling her to self-isolate, and within half an hour was in a cab to a bar. Test and Trace did their job, traced her movements, and gave her details to the police, who issued her with a fine.

Now, you may think (as I do) that this whole pandemic would go a lot easier if people who were actually infectious stopped going out to infect people. But you may also be squeamish about lining up your mates for police attention and a fine, especially since the police were given access to Test and Trace data on people who should be self-isolating.

It was a move greeted with much face-palming by Public Health teams and doctors, many of whom have dealt with these situations before (apologies to anyone who’s getting flashbacks to youthful encounters with Sexual Health clinics, by the way) and know that confidentiality and trust are very important if we want these systems to work.

We often think of ethical issues around data as being a new problem arising from digital technology, but who gets to collect and use data, and the consequences, are problems as old as writing. It’s not about technology, it’s about trust.

In fact, this might be the time to consider downloading the NHS COVID App, if you haven’t already. You don’t need to worry about who gets that information, thanks to the decentralised system that keeps all your data on your own phone unless you decide to share it.

That system wasn’t the first choice of NHS Test and Trace (and the Department of Health and Social Care). They would have preferred a centralised design that gave them more data on who was potentially passing the virus to whom, and where. However, they may find more people willing to use the anonymised technology than to be open with a human-run system that could tell the police their test results.
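For the curious, the decentralised idea can be sketched in a few lines of code. This is a simplified illustration of the general approach, not the actual protocol the NHS app uses: each phone broadcasts rotating random identifiers, remembers the identifiers it hears nearby, and does all matching on the device itself, so no central server ever learns who met whom.

```python
import secrets

# Simplified sketch of decentralised exposure notification.
# Not the real Google/Apple protocol: just the core idea that
# identities and contact lists never leave the phone.

class Phone:
    def __init__(self):
        self.my_ids = []        # random IDs this phone has broadcast
        self.heard_ids = set()  # IDs heard from nearby phones

    def broadcast_id(self):
        # A fresh random identifier; it encodes nothing about the owner.
        rid = secrets.token_hex(16)
        self.my_ids.append(rid)
        return rid

    def hear(self, rid):
        self.heard_ids.add(rid)

    def report_positive(self):
        # On a positive test, only these anonymous IDs are shared,
        # and only if the user chooses to share them.
        return list(self.my_ids)

    def check_exposure(self, published_ids):
        # Matching happens locally, on the user's own phone.
        return any(rid in self.heard_ids for rid in published_ids)

alice, bob, carol = Phone(), Phone(), Phone()
bob.hear(alice.broadcast_id())       # Bob was near Alice
published = alice.report_positive()  # Alice tests positive and shares her IDs
print(bob.check_exposure(published))    # True: Bob gets an alert
print(carol.check_exposure(published))  # False: Carol never met Alice
```

Note the design choice: the server only ever sees anonymous random tokens from people who tested positive, which is precisely why it cannot tell the police, or anyone else, who has been in contact with whom.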

This is perhaps an unexpected example of Covid-19 accelerating the adoption of technology in healthcare, but it’s not the only one. Digital technology has been quietly gaining importance in healthcare for years.

Medical research depends increasingly on access to large quantities of data, on combining diverse data from different sources, and on using programmes loosely described as AI (artificial intelligence) or Machine Learning to turn the data into useful information. To supply researchers with this information, engineers are ingeniously putting sensors and transmitters into forms that can unobtrusively monitor patients as they go about their lives. This also has short-term benefits for the patients, who get continuous, real-time attention, albeit from software.

You can now wear sensors that monitor not just heart rate and temperature, but blood pressure, blood sugar and blood oxygen levels. Commercial devices like Fitbits and Apple Watches collect health data and (with your consent) share it with medical professionals. The US Defense Department is using such off-the-shelf devices to monitor personnel for physiological changes that could be early warning signs of Covid-19 infection, using 165 biomarkers to alert them before symptoms emerge.

In fact, you don’t even have to wear a device. Researchers at the Harbin Institute of Technology in China can print sensor circuits directly onto your skin, or you could swallow a pill-sized sensor that will draw power from your stomach acids to monitor temperature, heart rhythms or the chemistry of your digestive system.

Because big data techniques can capture weak signals from large quantities of data, early warning signs of common diseases may be detectable in the patterns of everyday life. Researchers into Alzheimer’s and Parkinson’s disease are looking for clues in gait, typing behaviour, speech patterns and eye movements that could be picked up by the devices we already use, before any symptoms emerge.

Even population-scale patterns of internet use are being harnessed to predict the pandemic’s movements. Mayo Clinic researchers found that frequency of searches like “Coronavirus symptoms” or “loss of smell” on Google preceded rising cases in an area by around a fortnight.
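The underlying calculation is simple enough to sketch. The numbers below are synthetic, purely for illustration, and the code is a minimal version of the general idea rather than the Mayo Clinic’s actual method: slide the search-interest series against the case-count series and find the lag at which the two line up best.

```python
# Toy illustration of lead-lag analysis between search interest
# and case counts. All data here is made up.

def correlation(xs, ys):
    # Pearson correlation of two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def best_lag(searches, cases, max_lag=21):
    # For each candidate lag d (in days), compare searches[t]
    # with cases[t + d] and keep the lag that correlates best.
    scores = {}
    for d in range(max_lag + 1):
        xs = searches[: len(searches) - d] if d else searches
        ys = cases[d:]
        scores[d] = correlation(xs, ys)
    return max(scores, key=scores.get)

# Synthetic example: case counts echo search interest 14 days later.
searches = [10, 12, 15, 20, 30, 45, 60, 70, 65, 55] + [50] * 20
cases = [5] * 14 + [s * 3 for s in searches[:16]]
print(best_lag(searches, cases))  # expect 14 with this synthetic data
```

Real analyses are messier, of course: search behaviour is noisy, regional, and shaped by news coverage, which is why this kind of signal is a supplement to testing data rather than a replacement for it.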

This is the kind of pattern-spotting at which AI and Machine Learning excel, outperforming humans at certain specific tasks. Algorithms analysing brain scans and X-rays often spot abnormalities missed by humans; a Stanford University-designed algorithm performed as well as dermatologists in sorting skin lesions into benign and potentially cancerous.

Several hospital trusts were already experimenting with an AI-based triage system to reduce pressure on their Accident and Emergency (A&E) departments in 2019. Health tech company Babylon’s Ask A&E service uses a series of questions to sort patients into those who really need to go to hospital, those who could instead have a remote consultation, and those who just need over-the-counter remedies. Now Ask A&E is being used to reach patients without bringing them into hospital, and to identify Covid-19 cases via a specific triage route.

Babylon’s symptom-checking algorithm has been criticised for inaccuracy in the past, with fears that patients might be offered wrong diagnoses and fail to seek life-saving treatment. The current program relies on association, identifying which symptoms are most often correlated with which ailments. Now Babylon are developing a counterfactual diagnostic algorithm, designed to go beyond correlation and infer likely causes. Published research claims the new algorithm outperforms real doctors on average.

People who enter their data into Babylon’s system can also opt in for future research programmes, in partnership with universities or NHS researchers. The NHS has long recognised what a valuable asset it has, in the form of comprehensive patient records for an entire population — valuable to researchers, and hence valuable in financial terms.

But patients feel a sense of ownership over their own data, and have some doubts about donating it for the use of commercial companies. Recent research carried out by the Wellcome Trust found that two-thirds of people were unaware that the NHS gives other organisations access to their data. Those surveyed felt strongly that use of their data should result in better health outcomes, though they recognised that might include administrative uses, and even revenue for the NHS.

At the moment, as we hope that public-private partnerships will bring a Covid-19 vaccine to end the pandemic, we may feel exceptionally willing to share our data for research. If thousands are willing to volunteer their bodies for clinical trials, why should we be squeamish about our anonymised data? And, when surveyed, we are very willing to altruistically donate our data for medical research, even before Covid-19 grabbed our attention.

However, public willingness to trust research organisations with our data is not unconditional. It depends strongly on how competent we think they are, and on the purposes for which they use it.

Like me, over 4 million people use the COVID Symptom Study app, the creation of private company ZOE and King’s College London. It’s completely transparent about using our data for research, and sharing it with others. By gathering so much data on symptoms, test results, and other relevant details, the scientists behind it were able to refine their understanding of the disease, and spot higher-risk groups and regional trends in infection.

But health data can be used for other purposes: insurance companies, for example, could use it to predict what kinds of people are more likely to make claims. Public health zealots might find correlations between health outcomes and dietary preferences, and then go on to campaign against your favourite fast-food outlets.

Which brings us back to the potential consequences of sharing your health information with the Test and Trace system. On the positive side: preventing the further spread of coronavirus. On the negative: losing work, money and possibly your job, or, if you break self-isolation, unwanted police attention and a large fine.

Lacking confidence in the competence of the system (too many rows for an Excel spreadsheet, anyone?) or in the Government’s willingness to support you (almost no chance you’ll get enough money to cover a fortnight’s earnings, even if you keep your job), it’s understandable that many people will choose not to participate in the system. Understandable and, on a society-wide scale, disastrous.

This Government seems obsessed with building large-scale technological solutions to social and medical problems. There is nothing wrong with harnessing technology, and the UK certainly has some catching up to do in many fields. But even the most brilliant and efficient system can’t function without trust, and you can’t automate that, any more than you can program hope or mechanise social solidarity. Matt Hancock and his team need to stop building databases, and start rebuilding public trust.