Is Apple defending our privacy or enabling its erosion? (Sheldon Cooper/SOPA Images/LightRocket via Getty Images)


January 31, 2022

As soon as I turn on my phone, it becomes a node in a network, giving me access to the entire world. But it also gives Apple access to information about me and my behaviour; I become another source in their vast banks of data.

So, as Stephanie Hare rightly points out in Technology Is Not Neutral, it’s never just an object. Technology is as much social as scientific, as much economics as engineering. There’s little point in Silicon Valley companies hiring an ethicist to decide a product’s value — its effects go far beyond the shiny thing that comes out of the box.

Apple, for instance, might not have put AirTags on the market in their present form if they had paused to wonder whether a coin-sized tracking device — ideal for keeping track of keys, luggage or children’s toys — could also be ideal for stalkers to electronically track an unsuspecting person. Now they are forced to come up with solutions, most of which rely on the tracked person also having a smartphone.

But, precisely because technology is a social phenomenon, embedding ethics, and ethicists, in its development will never remedy its dangers. Take Facial Recognition Technology, for example. It’s an issue Hare has been actively — and critically — researching for some time. Like other biometric technology, it has been introduced in practice with remarkably little scrutiny, regulation, or debate.

Today, with ubiquitous CCTV cameras and widespread availability of facial recognition software, British police forces are using it to pick out individuals in public spaces, with AI programmes that match faces seen on camera to faces in databases. But private spaces like shops, museums, casinos and arts centres were using Facial Recognition Technology first. Some cities and states in the US have moved to forbid or control its use, but in the UK there is no legal framework to prevent somebody from matching your face to your identity and to other data about your life.
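In outline, such systems typically work by converting each face image into a numerical signature, an "embedding", and then looking for the stored signature that lies closest to it. The sketch below is purely illustrative of that matching step, with invented numbers and an arbitrary threshold; it is not any police force's or vendor's actual system.

```python
# Purely illustrative: matching a face "embedding" against a watchlist by distance.
# Real systems compute these vectors with neural networks; the values here are invented.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

watchlist = {
    "person_A": [0.12, 0.85, 0.33],   # embedding stored in a database
    "person_B": [0.90, 0.10, 0.45],
}

camera_face = [0.11, 0.83, 0.35]      # embedding computed from a CCTV frame

# Report the closest database entry, if it is close enough to count as a match
best = min(watchlist, key=lambda name: distance(watchlist[name], camera_face))
if distance(watchlist[best], camera_face) < 0.2:   # threshold chosen arbitrarily here
    print("possible match:", best)
```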

How did we end up here, unable to reliably go about our lives anonymously in public spaces? Just as technology is inseparable from wider social structures, “tech ethics” can’t be seen as merely part of the technology development process. They make sense only in the context of wider social and moral frameworks.

It’s entirely possible that the designers of the Facial Recognition systems looked at the millions of photographs that people upload to social media every day, tagging themselves, their friends and families, and couldn’t see anything wrong with gathering all that information into one handy index. True, it could result in specific harms, like stalkers finding their victims, or innocent people being unjustly targeted by the police. But those could be relegated to the “unforeseen consequences” of a generally desirable change.

Yet the indifference of governments, businesses and many individuals to privacy is a much deeper problem than a few irresponsible tech bros. As the recent pandemic years have shown, the desire to be free from scrutiny unless there’s a good reason to be scrutinised is widely seen as, at best, eccentric and, at worst, automatic grounds for suspicion.

We simply can’t articulate why a private life is valuable. We have no sense of ourselves as autonomous beings, persons who need a space in which to reflect, to share thoughts with a few others, before venturing into public space with words and actions that we feel ready to defend.

Few question the desirability of exposing private messages that fall short of today’s acceptable attitudes. In-jokes are regularly taken out of context and used to condemn the joker. If a wrong thought has been expressed, even privately to friends or associates, it’s evidence of bad character and subject to public judgement. The use of Implicit Bias tests by employers shows that even subconscious thoughts are not safe from dissection and testing for impurity.

And since Covid struck, the ethical implications of our relationship with technology have only become more profound. Consider the use of "contact tracing" apps which, as Hare notes, could be more accurately described as "exposure notification" apps: because they are designed to be anonymous, they don’t actually facilitate contact tracing.

We owe that design element to Google and Apple, who built into their operating systems the capacity for phones to exchange tokens with nearby devices over Bluetooth, but only anonymously.
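The mechanism matters, so a rough sketch may help. What follows is a simplified, purely illustrative model of that kind of anonymous exchange, loosely inspired by the Google/Apple approach: phones broadcast short-lived random tokens, remember the tokens they hear nearby, and later match them on the handset against tokens published by people who test positive. The class and method names are invented for this sketch; they are not the companies' actual API.

```python
# Illustrative sketch of anonymous exposure notification; not the real Google/Apple API.
import secrets

class Phone:
    def __init__(self):
        self.current_token = secrets.token_hex(16)  # random, carries no identity
        self.tokens_seen = set()                    # tokens heard from nearby phones
        self.tokens_used = []                       # tokens this phone has broadcast

    def rotate_token(self):
        # Tokens change frequently, so an observer cannot follow one device over time
        self.tokens_used.append(self.current_token)
        self.current_token = secrets.token_hex(16)

    def near(self, other):
        # Simulates a Bluetooth encounter: each phone records the other's token,
        # never a name, number or location
        self.tokens_seen.add(other.current_token)
        other.tokens_seen.add(self.current_token)

    def check_exposure(self, published_positive_tokens):
        # A health authority publishes tokens volunteered by people who test positive;
        # the matching happens on the handset itself
        return bool(self.tokens_seen & set(published_positive_tokens))

alice, bob = Phone(), Phone()
alice.near(bob)                                    # the two phones spend time close together
bob.rotate_token()
positives = bob.tokens_used + [bob.current_token]  # Bob later tests positive and shares his tokens
print(alice.check_exposure(positives))             # True: Alice is notified, anonymously
```

The crucial design choice is that the matching happens on the handset: no central database ever learns who was near whom.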

Many governments, including those of the UK and France, didn’t want this anonymous system. They wanted to know the patterns of who was infecting whom, and they wanted the ability to contact individuals to make sure they were isolating when they should. Arguably, public health might justify that kind of intrusion. But the tech giants didn’t want to gift governments information about who spends time within a few metres of whom.

Perhaps they were right — but either way, we never got to have that argument. Governments largely evaded public debate about the measures they used against Covid, and tech companies decided what capabilities to build into their products. The successes and failures of the apps were only partly technical; they were largely down to the wider strengths and failings of Covid policies. It is, after all, little use telling somebody they may have been infected if they can’t get a speedy test result, or if they can’t afford to self-isolate without sick pay. Again, no amount of ethical tech development would have changed that.

As the pandemic moved on, the attention of the UK Government moved from "Contact Tracing Apps" that weren’t, to "Vaccine Passports" that officially weren’t, but really were. Despite almost universal rejection of the idea by health, cyber-security and equality organisations, and repeated insistence that it was not considering vaccine passports, the Government funded eight pilot schemes for vaccine passports and gave the green light for private organisations to demand them.

Ethicists went public with their many warnings: vaccine passports would be socially divisive, discriminatory, needlessly intrusive, and a perverse incentive to get infected. But none of this had any impact on government plans: principles like privacy and social solidarity found no resonance in opportunistic policymaking. Neither did the Government have the courage to argue that these undesirable measures were proportionate and necessary, and trust the public to accept exceptional infringements in exceptional times.

This is the other limitation of technology ethics: they are no match for power. Britain is currently an outlier among European and other governments, many of which are imposing highly divisive and restrictive vaccine passport regimes, and using physical force against dissenters.

And yet masses of people do seem prepared to accept restrictions on their lives that would have been unthinkable a couple of years ago. Why are we so willing to accept that a condition of participation in public life is an app that affirms our medical status? Why have we been so willing to accept the repeated reduction of society to one household in one space, connected mainly through screens to the wider world?

One answer is that technology has ridden to the rescue. Without the constant ability to connect to that digital network — a network of other humans, as well as data — it would simply have been impossible for half the population to just stay home for months on end. Work would have been done in offices or not at all. Education would have been unable to stagger on, even in its unequal and truncated form. The severance of social connections would not have been a reduction to two-dimensional faces on screens, but near-complete isolation.

But that is only half an answer. A retreat from shared public space, alongside the penetration of private space by always-on connection, was far advanced before governments told us to Stay Home. When bedrooms and kitchen tables became classrooms and desks, was it so great a change from answering work emails from the sofa, or chatting about homework on screens across many teenage bedrooms?

And that earlier change, accelerated but not precipitated by the pandemic, was as much social as technological. Through technology, we can be apart, but never completely alone. Interacting through screens, we are insulated by space and — when we’re exchanging messages — by time. Public space is increasingly digital, so we need never be fully there, but with a smartphone there is no fully private space either.

This shift away from shared social lives began in the mid-20th century. We lead more solitary lives than our grandparents: we start families later and belong to fewer social organisations. Most of all, we lack a shared framework of morality and ideas about a shared future. When difficult times hit us, we lack a foundation on which to base our judgments. This is true of us as individuals, but also as a society. No wonder our governments cast about for technological solutions to moral questions.

Hare quotes John von Neumann in 1954, testifying about the development of the atomic bomb:

“We were all little children with respect to the situation which had developed… None of us had been educated or conditioned to exist in this situation, and we had to make our rationalisation and our code of conduct as we went along.”

Today, it’s hard to find anyone who doesn’t act — and probably feel — like a young child when confronted with adult responsibilities.

Part of the appeal of technologies like AI is the fantasy that a machine can take the role of wise parent, immune to the emotion and unpredictability of mere humans. But this tells us less about the real capabilities of AI and more about our disillusionment with ourselves.

The urge to fix Covid, or other social problems, with technology springs from this lack of trust in other people. So does the cavalier disregard for privacy as an expression of moral autonomy. Technology ethics can’t save us, any more than technology can. Even during a pandemic, how we regard one another is the fundamental question at the root of ethics. So we do need to treat technology as just a tool, after all. Otherwise we risk being made its instruments in a world without morals.


Timandra Harkness presents the BBC Radio 4 series FutureProofing and How To Disagree. Her book, Big Data: Does Size Matter? is published by Bloomsbury Sigma.
