It was an outrage many will remember. On December 2, 2015, Syed Rizwan Farook and Tashfeen Malik burst into a Christmas party held by Farook’s employer in San Bernardino, California. Pledging support for ISIS, they began shooting immediately, leaving 14 people dead and injuring two dozen others.
It was the deadliest mass shooting in the US for years, yet neither Farook nor Malik had a criminal record, and neither was on any terrorist watch list. In the months that followed, the FBI began to grapple with what it saw as a troubling new phenomenon: online self-radicalisation. Painstakingly building a portrait of Farook and Malik’s lives, investigators tried to learn why both had decided to die for a terrorist group whose members neither had ever met. Farook’s Apple iPhone became central to that search. The FBI asked Apple to unlock it, and Apple refused. A court order compelled Apple to open the phone, but Apple said it would fight the order in court.
There was nothing unusual about this conflict. The FBI’s overwhelming mandate was to learn as much as they could about Farook to protect public safety. Apple’s, especially in the wake of the Snowden disclosures, was to assure their users that their data was secure – even those who turned out to be terrorists. And both sides saw the argument as far bigger than this single case. Both knew that it would set a new precedent for how security should be balanced against privacy; and both knew that the world was watching, with almost exactly half of the public supporting each position.
The debate was old; what was new was the way it was resolved. Just months before the attack, Apple had released an operating system designed so that Apple itself could not decrypt the device: technology that deliberately made its maker unable to comply with the court order it now faced, or with any of the others it anticipated would emerge in the years ahead. The court then ruled that Apple had to write new software to crack the phone, but as Apple prepared to fight that ruling too, the FBI promptly withdrew its request. Before the legal process could be concluded or enforced, the FBI had paid professional hackers to find an undisclosed flaw in the software, and broken the phone open itself.
For both Apple and the FBI, the battle in the courts had been a sideshow. That delicate, constantly shifting trade-off between privacy and security wasn’t decided by the finer principles of law, courtly argument and judicial interpretation. The decider was the technology itself. Apple had built technology to make the court order impossible to comply with. And the FBI had bought technology that meant it didn’t need Apple to carry it out.
The dispute was about something bigger than this single case. It showed how the tensions, trade-offs and conflicts that we’ve always had are being resolved in an entirely new way.
Technologists have long known that the key questions of the digital age wouldn’t be decided at the ballot box or in the courts. Changing the world, they realised, didn’t mean winning the argument. It meant building new technology to decide the outcome. Technology, they knew, could make things possible or impossible, easy or difficult. You didn’t need to convince people to change, you simply needed to change the options that people had in the first place.
For the disputes that matter most, public debate and legal argument have, just like in San Bernardino, become something of an undercard billing. The main fight is over building technology to simply render the whole moral debate meaningless. In the red corner, technology is being built to usher in one kind of reality, and in the blue corner, technology is being built to stop them. Look almost anywhere today, and you can see the same thing happening. Technologies ‘duel’ with each other to decide what your life – and the world – is like.
Bitcoin is the most famous of this kind of political technology. From the very beginning its aim was clear: to transform a status quo of corporations and states into one of decentralised networks. If you could build technology that created things like currencies without central banks, its creators reasoned, you could simply make those central banks unnecessary.
Changing the structure of technology was really about changing the structure of society. Decentralised technology would create a similarly decentralised society, one without the concentrations of data, control and yes, power, that centralised systems inevitably produce. Bitcoin was only one of a whole universe of different technologies all trying to achieve this.
Decentralised money, decentralised social media platforms, decentralised email, decentralised shopping: all were aimed at undermining the states, hierarchies and corporations that, as their builders saw it, dominated the status quo. They saw the digital revolution as an opportunity to break apart old concentrations of power, not to create new ones. The old world pitted against the new. Old concentrations of power versus new ways of dispersing it.
Your privacy, too, is being decided by duelling technology. Apple announces new features in its browser to prevent Facebook and other data companies from tracking users across the web, while ad-tech companies quietly build ingenious, innovative ways of ‘fingerprinting’ you as you travel around it. These technologies duel largely out of sight – and it is they, not the cumbersome, slow-moving body of privacy law, that decide in practice what privacy you actually have online.
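The core trick behind fingerprinting is simple to sketch. A tracker combines many small, individually innocuous browser attributes into one identifier that is stable across visits – no cookie required. Here is a minimal, illustrative sketch in Python; the attribute names are invented for the example, and real trackers draw on dozens of signals, from canvas rendering quirks to installed fonts:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    """Combine browser attributes into a single stable identifier."""
    # Sort the keys so the same set of attributes always produces
    # the same string, regardless of the order they were collected in.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    # Hash the combined string; the digest becomes the visitor's ID.
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical attributes a page can read without storing anything
# on the user's machine.
visit = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X)",
    "screen": "2560x1440",
    "timezone": "Europe/London",
    "fonts": "Arial,Helvetica,Times",
}
print(fingerprint(visit))
```

The same browser presenting the same attributes yields the same identifier on every visit, which is why anti-tracking features work by making those attributes less distinctive or less stable.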
The most obvious arena for this technological combat is online security. Go to DEF CON, a huge gathering of hackers and computer security specialists, and you will witness the thousands of technological struggles happening every day, side-by-side, that determine your online safety. Hackers – largely with the same skills – work as offence and defence: inventing entirely new ways to take control of your digital life, and desperately trying to find ways to plug the gaps. What determines your safety online is the outcome of this struggle: what is technologically possible, not what is legal.
Worryingly, these struggles are usually entirely invisible to us. You don’t see how spammers, marketers and even states constantly build bots and fake accounts to avoid detection, and people within Facebook and Twitter build technology to try to catch them. Likewise you don’t see the struggle over who controls the information you consume – behind the scenes some technologists work to game and manipulate search engine results, while Google and others desperately try to stop them.
The grand moral questions of our age are not being decided by public debate, the law, norms, principles or the conventional tumble of politics. They are being decided by a new dynamic, something I’ll call ‘technological achievability’. A clunky phrase, granted, but I think an important one. Is it easier and cheaper for cyber-criminals to find vulnerabilities in your computer, or is it easier for cyber-security specialists to keep you safe? Is it easier for propagandists to build fake accounts, or for the tech giants to detect and remove them? Which outcome, in other words, is it easier, cheaper or more convenient for technology to be built to achieve?
Of the many shifts in power I’ve spent years trying to understand, this is perhaps the most important. The technologists on the front lines of these struggles now have enormous moral agency in their hands; those of us who are not blockchain engineers or cryptography specialists are shut out. However imperfect, the messy web of public debate, norms, law and politics was at least something we could all be involved in. Now we are trapped in the crossfire of the technologies duelling around us. More like spectators than we ever were, as our lives are shaped and moulded in this new way. More disempowered, I think, than we have ever been in the decisions that really matter, that really decide what our lives will be like.
And so, in front of us, a new struggle lies ahead. In an age of blindingly fast technological progress, we need to make morals matter again. How to do that, I don’t know. But the first step is to realise how dreadfully absent they are from the world as it is today.
Carl Miller’s The Death of the Gods: The New Global Power Grab is out now.