War is competition. It is about beating the other side. It is about arms races, and domination, and victory. This competition takes many forms – from the individual soldier trying to kill their opposite number, to the politician and general trying to out-strategise theirs. It is about logistics teams and factory workers rushing to get materiel to where it is needed. And it is about scientists and experts seeking to understand and harness future technologies to give their side the edge.
When you consider war like this, the recent pledge not to develop lethal artificial intelligence (AI) weapons systems, signed by 236 organisations and 3,010 individuals, including Elon Musk and the founders of Google DeepMind, seems odd. Or rather, it seems pointless and self-aggrandising. Others – individuals, organisations, and countries – will continue to develop such weapons. That these signatories choose not to build systems capable of “selecting and engaging targets without human intervention”, because such systems pose serious moral and pragmatic problems, does not mean that others won’t.
AI will change warfare to such a degree that war will be unrecognisable from its current (and historical) form. AI will almost certainly be the single most important technological innovation in the field of conflict, altering not just the methods with which we fight, but also the very foundational dynamics of conflict.
Here is what I mean: not only will AI enable a country to develop swarms of interlinked micro-drones – a relatively cheap and highly effective way to defeat, for example, an enemy aircraft carrier battle group – but when that AI-controlled swarm faces an opponent that also fields an AI-controlled offensive system, we have no idea how one system would defeat the other in combat. The unknowns in the sphere of AI warfare are huge.
These technological uncertainties come as the world enters an era of heightened geopolitical tension: India-China; Russia-Europe; and, further out, Russia-China. But it is between the United States and China that these tensions will most obviously play out. And it is in this rivalry that AI in warfare will come to fruition, much like nuclear weapons in the Second World War, or the tank in the First.
China knows this, and is preparing accordingly: a July 2017 State Council document makes it official government policy for China to become the world leader in AI by 2030. Of course, policy and outcomes are different things, but as an authoritarian state whose technology companies are closely linked to the government, China is well placed to make it a reality.
All private companies in China, for instance, are obliged to have their own Party Committees, and the three biggest Chinese tech firms – Baidu, Alibaba and Tencent – all run joint R&D labs with the government. The relationship is highly lopsided: the Chinese government recently forced an apology, and a crackdown on ‘undesirable’ content, from the news and video aggregation platform Toutiao.
China also has the resources – money, people, and data – to make it happen. This is not just state funds invested in tech firms, but also the tax breaks and other advantages of special economic zones like the Xiongan New Area, built to showcase President Xi’s vision of a state-led digital city. It is the thousands of digital engineers trained each year who work ‘996’: 9am to 9pm, 6 days a week.
Most importantly, it is data: China is the largest homogenised, data-rich economy in the world. Mobile payments, for instance, run at fifty times the level of the US. Every one of those transactions generates data, and data is what AI systems ‘train’ on to drive their development. China’s data advantage is huge, and Western governments are only beginning to understand it.
Where does this leave countries like the US, whose independent tech sector is barely controlled by the government? Or worse, where tech leaders with strong libertarian ideals openly advocate technologies like blockchain precisely to reduce, permanently, the power that governments hold over populations? Where does it leave countries like the UK, which have crippled their defence budgets by buying aircraft carriers of dubious military value? And what of countries where the leading lights of AI sign pledges stating that they will not work on military AI?
Perhaps a better way of framing the question is to ask where we eventually want to end up. I would argue that the goal should be a global arms control agreement on the use of AI for military purposes. Indeed, that is what Musk and his fellow pledgers called for, asking governments “to create a future with strong international norms, regulations and laws against lethal autonomous weapons.”
The problem, however, is that a brief look at other arms control regimes – covering chemical, biological and nuclear weapons, and missile technology – suggests that pledges alone are unlikely to get us there. History tells us that such agreements come about when all sides feel they have too much to lose by developing or deploying a particular technology – when all sides, in other words, find themselves in a potentially mutually hurting stalemate. The quickest route to a global arms control agreement on the military use of AI would therefore be for the United States and its allies to develop world-class systems: this, more than anything else, would force a global consensus to emerge on their highly destructive use. The idealistic pledge stands in the way of that.