May 4, 2023 - 1:15pm

Geoffrey Hinton has been described as the “Godfather of AI”. And so, when he warned the world about the dangers of artificial intelligence this week, the world listened. 

His interview with Cade Metz for the New York Times is worth reading in full, but the key passage reveals Hinton’s surprise at the speed of recent developments:

The idea that this stuff could actually get smarter than people — a few people believed that… But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.
- Geoffrey Hinton, NYT

Hinton also reiterated his opposition to the use of AI on the battlefield. However, that horse has already bolted. For instance, a few days before the cognitive psychologist voiced his concerns, Palantir Technologies announced its Artificial Intelligence Platform for Defence — which applies the latest AI capabilities to drone warfare. 

These include the rapidly advancing Large Language Models that have prompted leading tech experts to plead for a six-month pause on AI experimentation. The argument is that we need a breathing space to tame the technology before its evolution overwhelms us. In this regard, the top priority is the alignment problem — i.e. ensuring that AI systems do what we want them to do, ideally within a sound ethical framework.

I’ve already written about the obstacles to an AI moratorium — not least, getting the Chinese to agree to one. But there’s something else we need to be honest about, which is that it won’t be a committee of the great and the good that does the definitive work on alignment. Rather, the pioneers will be the world’s militaries and their hi-tech suppliers like Palantir. 

Don’t forget that the real challenge of alignment doesn’t concern the AI systems we’ve already got, but the prospect of AGI, or artificial general intelligence. Vanilla AI is limited in its abilities but, in theory, AGI could be applied — indeed, could apply itself — to any cognitive feat that a human being is capable of. Given this much wider scope of action, the task of aligning AGI with human interests is correspondingly harder.

Military AI systems are about as close as we currently get to AGI. They control mobile bits of machinery (e.g. drones), which operate in messy real-world environments (e.g. battlefields), and which execute life-or-death decisions (i.e. killing people). Such systems need to work within legal and ethical frameworks, distinguishing friend from foe and combatants from innocents. Further, the more advanced this technology gets, the more capable it will be of making these calls independently of human input.

Of course, we could soon see civilian AI systems making equally complex and morally loaded decisions — driverless cars, for instance. However, the military systems are already out there, engaged in a struggle for supremacy.

So it is in this context — at the literal bleeding edge of the technology — that the question of AI alignment is being grappled with. The first draft of humanity’s answer will be a military one. Whether we like it or not, a military-AI complex is already shaping the future. The best that we can hope for is that it is our complex.


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.
