Beware the military-AI complex

Drones are already shaping the future of war. Credit: Getty

May 4, 2023 - 1:15pm

Geoffrey Hinton has been described as the “Godfather of AI”. And so, when he warned the world about the dangers of artificial intelligence this week, the world listened. 

His interview with Cade Metz for the New York Times is worth reading in full, but the key passage reveals Hinton’s surprise at the speed of recent developments:

The idea that this stuff could actually get smarter than people — a few people believed that… But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.
- Geoffrey Hinton, NYT

Hinton also reiterated his opposition to the use of AI on the battlefield. However, that horse has already bolted. For instance, a few days before the cognitive psychologist voiced his concerns, Palantir Technologies announced its Artificial Intelligence Platform for Defence — which applies the latest AI capabilities to drone warfare. 

These include the rapidly advancing Large Language Models that have prompted leading tech experts to plead for a six-month pause on AI experimentation. The argument is that we need a breathing space to tame the technology before its evolution overwhelms us. In this regard, the top priority is the alignment problem — i.e. ensuring that AI systems do what we want them to do, ideally within a sound ethical framework.

I’ve already written about the obstacles to an AI moratorium — not least, getting the Chinese to agree to one. But there’s something else we need to be honest about, which is that it won’t be a committee of the great and the good that does the definitive work on alignment. Rather, the pioneers will be the world’s militaries and their hi-tech suppliers like Palantir. 

Don’t forget that the real challenge of alignment lies not with the AI systems we’ve already got, but with the prospect of AGI, or artificial general intelligence. Vanilla AI is limited in its abilities but, in theory, AGI could be applied — indeed, could apply itself — to any cognitive feat that a human being is capable of. Given this much wider scope of action, the task of aligning AGI with human interests is correspondingly harder.

Military AI systems are about as close as we currently get to AGI. They control mobile bits of machinery (i.e. drones), which operate in messy real-world environments (i.e. battlefields), and which execute life-or-death decisions (i.e. killing people). Such systems need to work within legal and ethical frameworks, distinguishing friend from foe and combatants from innocents. Further, the more advanced this technology gets, the more capable it will be of making these calls independently of human input.
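To make that constraint concrete, here is a minimal, purely illustrative sketch in Python of the kind of human-in-the-loop gate such systems are meant to enforce. Every name, threshold and rule below is a hypothetical assumption rather than a description of any real platform: the point is simply that the software may classify a contact, but lethal action still has to pass legal checks and a human sign-off.

    from dataclasses import dataclass
    from enum import Enum

    class Classification(Enum):
        FRIEND = "friend"
        FOE = "foe"
        CIVILIAN = "civilian"
        UNKNOWN = "unknown"

    @dataclass
    class Contact:
        track_id: str
        classification: Classification
        confidence: float          # the model's confidence in its own call
        near_protected_site: bool  # e.g. close to a hospital or school

    def may_engage(contact: Contact, human_approved: bool) -> bool:
        """Toy rules-of-engagement gate: every branch defaults to 'no'."""
        if contact.classification is not Classification.FOE:
            return False               # never engage friends, civilians or unknowns
        if contact.confidence < 0.95:  # hypothetical threshold
            return False               # low confidence means escalate, not fire
        if contact.near_protected_site:
            return False               # the legal constraint overrides everything
        return human_approved          # a human stays in the loop

    # Even a high-confidence "foe" is not engaged without human sign-off.
    target = Contact("T-042", Classification.FOE, 0.97, near_protected_site=False)
    assert may_engage(target, human_approved=False) is False
    assert may_engage(target, human_approved=True) is True

The hard part, of course, is everything this sketch assumes away: the classifier that produces the label and the confidence score. And the more autonomous the system becomes, the more of these branches it is trusted to evaluate on its own.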

Of course, we could soon see civilian AI systems making equally complex and morally loaded decisions — driverless cars, for instance. However, the military systems are already out there, engaged in a struggle for supremacy.

So it is in this context — at the literal bleeding edge of the technology — that the question of AI alignment is being grappled with. The first draft of humanity’s answer will be a military one. Whether we like it or not, a military-AI complex is already shaping the future. The best that we can hope for is that it is our complex.


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.


10 Comments
Simon Blanchard
11 months ago

“… ensuring that AI systems do what we want them to do, ideally within a sound ethical framework.” Yeah, just like gain of function research into viruses. I’m afraid the toothpaste is out of the tube.

Ethniciodo Rodenydo
11 months ago

Trust the science

Warren Trees
11 months ago

Right?! And just like the ethical framework we all operate within today WITHOUT the potential hazards benefits of AI.

Paolo Canonica
11 months ago

I do wish that we had a way of hardwiring Asimov’s three laws of robotics into the developing AIs.

R Wright
11 months ago
Reply to Paolo Canonica

Asimov’s stories tend to be about how the three laws are insufficient…

N Satori
11 months ago
Reply to R Wright

In his later works, Foundation’s Edge for example, the robots are very humanlike – to the point of seeming to have emotions.

Nicky Samengo-Turner
11 months ago

“Boots on the ground” as us Guardsmen say, will never be totally replaced by technology.

N Satori
11 months ago

So the real question is not “Will AI systems be more intelligent than humans?” but “Will our AI systems be more intelligent than those of our potential adversaries?”.

Susan Grabston
11 months ago

War Studies relative just back from Ukraine doing field research on drones in warfare. Very scary stuff already going on.

Shale Lewis
11 months ago

Umm, which sort of ethics are meant to guide AI alignment? The ones we deploy in animal agriculture, or the ones we appreciate in a university humanities course? Or the ones which emerged from the Enlightenment, minus the occasional slavery bit?
