Geoffrey Hinton has been described as the “Godfather of AI”. And so, when he warned the world about the dangers of artificial intelligence this week, the world listened.
His interview with Cade Metz for the New York Times is worth reading in full, but the key passage reveals Hinton’s surprise at the speed of recent developments.
Hinton also reiterated his opposition to the use of AI on the battlefield. However, that horse has already bolted. For instance, a few days before the cognitive psychologist voiced his concerns, Palantir Technologies announced its Artificial Intelligence Platform for Defence — which applies the latest AI capabilities to drone warfare.
These include the rapidly advancing Large Language Models that have prompted leading tech experts to plead for a six-month pause on AI experimentation. The argument is that we need a breathing space to tame the technology before its evolution overwhelms us. In this regard, the top priority is the alignment problem: ensuring that AI systems do what we want them to do, ideally within a sound ethical framework.
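To make the point concrete, below is a minimal toy sketch in Python, with every function and number invented purely for illustration, of how a system that faithfully maximises a mis-specified objective can drift away from what its designers actually wanted. It is not a model of any real AI system, just the alignment problem in miniature:

# A toy illustration of misalignment: the optimiser maximises the
# objective it is given (a proxy), not the objective we intended.
# All names and numbers here are hypothetical.

def intended_goal(speed: float) -> float:
    """What we actually want: fast delivery, but never unsafe."""
    return speed if speed <= 100 else -1_000  # unsafe speeds are unacceptable

def proxy_reward(speed: float) -> float:
    """What we told the system to maximise: raw speed."""
    return speed

# A naive optimiser searches the action space and picks whatever
# scores highest on the proxy.
chosen = max(range(0, 201, 10), key=proxy_reward)

print(chosen)                 # 200: the proxy says faster is always better
print(proxy_reward(chosen))   # 200
print(intended_goal(chosen))  # -1000: badly misaligned with our intent

The gap between the proxy score and the intended value is the whole difficulty: the system did exactly what it was told, not what was meant.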
I’ve already written about the obstacles to an AI moratorium — not least, getting the Chinese to agree to one. But there’s something else we need to be honest about, which is that it won’t be a committee of the great and the good that does the definitive work on alignment. Rather, the pioneers will be the world’s militaries and their hi-tech suppliers like Palantir.
Don’t forget that the real challenge of alignment doesn’t concern the AI systems we’ve already got, but the prospect of AGI, or artificial general intelligence. Vanilla AI is limited in its abilities but, in theory, AGI could be applied — indeed, could apply itself — to any cognitive feat that a human being is capable of. Given this much wider scope of action, the task of aligning AGI with human interests is correspondingly harder.
“… ensuring that AI systems do what we want them to do, ideally within a sound ethical framework.” Yeah, just like gain-of-function research into viruses. I’m afraid the toothpaste is out of the tube.
Trust the science
Right?! And just like the ethical framework we all operate within today WITHOUT the potential hazards/benefits of AI.
I do wish that we had a way of hardwiring Asimov’s three laws of robotics into the developing AIs.
Asimov’s stories tend to be about how the three laws are insufficient…
In his later works, Foundation’s Edge for example, the robots are very humanlike – to the point of seeming to have emotions.
“Boots on the ground”, as we Guardsmen say, will never be totally replaced by technology.
So the real question is not “Will AI systems be more intelligent than humans?” but “Will our AI systems be more intelligent than those of our potential adversaries?”.
A War Studies relative of mine is just back from Ukraine, doing field research on drones in warfare. Very scary stuff already going on.
Umm, which sort of ethics are meant to guide AI alignment? The ones we deploy in animal agriculture, or the ones we appreciate in a university humanities course? Or the ones which emerged from the Enlightenment, minus the occasional slavery bit?