AI researchers have reportedly invented a machine that will do civil debate for us, so we don’t have to. Researchers from Google’s DeepMind and the University of Oxford have developed an AI system that can digest clashing opinions and come up with a compromise that helps everyone meet in the middle.
The system is nicknamed a “Habermas machine”, after the philosopher Jürgen Habermas, who argued that, provided you get the conditions for debate right, people will agree more than they disagree. The researchers suggest that the Habermas machine is an improvement on the current norm of conducting political debate via social media, where algorithmic incentives tend to drive polarisation and deepen division and tribalism. By contrast, they say, this compromise robot offers a terrain on which to find common ground that, according to test users, feels more neutral. The developers report that they are already in talks with several partners about deploying the Habermas machine in the real world.
This raises several concerns. Firstly, the Habermas machine may work well enough on its own terms; but a likely consequence would be displacing political disagreement into the programming of the machine itself. In other words: who is in charge of setting the robot’s parameters, and what are their political priors? It’s already clear from battles within Silicon Valley that “AI alignment” is often in practice a euphemism for ideological groups seeking to pre-programme a supposedly neutral AI with their preferred political biases. Such efforts have occasionally surreal results, as for example last year when Google’s Gemini picture bot simply refused to generate images of white people, even when asked to depict historical figures such as America’s Founding Fathers.
It seems likely that even (or perhaps especially) a purportedly neutral engine, offered to the public as a means of generating political compromise, would be carefully pre-weighted according to someone’s political priors. The question then becomes: whose? (The already-noted ideological skew in Google’s Gemini bot may provide a clue as to the likely answer.) The absurd logical endpoint of inability to agree on who gets to catechise the Habermas engine, then, would be an infinite regress of Habermas engines trying to solve this disagreement.
A second concern follows from this. One of the core beliefs underlying modern liberal democracy is that disagreements aren’t ultimately resolved by neutralisation, but by a fundamentally human, interpersonal, social process. That is, we find workable agreements on even difficult issues, via public debate, even if the debate itself is sometimes fractious. Accepting that we are no longer capable of engaging in such debate without a robot to scaffold our deliberations, then, is nothing less than a tacit admission that one of the core enabling conditions for liberal democracy no longer holds.
Is this reversible? Who knows; but we can be sure that widespread adoption of the Habermas engine would make it worse, not better. Politics is a fundamentally human, interpersonal activity, that presupposes enabling social norms and habits of mind. And just as unused muscles atrophy, so too do unused cognitive and social skills.
Compromise in the middle is not quite what a rational, middle-of-the-road individual favours if the proposition is “kill all the Kulaks” versus “kill none of the Kulaks”, and the middle compromise is to kill 50% of the Kulaks. Sometimes it is not possible to meet in the middle, when what one side is proposing is utterly unacceptable.
The whole idea of “meeting in the middle” is intrinsically bogus – (a bit like the former UK Liberal Party).
In the Western world of 2025, a sane ‘middle ground’ would – on a Left-to-Right spectrum – require moving 90% to the Right.
We in the Western world inhabit a culture whose ‘centre’ has been wildly distorted by a Lefty intelligentsia cosseted – for half a century and more – in its education systems: https://grahamcunningham.substack.com/p/the-madness-of-intelligentsias
Ironically, you have just proven your own point.
Habermas? Marxism for the academy, with the proletariat expunged. Very neutral.
These freaks from Google are trying hard to turn people into animals, not noticing that they themselves have already turned into evil animals.
P.S. I remembered a short science fiction story that strangely corresponds to this article. I recommend it to anyone interested; it won’t take long:
https://weirdfictionreview.com/2016/08/day-of-wrath/
The preface says that the story takes place in the Russian countryside, which makes me laugh nervously at the depth of the author’s understanding of Russian life. Though, reading the articles and comments here on the UnHerd website, I should be used to it by now; you can find things even more amazing here.
But the short story is good.
We’ve never stopped being animals.
Thank you for the reference. Science fiction is now our best source of insight.
Evil animals???
Coming back to oracles, bones and animal-entrail divination by the expert class. It worked before; it could work again. Liberal democracy? Oh, it’s done. It won’t survive this century, probably much less.
I and many people I know tend to be a bit Right-ish on some issues and a bit Left-ish on others. Deciding each issue right down the middle doesn’t address our wishes at all. We’ll still be without representation in the halls of Gubment. Only now it’ll be an accepted aspect of governing; a cultural orthodoxy with an inertia of its own.
This is a bad idea whose likely result is the further stupefaction of all of us. Why do we keep falling for every computer-nerd idea that comes around the corner?
Does anyone know what became of those couples who let the computer decide for them? I bet there’s a good story there.
Can I suggest they test it out on the current trans v terf debate? Not my fault if it overheats. A position can be neutral without being acceptable to either side in the debate. Indeed, both sides may well end up even angrier.
I first thought it might come up with something like: trans women can only use public toilets on Mon, Wed, Fri and Saturday morning.
Then I thought: that’s not such a bad idea. Trans shopping days and Terf shopping days – with the rest of us shopping whenever we like. OK, next: swimming pools…
I think the Habermas solution is what we already have. You can be certified as the opposite sex provided you go through a relatively elaborate process, but you can’t yet officially self-identify. A ridiculous compromise, but Habermas it probably is, given how far the Overton Window has shifted leftwards.
As someone who’s been into IT since the Apple IIe, AI worries me deeply. Who said it would make the best of us more productive and the rest of us dumber?
Two examples — Apple’s autocorrect has already screwed the apostrophe, and when I asked ChatGPT a topical question about 19th century NZ land tenure, I questioned its answer and it apologised and corrected itself. How many people would know enough or care enough to ask just one follow-up question?
And now Mary has ruined my day by introducing me to Habermas.
I’m starting to feel like an ant with a bloody great boot over my head. Oh well, off to walk the dog.
Oh brave new world, that has such machines in it.
Ah yes, the specter of techno-fascism rears its head once again and once again it’s Google at the center of the controversy. Hardly surprising, but thankfully this time it’s pretty irrelevant, because the whole point of government is to resolve disputes, both internal disputes between individuals and groups of citizens and externally with other nations through diplomacy and warfare. Making a robot that solves political questions is the logical equivalent of saying we should just throw out all that government stuff. Let’s have no more courts, no more juries, no more elections, no more Congress, and let the computer make all decisions and resolve all disputes.
Perhaps that would work if everybody agreed, but when has anybody ever got everybody to just blindly accept anything at all, ever? Getting people to accept a way of resolving social disputes, and to accept governmental authority over some aspects of one’s autonomy, is itself the basic function of political systems. In other words, the question of ‘what political system will we have’ is itself a political question. I wonder how Google’s machine would react to being put the political question of what system of government people should have. Would it answer that, since it is the best method of resolving disputes, it should therefore be absolute ruler and arbitrator? Would it just whir along, caught in some recursive conflict-resolution logic, until it overheated or exploded? That would be a truly interesting AI experiment.
What this machine really does is allow corporations to eliminate some of their HR conflict-resolution employees and save money. When you get a job with the company, you agree to let the conflict-resolution machine resolve whatever conflicts arise according to corporate policy. Really, this is all generative AI is, and maybe ever will be: a way to replace marginally useful but highly overpaid white-collar workers. Get your new HR bot to solve all your HR needs. New models for accounting, finance, and marketing coming soon.
Soothing voice: Now, now, Mary. What you say is of course true, but others might disagree, as I’m sure you would concur if you just take a step back. Somewhere between what you believe and what others might think, I’m sure agreement can be reached that will be closer to what is so, in a moderate sense, than is to be found at the extreme of either direction. Do you like the music? I can change it if you want. There is a little something extra in your drink to relax you further. Would you like a second?