January 6, 2025 - 10:00am

AI researchers have reportedly invented a machine that will do civil debate for us, so we don’t have to. Researchers from Google’s DeepMind and the University of Oxford have developed an AI system that can digest clashing opinions and come up with a compromise that helps everyone meet in the middle.

The system is nicknamed a “Habermas machine”, after the philosopher Jürgen Habermas, who argued that, provided you get the conditions for debate right, people will agree more than they disagree. The researchers suggest that the Habermas machine is an improvement on the current norm of conducting political debate via social media, where algorithmic incentives tend to drive polarisation and increase divisions and tribalism. By contrast, they say, this compromise robot offers a terrain for finding common ground that, according to test users, feels more neutral. The developers report that they’re already in talks with several partners to deploy the Habermas machine in the real world.

This raises several concerns. Firstly, the Habermas machine may work well enough on its own terms; but a likely consequence would be displacing political disagreement into the programming of the machine itself. In other words: who is in charge of setting the robot’s parameters, and what are their political priors? It’s already clear from battles within Silicon Valley that “AI alignment” is often in practice a euphemism for ideological groups seeking to pre-programme a supposedly neutral AI with their preferred political biases. Such efforts occasionally produce surreal results, as for example last year when Google’s Gemini picture bot simply refused to generate images of white people, even when asked to depict historical figures such as America’s Founding Fathers.

It seems likely that even (or perhaps especially) a purportedly neutral engine, offered to the public as a means of generating political compromise, would be carefully pre-weighted according to someone’s political priors. The question then becomes: whose? (The already-noted ideological skew in Google’s Gemini bot may provide a clue as to the likely answer.) The absurd logical endpoint, then, should we prove unable to agree on who gets to catechise the Habermas engine, would be an infinite regress of Habermas engines, each trying to resolve the disagreement over the one before.

A second concern follows from this. One of the core beliefs underlying modern liberal democracy is that disagreements aren’t ultimately resolved by neutralisation, but by a fundamentally human, interpersonal, social process. That is, we find workable agreements on even difficult issues, via public debate, even if the debate itself is sometimes fractious. Accepting that we are no longer capable of engaging in such debate without a robot to scaffold our deliberations, then, is nothing less than a tacit admission that one of the core enabling conditions for liberal democracy no longer holds.

Is this reversible? Who knows; but we can be sure that widespread adoption of the Habermas engine would make it worse, not better. Politics is a fundamentally human, interpersonal activity, that presupposes enabling social norms and habits of mind. And just as unused muscles atrophy, so too do unused cognitive and social skills.

There are already, for example, couples in Silicon Valley who mediate all relationship disagreements via the AI engine Claude. What happens when we scale that up? Just as people no longer bother to learn spelling or grammar because Microsoft does all that for them, we will no longer bother to learn how to understand and assimilate others’ views because we have the political equivalent of a spellchecker to do the hard bit. In other words, should the Habermas engine be widely adopted, its likely effect would be to accelerate the decline it was invented to mitigate.

And this in turn would hasten the already evident withdrawal of actual politics from broad-based democratic participation, into the hands of a caste of specially-trained “experts”. Provided their capacity to do so isn’t hopelessly atrophied, reasonable people may disagree on whether this is desirable, or indeed whether it’s already irreversibly happened. But even if it’s presented as a machine for saving liberal democracy, we should be under no illusions about the role that would be played in the ongoing collapse of such a democracy by outsourcing our capacity for constructive disagreement to a machine.


Mary Harrington is a contributing editor at UnHerd.
