Are you an AI tool? (hang Xiangyi/China News Service/Getty)


Kathleen Stock
3 Apr 2026 - 12:02am

Human tribalism can make you long for the impersonality of robot justice. Last week gave us two examples of writers apparently reliant on AI to do the thinking for them, with each treated differently depending on social and political affiliations.

For Matt Goodwin — the failed Reform candidate whose new book is widely judged to contain several machine-generated hallucinations and other inaccuracies, though he denies it — there was widespread mockery and condemnation. But for critic Alex Preston, dumped by the New York Times for using an AI tool to help write a book review, objections were more muted. There were even attempts at sympathy and understanding — including from gleefully vituperative Goodwin-bashers. One might reasonably wonder how much the British commentariat really loathe AI writing, as opposed to how much they loathe Goodwin in particular.

In fact, both cases are equally unforgivable. Each goes much further than simply using AI as a research tool among others, cross-referring and checking sources as you go. To use an LLM to write text for you effectively turns your computer into a ventriloquist. And it turns you into the ventriloquist’s dummy.

This is not a good look. Up until five minutes ago, Goodwin was an academic. In the eyes of some, this makes him part of a despised elite, a fact which must be somewhat inconvenient to a would-be populist. But equally, it’s also a crucial part of his origin story, giving a valuable impression of epistemic authority to those who care. Otherwise he’d be just another bloke in a gilet muttering darkly about cousin marriage. At a minimum, academics are supposed to do their own research, double-check their own inferences, and then write it all down in their own words. What else could society possibly want them for? Certainly not for the innate charisma or the bantz.

In his blustering Spectator rebuttal, Goodwin insists that he did not use chatbots to write any part of the book. The invented quotes and other misunderstandings were all his, he says, a claim which is apparently supposed to reassure us. He even cites the negative opinion of AI detection software on the matter “just for fun”. But either way, he seems to have been using AI tools for his Substack for some time. Or perhaps he sometimes writes exactly like a robot because he has unconsciously ingested the deadening format (“this is not x, it’s y”, compressed rhetorical questions, and all the rest of it). Crafting beautiful sentences is not your average academic’s strongest suit, after all.

Literary critics have no such excuse. Preston — book reviewer as well as gentleman novelist, amateur cricketer, birdwatcher, and BBC arts show stalwart — is said to have confessed immediately to his mistake when asked about it by the Spectator literary editor. The apology came as if delivered by Hugh Grant — you could practically hear the stammerings and goshes. “Oh god it’s awful and I’m so ashamed. Such a total car crash.”

Preston’s explanation was that he had been pressed for time, and asked a bot to help “expand and smooth” a draft book review, as well as adjust for in-house style requirements. The AI-enhanced result included phrases written by someone else, which he did not spot. “I looked at how it had tidied up the end of the review but didn’t realise that it had also dropped in language from Christobel Kent’s Guardian review. I was rushed and stupid and I’m so sorry.”


Despite the apparent willingness of fellow literary luminaries to accept this with “there but for the grace of God … ” style mumbling, I’m not buying it. A loose reading of Preston’s explanation implies he didn’t notice the alien phrases at all. He used the tool without understanding its capacity for invention, looked only at the second half of the AI-adjusted text but not the first, then immediately sent the result off to the New York Times. But surely no writer is ever that stupid, or indeed that rushed.

A more likely scenario is that he read the tarted-up review in its entirety, knew full well there were sentences in it that were not his, but didn’t know they belonged to Christobel Kent in particular. Preston says that this was a “case of someone naively and clumsily using a tool they didn’t understand”; but he obviously grasped that the tool would “expand” his piece, because that’s exactly what he said he wanted it to do. What he didn’t understand, perhaps, was that the machine might plagiarise from a single source rather than synthesise hundreds of texts undetectably.

But appropriating the words of no one in particular is just as bad as plagiarism; and it is no defence to say it was only a few phrases. Like the proverbial curate’s egg, a covert robot co-production cannot be good in parts. You might as well take out all the machine-inserted words and phrases and write lorem ipsum there, or just blah blah blah instead. These passages mark the point at which the author’s mind retreated and went off for a nap.

When I taught in universities, I used to find it absurd that the official policy for plagiarism cases required us to estimate what mark the offending piece would have got, were it not plagiarised; and then to subtract some fixed percentage from that. Effectively, this seemed to force the marker onto the horns of a ludicrous dilemma. On one hand you might remove all plagiarised sections from consideration, but then you would be left with surreally disjointed statements and huge gaps, and not much of an identifiable argument would remain to penalise further.

Alternatively you could treat the essay in its entirety as if it represented the original thoughts of the student, then subtract a bit; in which case your mark would seem to depend heavily on the quality of the author from whom sentences had been pilfered, and how extensively they were used. It seemed weirdly arbitrary to be giving more credit to students who thoroughly stripped down Ludwig Wittgenstein for parts, than to those who lightly borrowed from, say, Alain de Botton.

When you are assessing a student’s work, it is a basic presupposition that you are engaging with a single point of view, for which the stated author can rightly claim the credit. Any contradictions in the text reflect contradictions in the author’s own mind, as do any brilliant insights. Violate that contract and you might as well chuck the whole thing away. And the same goes all the more for professional writing, whether high or lowbrow. Relying on words that aren’t yours is not just lazy, it’s cowardly. If a writer lacks the courage to expose his mind to readers in all its naked imperfection, hiding behind the smooth extrusions of LLMs instead, then I don’t see why we should bother to read him.

And it is a poor defence to say that readers often cannot tell the difference. In the world of painting, skilful forgeries can look like astonishingly powerful originals until the ruse is revealed. Then suddenly there’s a gestalt switch, and the pictures look like what they always were: cheap fakes, free-riding on viewer credulity.

A different justification urges us to think of all human communication as a kind of ventriloquising. Aren’t we always taking half-remembered words and phrases from other people and making them our own as we speak or write? Can’t we just see the use of an LLM as a simple extension of the creative process, with some externally derived thoughts accepted, and others rejected — just as during a more internal process of reflection? This is part of the general project of redescribing human behaviour to make it sound more like that of machines. That project is just as likely to undermine the impression of any meaning or value in human behaviour as it is to reduce the distrust in robots. But in any case, we must not cede the point. Ordinary thinking and speaking as we know it is not “parroting” or “plagiarising”. If it were, there would be no sense to these terms when applied critically, since the pejorative contrast with ordinary speech would be incomprehensible.

Even if it’s largely undetected by the intended readership, I can’t help feeling that AI will turn out to be a curse to many in the writing world, both established and aspiring. Clearly, as Goodwin and Preston show us, it is a temptation even to the already successful. A semantic glow-up is available at just the click of a mouse. Lots of bloggers I am aware of seem to use LLMs habitually. Preposterously, I’ve even seen the telltale signs in comments sections.

Wherever it is found, I imagine it’s quite stressful to gain a reputation for a writing style you have done little to deserve. The more admiration and followers you get for your punchy sentences, the less you will feel able to go back to your own deathless prose and reveal to the world what is really going on behind the curtain. In years to come, I expect we will see whole branches of therapy devoted to curing writers of LLM addiction. It seems that along with photographic filters and image generators, technology has discovered yet another way for people to hide their true selves from the gaze of others — and all this at a time when personal authenticity is supposedly uniquely prized. As the saying goes: you couldn’t make it up.


Kathleen Stock is contributing editor at UnHerd.