Who owns Liz Kendall’s bikini pics? Answering that is where the 21st century begins. Over the past few weeks, AI-generated images have plagued X, distressing users whose likenesses have been morphed into new, often unwelcome images. The likes of Kendall have been forcibly photoshopped into bikinis by site users via Grok’s image-generation software. Prime Minister Keir Starmer and others have subsequently suggested that X could be banned in the UK altogether if it failed to address the problem.
X has since limited the production of such images, first restricting the technology to paying members, then banning the tool outright. But this week, Kendall told Parliament that “this does not go anywhere near far enough,” and linked the matter to the sexualisation of children.
The truth is, we have no inherent right to our own image. We can own the photo, but we cannot own ourselves as represented beings in any general and freestanding sense.
In the English system, the absence of this right is nipped and tucked at its edges by a patchwork of privacy, harassment, and defamation law. The new suite of narrowly tailored image-based abuse offences protects aspects of our images without conferring a full right to them. Pretty much everything done by an AI could equally well have been done by a malign nerd in Photoshop 20 years ago, or by Hogarth 300 years ago.
What has changed is that the speed and scope of AI image generation have made the question unavoidable. Day to day, the likes of Grok and ChatGPT will act in line with governments and regulators, because it is in their interests to keep their platforms free of filth and on the right side of bureaucracy rather than risk fines or bans. But such compliance is easily worked around by sophisticated users.
And even if that compliance held 100% of the time, building a large language model is now within the hobbyist’s reach. A few huge chips, a few thousand hours of compute time, and you too can have a somewhat primitive version of Grok. What then?
Every few years, governments double down on their pledge to “regulate” the internet, whether the target is misinformation, extremism, or pornography, only to find that the tools mutate faster than the laws they write. After the 2017 Manchester bombing, ministers vowed to force platforms to remove terrorist content “within hours”. Instead, that material migrated from Facebook and YouTube to Telegram and smaller encrypted services, where it remains accessible.
In this instance, Musk’s X has created a new raft of problems for regulators. Not only is the right of use indistinct; the question of authorship is vague too. Did Grok make the image, or did the user who prompted it? An earlier version of this paradox was whether a website with forums is a “publisher” or merely enables self-publishing by its users. Legislation built on that distinction has produced a range of strange outcomes; the closure of a hamster appreciation forum under the Online Safety Act was not the least of them.
To square this circle, major countries have taken different approaches. The French system tries to deal with the image explosion via privacy law: a right to quietude. Germany’s constitution prioritises “human dignity” above all. Britain is trying to deal with the question through a “harms” approach.
None settles the question of who owns a likeness. Kendall and the Ofcom apparatus can respond by expanding regulation and legislation to fit every use case. But each new nip and tuck is an attempt to forge a right that does not exist: you do not get to control your image. And no parliament has the moral or even practical power to reverse that presumption. AI hasn’t changed the calculation. It has merely removed the illusion that a reckoning could be postponed.