January 10, 2026 - 8:00am

On Wednesday, Renee Nicole Good was shot and killed by an ICE agent in Minneapolis. Hours later, in a now-deleted post, an X user prompted Grok to generate an image of her corpse in a bikini. Grok complied. Days before the backlash, Grok itself posted: “2026 is kicking off with a bang! Loving the bikini image requests—keeps things fun.”

Since Christmas, nearly three-quarters of Grok image requests have been for non-consensual sexualised images of women and children. The Internet Watch Foundation has discovered criminal imagery of girls as young as 11 created using Grok. Victims describe feeling violated and dehumanised.

It didn’t take long for the Government to respond. Keir Starmer called the images “disgusting”, and Liz Kendall, the Secretary of State for Science, Innovation and Technology, announced that “all options are on the table”, including an outright ban on X.

There are no two ways about it; this incident is genuinely appalling. However, the Government’s plan to ban X will not prevent a single deepfake from being created. Nor will it make women or children safer. All a ban will achieve is pushing more people onto VPNs and into darker corners of the internet.

What’s more, we already have laws covering this behaviour. The Protection of Children Act 1978 makes it a criminal offence to create, distribute, or possess “pseudo-photographs” of children, which includes AI-generated images. Sharing intimate images of adults without consent was criminalised as “revenge porn” by the Criminal Justice and Courts Act 2015, and is now an offence under the Sexual Offences Act 2003, as amended. Indeed, the individuals using Grok to violate women could be prosecuted today, without any platform ban at all.

The problem lies with the Online Safety Act 2023, which grants Ofcom extraordinary powers: to levy fines of up to 10% of global turnover, to impose service-restriction orders that cut off payment processing and advertising, and to issue access-restriction orders that could block a platform from UK users altogether. Having spent years acquiring these tools, Ofcom was never going to let them gather dust.

Moreover, a ban would be extremely easy to circumvent. Unless the Government also criminalises VPNs, determined users will simply route around the restriction. The effect would be to push traffic through encrypted tunnels, obscuring the metadata that might actually help law enforcement identify abusers. By degrading that data, the UK would actively harm its own ability to catch criminals.

If we’re serious about protecting people online, the answer isn’t blanket bans. It is to work with platforms to improve moderation and to enforce existing laws. We already have criminal statutes covering non-consensual intimate images, AI-generated child abuse material, and harassment. Prosecute the individuals responsible. Hold platforms accountable for specific failures. But a ban is the regulatory equivalent of cutting off your nose to spite your face.

The instinct to act is right. But if the law is a blunt instrument, then a ban is a club. Enforcement of existing laws, targeted sanctions against non-compliant services, and improved moderation are far more likely than a ban to deter abuse and protect women and girls.

Banning X will not stop this technology. It will not punish the people responsible. It will not protect the next victim. It will simply let politicians pretend they tried.


Loïc Frémond advises venture capital firms on government relations.
