
Forget San Francisco — Britain has a shoplifting epidemic too

September 7 2023 - 7:00am

San Francisco’s shoplifting epidemic is shocking to behold. But we shouldn’t imagine that the same couldn’t happen here. In fact, we’re well on our way. According to the British Retail Consortium, theft from stores across 10 UK cities is up by 26%. Moreover, “incidents of violence and abuse against retail employees have almost doubled on pre-pandemic levels.”

On Tuesday, Asda Chairman Stuart Rose told LBC that “theft is a big issue. It has become decriminalised. It has become minimised. It’s actually just not seen as a crime anymore.”

In the absence of an adequate response from the authorities, retailers are beginning to take defensive measures. For instance, home furnishings company Dunelm is now locking up duvets and pillowcases in cabinets; Waitrose is offering free coffees to police officers to increase their visibility; and Tesco plans to equip staff with body cameras.

The “progressive” response to this phenomenon isn’t quite as deranged as it is in the US. Nevertheless, British liberals have responded as expected. A piece in the Observer is typical. You’ll never guess, but apparently it’s all the Tories’ fault: “Starving your population and then ‘cracking down’ on it for nicking baby formula or a can of soup can start to make a government look rather unreasonable.”

But as the writer ought to know, the issue here isn’t the desperate young mum hiding a few groceries in the pram. Nor is it the schoolboy pilfering the occasional bag of sweets. Rather, the real problem is blatant, organised and sometimes violent theft of higher-value items. Criminals who previously assumed they couldn’t get away with it increasingly find that they can — thus presenting a material threat to retail as we know it.

But instead of addressing the issue head-on, the writer blames the victim: “Once goods were kept behind counters, but since the birth of large supermarkets they have been laid out near the door, ready for the taking.” How terribly irresponsible of them! On the other hand, perhaps the open display of goods isn’t just a convenience for customers, but instead the hallmark of a high trust society. 

In fact, modern shops are a minor miracle of civilisation: public spaces, stacked high with products from all over the world, that passing strangers may freely inspect and handle, but which aren’t looted by anyone who feels like it.

Surely, that’s something worth defending. But if you’d prefer to abandon retailers to their fate, then don’t moan when they do what it takes to survive. Some will close, of course, and others will move their operations online. Those who stay open will guard themselves and their stock behind plexiglass and electronic tags. And then there’s the hi-tech solution: the fully automated and completely cashless store, in which customers have to be authenticated to even get in. 

Remember that retail facilities like this already exist. One day, when they become the norm, we’ll remember what shops used to be like. Then, we’ll ask why no one stood up for them.


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.



The FT’s AI optimism rests on shaky science

AI chatbots aren't nudging people towards the centre. Credit Getty


April 5 2026 - 8:00am

Last weekend, the Financial Times published an article about the broader social impacts of AI. Studying, managing, and mitigating the negative impacts of the technological transition is supremely important, and doing so requires both the electorate and policymakers to be informed by good, impartial science. Unfortunately, for anyone with even basic training in social science research methods, the analysis by the FT’s John Burn-Murdoch was, at best, questionable.

His piece makes two claims. The first is that social media is inherently polarizing. According to the FT, these platforms “over-represent the radical right and left”, drive people to “conspiratorial beliefs” and political extremes, and are chiefly responsible for “waves of populism, polarization and an erosion of trust in experts, expertise and the establishment”.

The second, related claim is that AI chatbots — such as ChatGPT, Claude, and Grok — are the inverse, and will ultimately “nudge people away from the most extreme positions and towards more moderate and expert-aligned stances”. Burn-Murdoch labels ChatGPT, Gemini, and DeepSeek as “center-left”, and Grok as “center-right”. That he characterizes as “center-right” a model which famously referred to itself as “MechaHitler” and routinely peddles conspiracy theories about race and intelligence is probably worth interrogating elsewhere.

To justify the first claim, Burn-Murdoch references his own prior FT column. To justify the second, he conducted an experiment in which he created a series of “simulated users” (read: LLM instances), told half of them what they’re supposed to believe based on survey data and half not, and then had the simulated users discuss a range of political subjects with each of the mainstream LLMs. He then took the initial stated opinion of the simulated user, the response of the mainstream LLMs, and averaged them, applying an 80% weight to the initial position of the simulated user, and 20% to the LLM’s response.

He justifies this approach by referencing a study jointly performed by the UK AI Security Institute, Oxford, LSE, Stanford, and MIT, published in the journal Science in December, on the subject of machine persuasion. Unfortunately, the study he cites does not even remotely use or justify any such methodology; instead, it tests the impact of human conversations with LLMs that are specifically tasked with persuading users. Nor do its results support an assumption of a universal 20% shift in user opinion in the direction of the model’s perceived base preferences. Rather, they evaluate multiple “persuasion strategies”, and then estimate how far a model built for political persuasion could shift end-user opinion.

This assumes, of course, that a user is open to being persuaded on contentious political subjects by a model, which can’t be generalized across the population to estimate aggregate societal impact. Moreover, where the cited article seeks to evaluate the impact of various persuasion strategies employed by LLMs on human users in an extended conversation, Burn-Murdoch seemingly just averages an LLM statement and simulated user statement together with an arbitrary weight, handwaving at “experimental evidence” as justification.

Additionally, the base claim that one can use an LLM trying to convince another LLM of an opinion as an accurate model of how an LLM could persuade a person remains entirely unjustified. Whether or not an LLM output indicates a genuine “change in belief” of that model or model persona is still an open question, particularly given models have a tendency to emphasize agreeableness and prioritize consensus. And while there is substantive interest in using simulated user behavior as a supplement or replacement for statistical surveys of real users, such methods are similarly unproven and methodologically tenuous.

But let us put even these concerns aside and return to the subject of convergent and divergent technologies of opinion. In the piece, Burn-Murdoch claims that LLMs will by their nature converge opinion, completely ignoring the well-documented phenomenon of AI sycophancy, whereby models “excessively agree with, flatter, or validate” user beliefs. The fact that models broadly tell people what they want to hear, in ways that go far beyond social media “echo chambers”, in no way supports the claim that models will miraculously converge public opinion.

If we genuinely want to craft good AI policy, and have an informed debate on the role and impact of this technology on society, we cannot allow such shoddy work to pollute the commons, especially under the imprimatur of one of the most reputable and storied journalistic sources. Work like this risks marring that reputation when it comes to one of the most transformative issues of our era.


James Rosen-Birch is an organisational scientist, start-up founder, technologist and writer based in Toronto.

