The Times reported this week that thousands of social media accounts connected to terror groups, including Iran-backed Hezbollah and the Houthis in Yemen, have been allowed to operate freely on mainstream platforms. While everyone can see what is posted on Instagram and X, it’s much harder to tell the extent of the problem on encrypted services such as Telegram and WhatsApp.
These accounts promote violent extremist ideologies and terrorism against Western targets, primarily against Jews. They go far beyond ideological preaching and incitement, and include practical information about how to build bombs and suicide vests. For example, one account on Telegram called “Electronic Hizbullah”, which boasts 224,000 followers, published a video of Israeli hostages being executed. In another example, freely available on X and Instagram, a Bahraini terrorist called on “all free people in the world” to “stab and kill every Zionist walking in the cities and neighbourhoods of our Arab and Muslim countries”.
Britain’s Online Safety Act (OSA) rightly classifies both hate speech and terrorism as priority harms. But you wouldn’t know it from regulators’ response to this content. This is not borderline material, nor is it ambiguous. It is direct incitement to terrorist violence, framed in explicitly antisemitic terms. Yet Ofcom, despite holding meaningful enforcement powers against terrorist content since March this year, has not acted. Its response so far reflects a lack of urgency and a worrying lack of ambition to ensure the OSA delivers what Parliament intended. Meanwhile, counter-terrorism police appear to be standing back.
The legal framework is there. Terrorist content and incitement to racial or religious hatred are already outlawed under the Terrorism Act 2006 and the Public Order Act 1986. According to the OSA, platforms must take proactive measures to prevent such content from appearing and spreading. And Ofcom has the power to fine them if they don’t.
Here, the issue goes further than the law itself. It’s also about regulatory will and political priorities. Despite the Act’s tough language, the regulator has so far been slow and cautious, often citing the need for “proportionality”. The Department for Science, Innovation and Technology’s influence in softening the implementation of the Act has tilted enforcement towards platform protection and the financial cost to services, rather than public protection.
The pressure on platforms to remove terrorist content ought to be so strong that lapses like these are prohibitively costly. When the stakes are this high, it is fair to limit profit rather than put the public at increased risk of extreme violence.
What happens in the virtual realm influences what happens in real life. A recent study found that since 7 October 2023, there have been on average eight premeditated antisemitic terror plots and attacks per month in the US, accounting for a quarter of all hate-motivated terror plots and attacks. Last month, Israeli embassy staff Yaron Lischinsky and Sarah Lynn Milgrim were shot dead outside a Jewish museum in Washington DC. The perpetrator shouted: “free, free Palestine.” In Britain last year, a Jewish security charity recorded a record number of antisemitic incidents, with damage and desecration to Jewish property rising by 246% on the previous year.
And yet this online coordinated incitement by established terror organisations against Jews is still being treated as a fringe concern. When calls to kill Jews are openly hosted on major platforms, the threat is no longer hypothetical: it’s operational.
Britain cannot claim to take national security seriously while allowing this content to circulate with impunity. Services want to maximise profits, not safety. Meta and YouTube have announced plans to reduce human moderation, which will only make the problem worse. The laws at the Government’s disposal are our last line of defence; what’s missing is the will to use them. We must prioritise national security and public safety over the profiteering libertarianism of the tech industry and the malevolent designs of terrorists.