It’s been a big year for banning things. Over the last 12 months Rishi Sunak has banned laughing gas, disposable vapes, American Bully XLs, cigarettes (from 2030) and single-use cutlery, and is now looking at potentially banning social media for under-16s. A consultation is being launched in the new year, with an outright ban one of the options under consideration, alongside greater parental controls and strengthened research into the risks children are exposed to. The announcement comes just days after Meta confirmed that all Facebook and Messenger messages will be automatically encrypted, thereby making it harder to detect child sex abuse and exploitation.
The reactions to the possibility of a ban tend to fall into three categories. The first are those who propose greater education: the “this should be taught in school instead” contingent. Gary Neville, for example, tweeted that we should have a “curriculum that is best in class in social media”, one that teaches kids to use it “safely and properly”. The reality is that this is already taught, and any further changes to the curriculum would mean sacrificing something else (and would, of course, require more teachers, who are in dangerously short supply).
The second category is the “Blame Big Tech” brigade. Dame Rachel de Souza, the Children’s Commissioner for England, has said that children should not be “punished” and that companies must “step up” and implement age limits. I’m not sure I agree it’s a punishment — the few children I teach who don’t have smartphones often say it is liberating, if anything — and social media companies have so far shown no desire to practise what they preach. You technically need to be 13 to use TikTok, Instagram or Snapchat, yet the majority of eight- to 11-year-olds have a profile on at least one social network, as do a third of five- to seven-year-olds.
This leads on to the final category: the “Parent Power” collective. Plenty of people believe that parents should have greater control over their children’s social media use, and look to other countries for possible alternatives: in France, new laws require children aged 15 and under to obtain parental consent before opening a social media account, while in Utah parents now have full legal access to their children’s accounts and there is a social media curfew for minors.
I would suggest a fourth alternative: attack the algorithm. So many of the problems social media causes for young people — the addictiveness, the exposure to dangerous content, the negative impacts on self-esteem and body image — stem from recommended content and personalised algorithms that feed children and teenagers things they did not search for. The EU is already making headway here: its new Digital Services Act means that TikTok users aged between 13 and 17 will not be shown personalised content or adverts based on their online activities, and videos will be displayed chronologically rather than ordered by the algorithm.
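To make concrete what a rule like that amounts to in engineering terms, here is a minimal sketch in Python (using entirely hypothetical names such as Post, build_feed and predicted_engagement, and a simple under-18 cut-off, not any platform’s actual code) of a feed that falls back to reverse-chronological ordering for minors instead of personalised ranking:

```python
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Post:
    author: str
    created_at: datetime
    predicted_engagement: float  # stand-in for a personalised ranking score


def build_feed(posts: list[Post], user_age: int) -> list[Post]:
    """Hypothetical feed builder: under-18s get a plain reverse-chronological
    feed; everyone else gets posts ranked by the personalised score."""
    if user_age < 18:
        # No personalised ranking for minors: simply newest first.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)
    # Otherwise, rank by whatever the recommendation model predicts.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```

The point of the sketch is only that the personalisation is a single, identifiable step in the feed pipeline, which is what makes it a plausible target for regulation.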
Rather than an outright ban, which is probably fundamentally unworkable, social media companies could offer different versions of their apps to children and teenagers, much as YouTube has done with YouTube Kids.
Join the discussion
Seems to me, suggestion 4 – Ban the algorithm – should be implemented regardless of any other options. It may be single-handedly responsible for many of the ills of social media, not least “echo chamber” culture.
Apart from anything else, it’s completely unnecessary. Whatever happened to one of the finest aspects of existence: serendipity? Losing “the road less travelled” is inimical to the fundamental human desire for exploration, novelty and, ultimately, productive complexity.
Indeed.
On this topic I would suggest reading the latest post from “void if removed”.
I had that thought myself. Don’t just ban the algorithm for kids; ban the algorithm, full stop. That seems like it would solve a lot of problems and eliminate a lot of the worst abuses. Like belling the cat, however, I fear it’s far easier said than done. The social media companies would perceive banning their targeted content algorithm as an existential threat to their entire business model, and they’d probably be right. They’d fight it, and they’d be backed up by Google, Amazon, and every other online retailer and major website worried, probably correctly, that targeted, personalized advertising using very similar algorithms would be next on the chopping block. There’s no way this could happen in the US, where these companies are based, without a major groundswell of support for it. Still, maybe the UK or other places in Europe where big tech isn’t as politically powerful could start a global trend that backed these companies into a corner. Personally, I liked the Internet the way it was back in the early aughts, before social media ruined everything, and I’d love to see it return to that.
Absolutely right. I have (and I’m sure this is shared) a general sense that, in our haste, we’ve allowed the internet to take off ‘on the wrong foot’. As you say, bringing about change, to refocus it towards serving the human rather than the commercial, is another matter.
But first, it’s necessary to identify and agree upon the problem. It also requires us to examine what we mean by “the human”.
Good luck trying to ban an algorithm. You’ll ban one particular implementation; then they’ll just write a new variant. It will be like trying to cut the heads off a hydra. Simply not practical. Unless you’re planning to go full Soviet Union and ban private use of printing presses (in this case any privately owned computing equipment – which would include all mobile phones, since they can be used to write code).
Is there any way we can blame climate change? Or possibly the Mole People?
I’m an American watching UK politics from afar, but my sense is all these bannings are stunts by a government that can’t or won’t tackle the major issues facing the country.
Precisely (though I would delete the words “or won’t”).
As with the atom bomb, we may have been better off if the Internet hadn’t been invented, useful though it is.
But it was invented, and it isn’t going away. And as others have said, banning sites is not really feasible. This really does come down to good parenting, and also to schools, which must ban smartphones on the premises altogether – something that can be done, and is already being done, by some schools.
To be honest, banning the algorithm would be better for everyone, not just kids. At the very least, make it an option to turn it off. If I want to find something on social media, I will search for it.
Is it not the case that users themselves collude in the damage done by social media? I’m thinking of online bullying, the endless status displays which create a kind of mutual status anxiety, the filters which make everyone look better than they are and leave everyone anxious about their appearance, the influencers feeding off doubt, anxiety and status-seeking to ramp up their likes, and so on.
The guy I follow on Generative AI is Matt Barrie, CEO of freelancer.com, who is thoughtful on the near-term implications of AI rather than any longer-term threats. He’s starting to observe the internet going dark. Because of the scraping going on to feed the machine (irrespective of undertakings to customers, it is, apparently, “irresistible”), ITOs are focusing on moving corporate assets into private clouds, and APIs are withdrawing access or charging phenomenal sums. Since IP is at stake, this battle is going to be fierce. I highly recommend Barrie’s free articles on Medium: https://medium.com/@matt_11659
And, of course, our data is next: the Palantir deal with the NHS, digital ID, etc. Again, our devices (phones and IoT) are listening and tracking. Even when your phone is off, it isn’t. The market is responding… Unplugged is a new phone which, when off, actually is off.
The war on privacy (and by implication property rights) is heating up. But with 42% of 18-35s wishing to live under military rule, perhaps the point is moot. Mind you, it was a survey conducted by the Open Society Foundations (Soros) so you could be forgiven for wondering about the data collection methods.
Whatever the mechanism by which we seek to address this, a consensus is rapidly emerging that we are doing real damage to children and adolescents. That consensus must be publicised much more to build the crescendo needed to force real change.
Pretty much all countries ban selling alcohol to minors under 18. Whilst some consumption at younger ages does of course occur, we’ve long known the impact on the maturing brain and have legislated accordingly. We are in the same territory.
Once upon a time, the US banned booze. It worked so well that Prohibition had to be repealed. What remained, however, was organized crime. We currently ban most drugs – the “war on…” – and yet illicit drug use goes on. Prostitution, same story. It’s almost as if bans are not terribly effective beyond making their target more expensive and, at times, more dangerous to acquire.
I’m on board with THE IDEA that social media is a cesspool for kids. It’s less clear how one would enforce this idea. I suppose schools could collect smartphones during class periods, parents could give their kids the old flip phones that only allow for calling and texting (if those still exist), or they could untether their kids – and perhaps themselves – for a bit.
I’m not sure any of these are viable. Smartphones are digital crack. People seem incapable of leaving theirs alone for longer than 30 seconds, no matter where they are or what they’re doing. I almost wish my life were so fascinating that my uninterrupted presence in the digital world was necessary, but alas, there are things I can do without consulting a device.
It’s almost as if bans are not terribly effective beyond making their target more expensive and, at times, more dangerous to acquire.
Bans work if the object or service being banned has a relatively narrow production point. For example, you could probably ban Marmite (and probably should), but good luck banning strawberry jam or, for that matter, toast. Prohibition failed in the United States because you can make alcohol out of anything containing sugar. Similarly, the oldest profession has maintained its status as the oldest profession because it’s a line of work literally half the population can enter at will, with no gatekeeper.
Excellent counterpoint. The algorithm itself isn’t much of a barrier. Anyone can download internet bots, viruses, malware, etc. that do various illegal things ranging from mildly irritating to quite serious. Almost none of it is programmed by the actual criminals: the programmers who know how to write code like that make far more money selling it to criminals than using it themselves, and that also makes it much harder to actually catch them.
Using a targeted content algorithm, though, requires a platform: a website with a large number of users to generate content for and cater to, and that’s not easy to get. Most of the social media companies, like Facebook and Twitter, took many, many years even to turn a profit, because they needed a huge user base before they could make significant money from advertising, premium services and what not. Banning the algorithm could be accomplished because, in order to make effective use of it, you have to be large and visible, and crime requires a degree of subterfuge. Smuggling drugs is easy. Smuggling elephants is not.
Why the downvotes???
Another thought is that schools should have a tech guidance counselor who advises parents on how to implement boundaries and rules regarding tech. Many parents just don’t know how.
On the “algorithm” I strongly suggest reading this:
https://open.substack.com/pub/voidifremoved/p/understanding-social-media?r=n3kwr&utm_campaign=post&utm_medium=email
Suggestion 5. Open the Algorithm for scrutiny. Sunlight is a good disinfectant.
This is not about porn or what have you; the technology to block things like that already exists. How do you think your municipal library does it? This is about stopping teenagers from becoming politically active, which now mostly starts on social media.
Not sure if that’s paranoia or a good point. Perhaps both.
Just because we’re paranoid, it doesn’t mean they’re not out to get us.