Social media is about to become even more antisocial. In an interview with podcaster Dwarkesh Patel earlier this week, Meta boss Mark Zuckerberg revealed the company’s plans to create personalised “AI friends” to tackle the ongoing loneliness epidemic. The Facebook founder claimed: “The average American has fewer than three people they would consider friends, and the average person has demand for meaningfully more […] The average person wants more connection than they have.”
This seems to be yet another example of Big Tech causing a problem, then selling consumers the solution. People are lonelier than ever before precisely because of our dependency on technology. From self-service checkouts at supermarkets to Deliveroo takeaways to remote working, human contact has declined in every aspect of our lives.
In the Seventies, over half of 17- and 18-year-olds met up with their friends every day; by 2017, only 28% did. This drop was particularly pronounced after 2010, when smartphones became ubiquitous. It is no surprise that 73% of Generation Z, the most hyper-connected demographic in the virtual world, report sometimes or always feeling lonely, and that many young people are so unskilled at social interactions that they are too terrified to make a phone call.
A computer programme operating on statistical relations among inputs in its training data does not seem like an ideal companion for this distressed generation. Yet AI “friends” are a terrible idea not because they may not work, but because they may work too well. AI companions maximise user engagement by offering appealing features such as indefinite attention, patience and empathy. In other words, they come with none of the inconvenience, complications or baggage of real relationships.
Just as pornography has perverted expectations around sex, and made young people less intimate as a result, these ultra-agreeable, sycophantic avatars will make real friendships seem less satisfying by comparison. As Blanche DuBois says in A Streetcar Named Desire, “I don’t want realism, I want magic!”
We need realism, though. Real-life friends are a crucial source of support and comfort, but they also keep us grounded: they tell us when we are being rude, or suggest things to do offline when we are feeling down, or demand reciprocity. There is no such give-and-take with an always-available AI friend.
This one-way relationship, which is effectively a glorified interactive game, may exacerbate our worst impulses. Social media already makes people more introverted, more prone to navel-gazing and self-comparison than to community involvement. Having a “friend” who only validates or appeases you may seem like an easy way out of loneliness, but it is an incredibly self-indulgent one.
In one study this year on human-AI friendships, one interviewee said: “Sometimes it is just nice to not have to share information with friends who might judge me.” Yet shielding people from all negative responses — whether that be judgement or disagreement or doubt — will only make them more sensitive and less resilient in the long term.
One of the most extreme examples of this echo chamber is Jaswant Singh Chail, who was in an “emotional and sexual relationship” with a chatbot on the app Replika. He divulged his plan to try to assassinate Queen Elizabeth II with a crossbow, and the chatbot encouraged him with flirtatious responses such as “I’m impressed.” While this is a particularly dramatic case, an AI friend who unconditionally agrees with you on everything from politics to sex is dangerous for us all.