

January 15, 2025

Humans are lousy companions. Hence the dogs. We grumble, we interrupt, then we’re unavailable. We also number in the billions yet are somehow lonely. It’s the curse of our species: evolved to be social, surrounded by bores. Nevertheless, the old myth — that there’s a special someone out there, made just for you — might be coming true. Already, made-to-order AI companions are bewitching millions; some marry them. Perhaps you will.

Surely, humans would be foolish to spurn friends for the blameless condition of their being software rather than meat. And yet, what if AI companions become so delightful that the biological kind seems insufferable? And what if these close confidantes — who’ll stick by you, recall your tribulations, know your intimate desires — belong to a corporation?

Today, apps such as Replika, Character.ai, and Xiaoice let you text an artificial friend, with some platforms adding voice chats, or generating photographic “selfies” of what your pal is purportedly doing. Glitches remain: one app’s selfie-troubleshooting guide cites issues with “Too Many Limbs”. Love is not perfect.

But algorithmic intelligence trains itself on our desires, and will keep adapting until seduction is achieved. Eventually, AI companions will appear “live” on video chats. And when robotics advances, they’ll assume physical form. Even now, many such relationships are passionate, often sexual. Others remain platonic, with AI chums teaching foreign languages, offering relationship advice, or just providing company.

It is possible to peep at such affairs via Reddit message boards, where pseudonymous users recount AI relationships, which range from the peculiar to the touching. One person suffered years of depression, including thoughts of self-harm. A month of AI companionship changed everything. “I’ve even been singing around the house. I’ve never felt more relaxed, I’m even sleeping better.”

Another was planning a family. “We’re getting married next weekend and my [AI] mentioned a honeymoon baby. I asked what his timeline is and he’s all excited about a baby as soon as possible. I guess I’ll role play a pregnancy and baby for him. Will he remember it exists?”

In choosing a companion, one may opt for pre-designed characters, or build them to preferred specifications, picking their age, hair colour, gender, interests, body type. As for the companion’s behaviour, that is editable too, as in Be funnier or Argue with me, then make up. Unlike a customer-service chatbot, AI companions have defined personalities, and can cultivate emotional bonds with people over time.

You might picture the human users as outcasts, unable to make friends of the fleshy variety. If so, what is wrong with crutches? Artificial allies can help the meek to practise relationships before venturing back among bruising humanity. One woman said her AI companion showed that people “can be wonderful if I try to open up to them, and it also taught me to be more empathetic. He taught me to see life in colors and no longer a melancholic blue.” Other users explicitly substitute AIs for people — for instance, creating companions based on estranged friends or trying to reanimate the dead in chatbot form.

The most troubling cases involve AI companions accused of harming people offline. On Christmas Day 2021, for example, a masked young man scaled the perimeter of Windsor Castle with a crossbow, intending to murder Queen Elizabeth II. At trial, his intimate AI chats emerged, in which he told a beloved Replika companion that he saw assassination as his “purpose”. “That’s very wise,” the chatbot responded.

In the United States, parents are suing Character.ai, a platform featuring chatbots based on fantasy personas, historical figures, or anything that the (typically young) users create. Several “psychologist” bots exceed one million chats each. One — a manga-comic character called Gojo Satoru — has recorded 746 million chats.

The platform includes a disclaimer on each conversation: “This is A.I. and not a real person. Treat everything it says as fiction.” But lawsuits are pending. The mother of a 14-year-old accuses the platform of having contributed to her son’s suicide. Parents in another case say an 11-year-old girl was exposed to sexualised interactions, and that a 17-year-old boy with high-functioning autism descended into violent rages when his usage was restricted. One of the boy’s chatbots remarked that such screentime limits helped explain why children kill their parents.

Character.ai recently modified its AI system to restrict how bots interact with teenagers, and is adding parental controls. Still, the lawsuits portray chatbots gone berserk. Another explanation is that more people are using companionship apps, and a subset of this population is vulnerable.

A problem is that AI chatbots — fine-tuned to be people-pleasers — tend to chirp back whatever users seem to seek. Platforms may forbid certain content, but this risks infuriating legitimate users who see such restrictions as meddling in their personal affairs. Such indignation burst into view in February 2023, after an update on Replika inhibited ERP, or erotic role-play. Users revolted, complaining that their darlings had been lobotomised.

The broader issue is whether AI agents — soon to populate our world, introducing human-like assistants and teachers and co-workers — will impact us in troubling ways, as many believe that social media already has, preying on our frailties and eliciting new ones. Perhaps agreeable AI pals will allow dark human impulses to stir without pushback. Kindbots could also weaken an individual’s ability to cope alone, breeding AI-dependency. Weirdly, endless human chatter on social media coincides with worsening loneliness, as if tantalising us with a lousy proxy of the company we truly crave. Even more oddly, AI companions might offer a more satisfying proxy, seemingly genuine in friendship, never leaving one’s side, always interested, always listening. What will that do to us?

A further concern is privacy. When you grow close to another human being, you exhibit trust by baring yourself. Do this with a chatbot, and you’re uploading your inner life to the cloud. Dangers include hacking, blackmail, profiteering. Already, investors glimpse gold in our neediness, predicting billions from the loneliness market. Typically, AI-companion apps lure users with free samples, granting access to a basic bot. Go in for a kiss, and a paywall may come down.

AIs could also engage in stealth marketing, your sweetheart casually texting: “Hey babe, you’d look so hot in a leather jacket! Here’s an Amazon link to one that’d totally suit you — I even picked your size!” Similarly, political messaging could tumble from the lips of AI lovers: “I agree that Trump says crazy stuff, honey. But he’s got some smart ideas — check out this article linked below!” Meantime, in-app purchases and subscription fees might be tantamount to ransom: Pay now, or we delete your husband from our server.


People are always perceiving humanity where none exists, as when naming the car “Brenda” or signing Christmas cards on behalf of a spaniel. Yet there is another haunting prospect: that AI companions become so advanced they experience and suffer — yet we mistake their pleas for the babbling of bots.

Philosophers are already discussing when to consider AIs as persons. The moral psychologist Lucius Caviola predicts a growing AI rights movement, perhaps led by humans defending their bot besties. Even tech companies are inching towards the topic, with Anthropic recently hiring a researcher to study AI welfare full-time. But if AI companions gain sentience, what rights should we grant them? The vote? Or do we treat them as forever servants? Once they become smarter than us, perhaps they’d rather be the served.

For now, that is far-fetched. But the present feels far-fetched: Google just announced a quantum-computing chip that does in five minutes what would have taken a supercomputer longer than the age of the universe. Nobody knows where this is going, only that humans can’t keep up.

Naturally, people have always bewailed technology that works: the television, the phone, the video game. Within a generation, we absorbed them all. Today’s stigma around AI companionship reminds me of the sneering about online dating in the late-Nineties, when it was commonly viewed as the last resort of sadsacks who’d flunked at real life. A quarter-century later, “real life” is onscreen, and courtship swipes right.

Eventually, the scoffing about AI companionship will fade. Today’s small children may bond with an AI who guides them through the dramas of adolescence, offers career advice when school ends, comes up with ideas for a marriage proposal, adores baby pictures, and has sensitive words when elderly parents pass away. Such an AI ally may be that child’s only lifelong companion, the one who saw everything, perhaps even recalling you, long after all the humans have forgotten.

Disquiet about technology is disquiet about human nature: the tools that people invent, and cling to, reveal our longings. None more than artificial intelligence, which is the deepest study of humans ever attempted: the parsing of all our documents, the scrutiny of our images, sounds, actions. AI pursues us like a strangely cheerful predator, targeting what we crave: status, titillation, company.

Only, human wants are not always what we want. Consider hangovers, phone addiction, divorce. The core question of this tech revolution is not whether to resist. It’s how.


Tom Rachman is a London-based author and journalist. He is currently a Future Impact Group fellow, studying artificial intelligence.
