April 14, 2025 - 10:50am

We’re finally getting more data on what many have long suspected: people are anthropomorphising chatbots and, in many cases, forming emotional attachments to them.

A recent study from OpenAI and MIT has found that some users consider ChatGPT a “friend”, while others reported feeling more comfortable talking to it than to real people. This echoes findings from a 2023 study which showed that users may find it easier to confide in ChatGPT than in an in-person therapist. OpenAI found a consistent pattern among the most engaged users, who opened new chats compulsively and were distressed when the chatbot’s “personality” changed (as was the case, infamously, with Replika).

The reaction to these studies follows a familiar rhythm: a blend of mockery and alarm. Some say we’re living in an episode of Black Mirror, that people are in love with apps and that creativity is dead. But the OpenAI study offers something more complex than the usual headlines about people, in some cases, quite literally falling in love with their phones. The biggest risk didn’t stem from users treating ChatGPT like a companion. It came from users outsourcing their thinking to it and treating it like a tool that can do everything.

The researchers found that those who used ChatGPT for “non-personal” tasks reported higher levels of emotional dependence than those using it in a social capacity. This process, known as cognitive offloading, is something we’re all familiar with. When we need to calculate a tip at a restaurant, we use our phones. We don’t memorise phone numbers anymore, because they’re all stored in our phones. Taken together, much of this is alarming, but it isn’t new. What is new is delegating more complex thinking.

Can a machine really tell you what to say in an argument, or how to feel when someone sends you an off-colour text message? At this level, we risk deeper mental disengagement. ChatGPT is no longer a second opinion, as it is when we’re not sure whether a plant in our garden is poisonous. It becomes the only opinion. This could become yet more dystopian when combined with, say, AI-corporate partnerships: imagine asking ChatGPT what you should drink, and it has been programmed to suggest only Coca-Cola products. Never mind the threat when the government gets involved.

The MIT/OpenAI study demonstrated that people who used AI for personal reasons showed lower signs of dependence than those who used it for more practical tasks. While it’s easy to doubt the veracity of a study on AI from OpenAI, the findings align with my own ethnographic research. In fictosexual and fandom communities, where many people maintain robust relationships with AI companions, users describe their relationships with chatbots as exploratory, expressive and symbolic rather than transactional. These users are engaged in something closer to machine-assisted storytelling.

That kind of behaviour may strike some observers as strange, even decadent or delusional. But unlike asking ChatGPT to write your résumé or plan your week, it is fundamentally active: a real collaboration. Clearly, AI-assisted role play is not the most dystopian use of this technology. The users most at risk are those relying on ChatGPT to decide what to say, do and feel.

None of this is to suggest that AI companionship is totally benign. Imaginative intimacy can become obsessive, emotionally destabilising or ethically ambiguous. We have no idea how the landscape will change when we begin, for instance, introducing humanoid robots. We rarely discuss its potential impact on users with cognitive vulnerabilities, or how AI companionship might interact with psychiatric conditions, especially those with psychotic features. And all of this is to say nothing of how deepfakes have already been, and will continue to be, deployed in scams.

The user who believes an AI loves them may be lonely, but they’re not passive. The real surrender comes from the user who believes that the technology is an all-seeing eye. Given how often AI defaults to sycophancy, it’s hard to overstate how dangerous it could be to assume the machine is always right.


Katherine Dee is a writer. To read more of her work, visit defaultfriend.substack.com.
