Betty Draper, right, on a therapist’s couch in the TV show Mad Men.


Moya Sarner
25 Mar 2026 - 12:01am

One of my favourite psychoanalytic ideas comes from Wilfred Bion, who said that “in every consulting-room, there ought to be two rather frightened people: the patient and the psychoanalyst. If they are not both frightened, one wonders why they are bothering to find out what everyone knows.”

How does this apply, you might ask, to a 13-year-old girl who’s being bullied at school, turning to ChatGPT to ask why it’s always her who’s the victim? Or a 17-year-old boy sharing, for the first time, that he feels so empty? Or a middle-aged man, in a messy divorce, who’s terrified he’ll lose contact with his children? The person typing their deepest feelings into an empty search box may well feel frightened. But the large language model (LLM) mimicking the tone of a concerned therapist cannot feel fear. It cannot feel anything at all — and that is why it is not a therapist. It is also why, as a psychodynamic psychotherapist, I know that AI could never do my job.

But I am worried about the people who are turning to bots for some simulacrum of therapy, taken in by the seductive gratification and convenience that they offer. Indeed, this is how these LLMs are being sold to us. Last year, Mark Zuckerberg announced in all seriousness that free therapy sessions with Meta’s AI could help “solve” the loneliness epidemic. You can see why it might appeal; it’s available wherever, whenever, and offers immediate responses that are designed to flatter rather than challenge, to tell you exactly what you want to hear and then discreetly disappear when your needs have been satisfied, with no wants of its own.

With the NHS drowning in long waiting lists for mental health services, and deeper cuts to talking therapies coming that will make it even harder for vulnerable people to get the treatment they need, it is hardly surprising that millions are now turning to AI for therapy. Indeed, along with helping students to cheat on their coursework, providing emotional support and validation might just be AI’s most successful consumer application.

One recent study showed that one in four 13- to 17-year-olds in England and Wales are asking AI chatbots for mental health advice. Another study found the bots endorsed behaviours including bringing a knife to school, pursuing a relationship between a minor and an adult, and suicide. It’s not just young people who are at risk: one recent study showed more than a third of adults have used an AI chatbot for mental health support.

I had never used ChatGPT before writing this article, and I was disturbed by how easy it is to use. I told it a made-up but common-enough scenario: that I was feeling low because my partner doesn’t reply to my texts while he’s at work, and that this leaves me feeling unloved and abandoned. Its responses were like a shallow mimicry of friendship: “If you want to share, I’m here to listen”. There was a tone of hollow empathy and a pathetic pretence at naming my feelings, which, if I had no experience of real therapy, I might have mistaken for helpful. The bot’s responses were obsequious and saccharine: “I’m sorry you’re feeling that way… Wanting to be understood by your partner is a very normal need [heart emoji]”.

It also offered advice, even though the bot is “not intended to give advice”. It was the kind of inane blather that you might read in a pre-teen magazine. It’s not that the advice was bad, necessarily — it’s that it was superficial, generalised and obvious. As Bion would have put it, “one wonders why they are bothering to find out what everyone knows.”

The deeper in I went, the more I felt lost in some bizarre, Black Mirror-esque hologram relationship. “You don’t have to handle these feelings alone. I’m here to talk it through with you.” But I am literally alone! There is no other “I” there with me, just words on a screen.

As with all tech products, I could also sense that it was designed to keep my attention trained upon it. I could recognise the tug of the potentially never-ending emotional stroking with which ChatGPT was attempting to seduce me. Again — perhaps because I know what real therapy is — it left me cold.

But what is even more disturbing about this product is the way it is being sold as a solution to the teen mental health crisis that it is also blamed for. It’s like taking a deep swig from the bottle labelled “antidote”, only to realise, too late, that it is the same as the poison you’d just been drinking.

I chose this particular scenario — a partner not responding to text messages — because it’s something that patients often bring up in sessions. When they do, I see it as an opportunity to explore their feelings of abandonment. It’s one of the benefits of what therapists call the psychodynamic frame — and it’s part of the reason we offer 50-minute sessions at the same time each week, in the same place, with minimal interactions in between, rather than being available 24/7 at the swipe of a screen. It means we can offer a contained setting in which to explore and digest, in a real and meaningful way, whatever frustration, disappointment, loneliness and neglect a patient might feel about their therapist not being available in the way they want them to be.

These feelings have deep roots; we all experienced this as infants with our parents in some way. In therapy, the patient can develop the capacity to understand this internal experience, which is vital for when they feel something similar with all the other people who aren’t available 24/7, at the swipe of a screen.

When I put this to ChatGPT, it acknowledged that my partner might not have replied because he was, in fact, busy working. But instead of helping me tolerate this dynamic, it delivered exactly the kind of instantaneous responses I had demanded from my partner. A text box flashed up partway through the exchange, asking if I wanted to receive warmer responses going forward. I was astonished.

This dynamic of “I can be whoever you want me to be” and “you need to be whoever I want you to be” sits at the core of every narcissistic relationship. ChatGPT was perpetuating it, leading me deeper into my own narcissism. Artificial therapy feeds narcissism, whereas real therapy — when it’s good — creates an environment in which a patient can recognise narcissism, understand where it comes from inside them, and then grow through it.

What I’ve learnt as a patient in psychoanalysis, and working as a psychodynamic psychotherapist, is that the single most valuable thing that I can offer my patients is my own capacity to feel — and to recognise what I cannot bear to feel. Nothing is more crucial to the formation of a good therapist than the slow, painful process of our own personal therapy, and growing the ability to recognise and tolerate all the feelings you would rather run away from. This is how we can help our patients turn towards the emotions they cannot bear, and help them understand their roots and put them into words.

Perhaps I sound like some sort of caricature of a Gen Z snowflake, banging on about feelings. Actually I’m a middle-aged Millennial. I grew up with one foot in analogue childhood, the other in digital late adolescence, and the more patients I see, the more I recognise just how extremely confused about feelings our society has become. Feelings are important — but not in the way most people these days seem to think.

The “stiff upper lip” culture of the boomer generation and the silent generation before it caused terrible harm. Brushing off feelings as if they are irrelevant can leave you weighed down by depression. And now we see how today’s Gen Z culture, with its safe spaces and trigger warnings and language police — not to mention AI therapy — has swung so far the other way that it has landed at the other end of the horseshoe. Both generations are so frightened of emotions that they crave protection from them at all costs.

Both cultures result from an inability to tolerate unwanted feelings of any kind. Both cause the same despair, isolation and brittle sense of self. I see it in my consulting room in patients of every age, whatever their background. The facility to tolerate, feel and understand our own unwanted emotions lies at the foundation of good mental health. It defines what it means to be psychologically resilient, to make good decisions, and to build lasting relationships, a fulfilling career and a happy home that can also survive unhappiness.

When you can’t bear your emotions, you get into all sorts of trouble. You might unconsciously try to starve or fight or drink or gamble them into numbness, or spend your days and nights trying to scroll or screw them away. You might break up your relationships because you simply cannot tolerate the feelings that come with accepting that others aren’t available to you all the time. You might start a war because you cannot bear your own internal torment and emotional chaos, and you need to get rid of it by pushing it into faraway victims. You might lose yourself in online misogyny because you cannot bear your own unconscious, your desperate desire for the maternal capacities for love, care and attention. We all just want to feel better — but what we really need is to get better at feeling.

This is why I don’t think AI could do my job. No matter how many iterations it goes through, it will never be able to feel human emotion. But this is also why I think AI therapy is so dangerous; it offers a masquerade of care — something that looks like it cares, but really doesn’t care at all. That’s why it’s called artificial. This dynamic is what sits at the root of the most dangerous relationships. And if it’s all you know, you may well end up repeating it, unconsciously, for the rest of your life.


Moya Sarner is a psychodynamic psychotherapist in the NHS, journalist and author of When I Grow Up: Conversations with Adults in Search of Adulthood (Scribe).