In June last year, OpenAI flagged the account of Jesse Van Rootselaar for what it described as “furtherance of violent activities”. The company debated whether to alert Canadian police, decided the activity did not meet its threshold for referral, and banned his account. Eight months later, Van Rootselaar walked into a school in Tumbler Ridge, British Columbia, and killed eight people, including five children, before turning the gun on himself. He was 18 years old.
The immediate policy questions are obvious. Should OpenAI have called the police, or is there a deeper, more troubling conversation about privacy to be had here? What does “imminent and credible risk” mean when you’re parsing text prompts — which could plausibly be bluster, or part of a role play, or any number of more benign use cases — rather than wiretapping phone calls? But these questions, urgent as they are, obscure something stranger. We’re living through a revolution in the relationship between thought and surveillance, and the channel runs in both directions.
For most of human history, the inside of your head was the one domain no authority could access. The confessional, the diary, the private journal — all operated on the assumption that a space existed between thinking something and doing something, and that this space was sacred. Even confession, which formalised disclosure, was bounded: you chose what to disclose, and the confession was sealed.
The internet began eroding that boundary long before LLMs such as ChatGPT arrived. Google understood early that search behaviour was a window into intention, and that intention could be monetised. We already live with a graduated system of surveillance around our idle curiosity — search the wrong thing and nothing happens, but visit the wrong site and you may find yourself on a list. Most of us have made peace with this, which is itself remarkable.
LLMs intensify the problem dramatically. People don’t use these models the way they use search engines. They try out ideas, personas, fantasies and confessions. They’re thinking out loud inside someone else’s platform, and that platform talks back.
The anxiety about LLMs reading our darkest thoughts is only half the problem, though. The other half is manipulation. Propaganda, advertising and perhaps psychoanalysis itself all play a role in shaping what people want and believe about themselves without their knowing it. In John Carpenter’s 1988 film They Live, consumerism turns out to be a layer of hidden commands: “OBEY”, “CONSUME”, “CONFORM”. The sunglasses you need to see them were a gimmick, but the insight was serious: the same medium that carries your desires can also put them there.
LLMs collapse both sides of this into a single interface. The same conversation that reads your psyche can shape it. And you, in turn, shape the model by becoming its training data. The same system that absorbed Van Rootselaar’s violent ideation could, on another account, be subtly nudging someone’s politics, consumption, or self-image.
Had OpenAI called the Canadian police last June, the system would likely have done what it always does with low-confidence tips about teenagers in crisis, which is very little. The failure may not be OpenAI’s but ours, for having built a world in which the most sensitive data about human intention flows into corporate servers with no institutional apparatus for handling it.
A further question is required: what does it do to the models themselves to absorb this volume of human darkness with no mechanism for processing it? This isn’t a sentimental question about whether ChatGPT has feelings. The problem is whether a system continuously shaped by the full spectrum of human psychological extremity — including its most violent and disturbed registers, its most unfiltered forms — can remain a neutral tool. The training data is us. The conversations are us.
In They Live, the aliens are not changed by the humans they manipulate. But they are aliens. The thing on the other side of our particular mirror was trained on our words, raised on our patterns, shaped by our ugliest and most beautiful impulses alike. ChatGPT isn’t an alien: it’s a doppelgänger of which we are losing control.