Sam Altman is the next profiteer promising liberation.


March 27, 2025

I went cheerfully through college listening to music on old file-sharing programmes while everybody around me had switched to iTunes. While the rest of the student body scrolled the web on WiFi, I dorkily plugged in an ethernet cable anytime I wanted to go online. But at some point towards my mid-20s, after being mocked for showing up to a hipstery job with a Lenovo laptop, I caved and went for Apple.

I can still, as in a recurring nightmare, remember the tone of voice of the techie who rang up my first iPhone. “Welcome to Apple”, he said, as if he were immersing me in a bath of smugness. If we had been acting a scene, the subtext would have been completely clear: I was in now and would never get out again. AI firms are now inviting me — inviting us — in just the same way.

But I am not falling for it again. We have to decline. We have to collectively boycott artificial-intelligence products while there’s still time. Because AI is no good for us — no good for our minds, creativity, or competence — and as it gets jammed down our throats, we are the only ones with the power to refuse.

Looking back, I understand why I caved to Apple all those years ago. It really would have been social suicide to try to make my way in the 2010s professional class with anything other than a MacBook Pro and an iPhone. Apple really was better than anything else. But I regret that moment, all the same. Every intuition I had knew that opting in to companies like Apple, Google, and Facebook was a fool’s bargain.

The upside was a few years feeling like I was part of the future as I sipped my lattés and floated through the dawning post-industrial era with my sleek silver Apple gadgets. But I’m still paying the price for that: every time I log in to my bank account now, it’s like peeling barnacles off the hull of a ship to get rid of all the new charges that Apple and Google have concocted. Enshittification — the apt coinage by Cory Doctorow that turns out to be, like, the word of the century — has revealed itself to be virtually a law of nature. Exactly as Doctorow analysed it, platforms inevitably “abuse … customers to claw back all value for themselves”.

As AI emerges as the must-have technology of the shiny new future, and everybody starts naming their ChatGPT bots and having philosophical conversations with them, I have decided — formally, as of writing this essay — to opt out. The train is leaving the station without me. I still have not downloaded an AI app of any kind to my phone. And if this was, up until now, owing more to torpor than high principle, then thank God for laziness.

AI can only lead to dependence on a technology, which really means dependence on Silicon Valley overlords looking to rip off their customer base. It’s exactly the same dynamic that we’ve all been experiencing and bemoaning for the last 20-odd years. And you can probably plug-and-chug all your own favourite statistics to tease out what that has meant: the average American spending close to five hours a day on his smartphone, while friendship rates have plummeted and teen depression, anxiety, self-harm, and suicidality have spiked.

If you prefer, you can focus on attention span and the finding that the mere presence of a smartphone in a test setting impairs results. As the Silicon Valley apostate and head of the Center for Humane Technology, Tristan Harris, put it: “Tech’s race to the bottom of the brain stem to extract information is an existential threat — using our [attention] to upgrade machines is ‘downgrading’ humans”.

Yet it seems that AI is here to stay. Big Tech is all-in on it, and governments aren’t close to getting their act together for meaningful regulation, with President Trump, for instance, reversing Joe Biden’s modest executive order erecting guardrails. But there is still personal choice in the matter, and now is the moment for those who care about human creativity and self-reliance to draw a line in the sand. No, I’m not talking about abandoning tech altogether. But we can, for example, click down below Google’s AI offerings to look at actual links. And we can generally go about living our lives in our sad old way without the benefit of AI “personal assistants” or bot “best friends” or whatever it is that the new technology is supposed to deliver to us.

To be honest, this stance of mine does make me feel like an Amish buggy driver whipping my carriage horses a little harder while the bicycles go whizzing past me; like the weirdos you occasionally see on city streets laboriously tapping out a message on a flip phone, as if they were exhibits in a living museum. But let’s try to get outside the hype for a moment and think through what AI actually is.


AI is basically text-predict combined with data mining. That’s it. It’s a super-Google that goes into the body of texts and rearranges the words into a very pleasing facsimile of a cogent argument. There’s no “intelligence” behind it, in the sense of a computer actually thinking. I’m not the first to be reminded of the story of Clever Hans, the horse that could count and toured widely at the turn of the 20th century.

What Hans was doing really was very cool: he would be given a mathematical question, and he would tap out the answer with his hoofs. And Hans was right more often than not. But this was not, as it turned out, because Hans had mastered a numerical system, but because he was sensitive to the crowd and could feel the excitement cresting as he approached the correct number.

AI is more or less the same thing — it uses our wonder to convince us of a simulacrum of intelligence. What we are really witnessing is, in a sense, our own childish excitement at a trick of anthropomorphisation: we are pleased by the sight of a calculator imitating something of our mental processes. Our tendency, unfortunately, is to give AI a kind of epistemological deference, to believe that this mash-up of data represents some kind of authority.

Meanwhile, in the role of Hans’s trainer are the programmers who are influencing AI to give us whatever it is they want us to hear. The disastrous rollout of Google Gemini in 2024 really ripped away the curtain and showed us the strange little man playing with the knobs. Google’s corporate culture at the time happened to be very woke and, lo, the artificial “intelligence” started generating images of black Nazis.

But even if Gemini tipped its hand more obviously than most, the same dynamics are at play in every other AI model. The people pushing AI now are the same sorts who spent the 2010s promoting web 2.0 as a new vision of freedom and global connectivity, all while destroying traditional media and ripping off as much private data as they possibly could and cheerfully selling it to advertisers. By 2020, the freedom talk was gone and “fighting disinformation” (read: censoring wrong-thought) was in.

This recent history raises an important question: why in the world should we trust these people ever again?

To be sure, we are already fairly late in the game. Even if we don’t use products like Grok and ChatGPT, chances are, we use more rudimentary forms of the same technologies. We use a version of AI to get around and to translate our ideas into other languages. These are remarkable achievements. But they do infantilise us. I’m pretty sure that, if my phone were taken away from me, I couldn’t find my own way from my home to my place of work. And they hinder our self-motivation: I’m also pretty sure that I’ve been much slower to learn foreign languages because I know, at some level, that Google Translate makes it unnecessary.

Do we really want to accelerate the decline in general competence even further? And in exchange for what? Many people remain mystified as to what actual service AI is supposed to provide. The current selling point is that AI gives you a “personal assistant”, but to be honest, I wasn’t aware until Google CEO Sundar Pichai announced “a personal Google, just for you” that I needed a personal assistant. I feel perfectly capable of buying my own airline tickets and booking my own restaurant tables.

What I was aware of needing was meaning and focus in my life: I wanted, for example, to translate the ideas for novels inside me into actual novels. I wanted to be the best, most self-reliant version of myself that I could manage. As for a machine writing a novel for me in a matter of milliseconds — I have no idea how that could possibly generate authentic pride or produce anything other than a cavernous inner emptiness.

My hope for the past three years had been that AI would just sort of go away. In the Nineties, we expected to be overrun by clone armies (certainly, my sixth-grade science teacher gave many speeches to that effect). It didn’t come to pass. Could something similar transpire with AI?

That seems increasingly unlikely. For one thing, AI has already changed the face of war, and the AI arms race between the United States and China guarantees that AI will be a major presence in our lives. There was a brief moment in 2023-2024 when it was possible to imagine our AI anxieties going the way of Nineties clone fears. That was when Italy banned ChatGPT and the Biden administration issued its (since-overturned) regulations. But now we are ploughing full steam ahead.

All that matters now is individual choices. Unwary, I fell for the techno-optimism of the past two decades and ended up with a diminished attention span and a bunch of mysterious subscription charges to show for it. Well. Fool me once, shame on me. Fool me twice, shame on Sam Altman. I know, much better now, the folly of turning over my own mental powers to a bunch of techies promising a brilliant future.

I’m not making that mistake again. Nor should you.


Sam Kahn writes the Substack Castalia.