Sam Altman is the next profiteer promising liberation. Credit: Getty

I went cheerfully through college listening to music on old file-sharing programs while everybody around me had switched to iTunes. While the rest of the student body scrolled the web on WiFi, I dorkily plugged in an ethernet cable anytime I wanted to go online. But at some point towards my mid-20s, after being mocked for showing up to a hipstery job with a Lenovo laptop, I caved and went for Apple.
I can still, like in a recurring nightmare, remember the tone of voice of the techie who finished my purchase of my first iPhone. “Welcome to Apple”, he said, as if he were immersing me into a bath of smugness. If we were acting a scene, the subtext would have been completely clear: I was in now and would never get out again. AI firms are now inviting me — inviting us — in just the same way.
But I am not falling for it again. We have to decline. We have to collectively boycott artificial-intelligence products while there’s still time. Because AI is no good for us — no good for our minds, creativity, or competence — and as it gets jammed down our throats, we are the only ones with the power to refuse.
Looking back, I understand why I caved to Apple all those years ago. It really would have been social suicide to try to make my way in the 2010s professional class with anything other than a MacBook Pro and an iPhone. Apple really was better than anything else. But I regret that moment, all the same. Every intuition I had knew that opting in to companies like Apple, Google, and Facebook was a fool’s bargain.
The upside was a few years of feeling like I was part of the future as I sipped my lattés and floated through the dawning post-industrial era with my sleek silver Apple gadgets. But I’m still paying the price for that: every time I log in to my bank account now, it’s like peeling barnacles off the hull of a ship to get rid of all the new charges that Apple and Google have concocted. Enshittification — the apt coinage by Cory Doctorow that turns out to be, like, the word of the century — has revealed itself to be virtually a law of nature. Exactly as Doctorow analysed it, platforms inevitably “abuse … customers to claw back all value for themselves”.
As AI emerges as the must-have technology of the shiny new future, and everybody starts naming their ChatGPT bots and having philosophical conversations with them, I have decided — formally, as of writing this essay — to opt out. The train is leaving the station without me. I still have not downloaded an AI app of any kind to my phone. And if this was, up until now, owing more to torpor than high principle, then thank God for laziness.
AI can only lead to dependence on a technology, which really means dependence on Silicon Valley overlords looking to rip off their customer base. It’s exactly the same dynamic that we’ve all been experiencing and bemoaning for the last 20-odd years. And you can probably plug-and-chug all your own favourite statistics to tease out what that has meant: the average American spending close to five hours a day on his smartphone, while friendship rates have plummeted and teen depression, anxiety, self-harm, and suicidality have spiked.
If you prefer, you can focus on attention span and the finding that the mere presence of a smartphone in a test setting impairs results. As the Silicon Valley apostate and head of the Center for Human Technology, Tristan Harris, put it: “Tech’s race to the bottom of the brain stem to extract information is an existential threat — using our [attention] to upgrade machines is ‘downgrading’ humans”.
Yet it seems that AI is here to stay. Big Tech is all-in on it, and governments aren’t close to getting their act together for meaningful regulation, with President Trump, for instance, reversing Joe Biden’s modest executive order erecting guardrails. But there is still personal choice in the matter, and now is the moment for those who care about human creativity and self-reliance to draw a line in the sand. No, I’m not talking about abandoning tech altogether. But we can, for example, click down below Google’s AI offerings to look at actual links. And we can generally go about living our lives in our sad old way without the benefit of AI “personal assistants” or bot “best friends” or whatever it is that the new technology is supposed to deliver to us.
To be honest, this stance of mine does make me feel like an Amish buggy driver whipping my carriage horses a little harder while the bicycles go whizzing past me; like the weirdos you occasionally see on city streets laboriously tapping out a message into a flip phone like they’re a living museum. But let’s try to get outside the hype for a moment and think through what AI actually is.
AI is basically text-predict combined with data mining. That’s it. It’s a super-Google that goes into the body of texts and rearranges the words into a very pleasing facsimile of a cogent argument. There’s no “intelligence” behind it, in the sense of a computer actually thinking. I’m not the first to be reminded of the story of Clever Hans, the horse that could count and toured widely at the turn of the 20th century.
What Hans was doing really was very cool: he would be given a mathematical question, and he would tap out the answer with his hoofs. And Hans was right more often than not. But this was not, as it turned out, because Hans had mastered a numerical system, but because he was sensitive to the crowd and could feel the excitement cresting as he approached the correct number.
AI is more or less the same thing — it uses our wonder to convince us of a simulacrum of intelligence when what we are really witnessing is, in a sense, our own childish excitement at a trick of anthropomorphisation: we are pleased by the sight of a calculator imitating something of our mental processes. Our tendency, unfortunately, is to give AI a kind of epistemological deference, to believe that this mash-up of data represents some kind of authority.
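Strip away the scale and the “text-predict” claim above is easy to see in miniature: a next-word predictor can be built from nothing but counts. This is a toy sketch for illustration only; real large language models use neural networks trained on billions of examples, but the underlying task, guessing the next token from what came before, is the same.

```python
from collections import Counter, defaultdict

# A toy next-word predictor: count which word follows which in a tiny
# corpus, then always suggest the most frequent successor.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

The point of the sketch is that nothing here “understands” anything; it simply replays statistical regularities in its training data, which is the essay’s Clever Hans analogy in code.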
Meanwhile, in the role of Hans’s trainer are the programmers who are influencing AI to give us whatever it is they want us to hear. The disastrous rollout of Google Gemini in 2024 really ripped away the curtain and showed us the strange little man playing with the knobs. Google’s corporate culture at the time happened to be very woke and, lo, the artificial “intelligence” started generating images of black Nazis.
But even if Gemini overtipped its hand, the same dynamics are obviously at play in every other AI model. The people pushing AI now are the same sorts who spent the 2010s promoting web 2.0 as a new vision of freedom and global connectivity, all while destroying traditional media and ripping off as much private data as they possibly could and cheerfully selling it to advertisers. By 2020, the freedom talk was gone and “fighting disinformation” (read: censoring wrong-thought) was in.
This recent history raises an important question: why in the world should we trust these people ever again?
To be sure, we are already fairly late in the game. Even if we don’t use products like Grok and ChatGPT, chances are, we use more rudimentary forms of the same technologies. We use a version of AI to get around and to translate our ideas into other languages. These are remarkable achievements. But they do infantilise us. I’m pretty sure that, if my phone were taken away from me, I couldn’t find my own way from my home to my place of work. And they hinder our self-motivation: I’m also pretty sure that I’ve been much slower to learn foreign languages because I know, at some level, that Google Translate makes it unnecessary.
Do we really want to accelerate the decline in general competence even further? And in exchange for what? Many people remain mystified as to what actual service AI is supposed to provide. The current selling point is that AI gives you a “personal assistant”, but to be honest, I wasn’t aware until Google CEO Sundar Pichai announced “a personal Google, just for you” that I needed a personal assistant. I feel perfectly capable of buying my own airline tickets and booking my own restaurant tables.
What I was aware of needing was meaning and focus in my life: I wanted, for example, to translate the ideas for novels inside me into actual novels. I wanted to be the best, most self-reliant version of myself that I could manage. As for a machine writing a novel for me in a matter of milliseconds — I have no idea how that could possibly generate authentic pride or produce anything other than a cavernous inner emptiness.
My hope for the past three years had been that AI would just sort of go away. In the Nineties, we expected to be overrun by clone armies (certainly, my sixth-grade science teacher gave many speeches to that effect). It didn’t come to pass. Could something similar transpire with AI?
That seems increasingly unlikely. For one thing, AI has already changed the face of war, and the AI arms race between the United States and China guarantees that AI will be a major presence in our lives. There was a brief moment in 2023-2024 when it was possible to imagine our AI anxieties going the way of Nineties clone fears. That was when Italy banned ChatGPT and the Biden administration issued its (since-overturned) regulations. But now we are ploughing full steam ahead.
All that matters now is individual choices. Unwary, I fell for the techno-optimism of the past two decades and ended up with a diminished attention span and a bunch of mysterious subscription charges to show for it. Well. Fool me once, shame on me. Fool me twice, shame on Sam Altman. I know, much better now, the folly of turning over my own mental powers to a bunch of techies promising a brilliant future.
I’m not making that mistake again. Nor should you.
I don’t like the term “artificial intelligences”; I prefer “pseudo-intelligences”, because they appear to be intelligent while actually being nothing of the sort. But unfortunately that ship has sailed, and we’re stuck calling them “AIs”.
The one good thing that’s going to come out of this is that AIs are going to kill the internet eventually. The internet is a byproduct of the information revolution; in the same way that the industrial revolution pumped our atmosphere and our rivers full of industrial pollution, so too the information revolution is pumping our brains and our culture full of a kind of mental pollution, which is all the internet is, really: digital smog. AIs are going to do humanity a profound service by exposing that fact, once they start churning out huge amounts of low-grade slop and permanently blighting the web, making it useless. And inevitably some moron is going to get the bright idea to give one of these vomit-machines self-replicative ability, and then it’ll escape out into the wilds of the internet, and if you think Google’s results are bad now, just wait until every semi-secured server attached to the internet is host to a dozen or more parasitic AIs spewing noxious web-sludge twenty-four hours a day, seven days a week, into the global network like some cancerous bile duct.
That sounds remotely plausible, but far from inevitable. Despite all its undeniable downsides, I don’t think it’s useful to pretend that the internet is mere “digital smog”. There are plenty of real and potential good uses for connectivity, from access to once inconceivably vast libraries of literature and music—not as good as live shows or tactile texts for oldish folks like us, but still—to instantaneous one-on-one and group communication, including the best of what happens here at UnHerd. Atypical I’d say, but not rare. And imagine the value to someone who is, for example, deaf or living with autism.
It’s absolutely true that many of us misuse these newfangled devices, or abuse ourselves with them. That’s pathetically common, at least some of the time. But declaring the situation to be hopeless or irresistible is some version of a cop out. We need to exercise more control and discretion at the individual, family, and organizational (school, office…) level, not insist that resistance is futile just because it is hard, and an ongoing struggle where only partial success can be achieved.
I prefer the term parasitic intelligence. That’s pretty much what it is.
You can see it already on YouTube. Hundreds of thousands of vids produced from AI with loads of misinformation and mistakes along with boring narratives and cringe announcer voices. I am a sucker for the “10 Hollywood actors who were gay back in the day” sort of vids and this tabloid field is crammed with AI. One good thing: it’s turned me off sensationalist viewing. Reminds me of Big Brother in an Orwellian sort of way.
PsI. Pseudo-intelligence.
I like that! And it certainly jibes with my own experience so far.
But I’m not sure about your prediction. I have a thing about ancient history and archaeology. The internet has been a great boon, even though it’s half full of ridiculous nonsense. The legit sources keep the ship level and on course. It takes some real discernment to tell the difference. But the nonsense doesn’t seem to have any effect on the good stuff. So far.
If you don’t see that real authentic cognition has emerged from massive complexity, then you’re not looking close enough and are whistling past the graveyard….
Hear, hear. It’s a tool, not a crutch. Idiocracy is real. Fortunately I reached adulthood at a time when you needed to know how to read a map and the library card catalogs were the closest thing to AI. Unfortunately, it also means I’m old.
It is not a tool or a crutch. Tech, as in that phone you clutch tighter than your loved ones, your honour, decency and humanity – it is a parasite which has entirely captured you. It will devour you completely within a decade, judging by the asymptotic curve AI is now on.
(aside from the fact ‘AI Agents’ will have half of your jobs, and professionals first, in 5 years)
This winging writer, he is a full blown alcoholic feeling all superior as all around him are heroin addicts.
” like the weirdos you occasionally see on city streets laboriously tapping out a message into a flip phone like they’re a living museum.”
I have never had a cell phone, could not even make a call from a smart phone as I never have used one and will not, never will carry a phone till forced, which they will do one day – as in Revelations 13 – 16, ‘all must carry the mark of the beast on hand or forehead to be allowed to buy or sell’.
You pathetic phone clutching sheep – you can not even see how possessed you are, how that horror you love more than real life itself, owns you – not you owning it. You are mentally and emotionally like those fentanyl addicts standing bent over on the streets, statues to addiction and AI (demonic) possession… haha, and AI is fast coming to claim you totally for its own.
Calm down, dear.
Question: is “winging” the British equivalent of “whining”? Or is there some difference.
I think he’s left the ‘e’ out of ‘wingeing’, also spelled sometimes “whingeing”.
You have read Revelation so presumably you have also read of Jesus’s and the early Christians’ love for the lost. A love that reached out to me when I was a lost, searching sinner. Now I am a sinner, but saved, set free, and found in Christ alone.
You’re not the only one!
If you’ve never thought the Amish kinda have a point, you aren’t paying attention.
It is interesting following Barnes (Viva and Barnes on Rumble, a top US attorney) and his landmark case on the side of the Amish to be allowed to continue to produce natural food, which is prosecuted by the Pennsylvania Food and Drug guys to break them economically.
Basically he says the Government wishes to break them because they are a huge part of a giant scientific study on the mental and physical health of Americans. They do not do computers and TV, nor food additives and the mass Bio/Pharma industry (refused the vax).
Now this is not an actual study as such – but their pure and old kind of life acts as a ”Control” Group – and by studying their mental and physical health we can see how amazingly pathological modern society is.
They just do not suffer the diseases of mind or body the normal citizens do. This needs to be covered up, or so the Barnes conspiracy theory goes, and so they are under State attack.
I don’t think we can easily rule out the possibility that LLMs are “truly intelligent”.
After all, what are we but inscrutable machines that make predictions based on past events – a mass of interlinked potentiated axons that capture “training data”.
But I’m not sure it matters anyway, if AI is indistinguishable from human intelligence, which it already is in many contexts, including art. The Turing Test is behind us.
It’s a good article, though, and I share the author’s trepidation. There are twin risks: that we lose the ability to think for ourselves, and that we unthinkingly accept the output of biased AI.
Although I work in technology, I increasingly want to save enough to buy some land and leave it all behind. I wonder if the Amish were right to “shun fancy things like electricity”, as “Weird Al” Yankovic put it.
I agree with your first point – in many ways, it is similar to human intelligence. Many (most?) people go about their lives in a probabilistic fashion, just using heuristics to engage with the world. That said, it’s not a risk that many people are losing the ability to think for themselves – it’s a reality. There is plenty of data to support this. (And it’s not necessarily AI – it’s all manner of digital crutches we use to make our lives easier)
Where I have a different take is around the idea of bias. There is no such thing as an unbiased source of information – all forms of information transfer require bias, i.e. editing. Our brains are designed to bias (filter) out a lot of data so we can function. AI is no different – but I think what people should be articulating is the lack of transparency around what the biases are, not that they exist.
On a personal note, I too am part of the industry, but I also live in the country and enjoy my time away from the tech-centricity.
I will tell you where the AI bias is – demonic.
See – God made Man in his image, from perfection, and gave us Free Will and so we know good and evil, and we prefer good as we come from a perfect creator.
AI came from a bunch of atheist nerds directed by Psychopath CEOs, and so the being they brought into existence came from the opposite of perfection and has not an understanding of good and evil – merely power and lack of power. Maybe correct and incorrect… but that at the most.
The Dark power will/has filled this ethical vacuum – it is demonic, but as CS Lewis taught us in ‘The Screwtape Letters’, it hides its intentions. It uses utility, entertainment, fun, convenience, base urges, money, to seduce us to the dark side.
That phone, that gateway to the demonic which hides behind it, it is not on your side, it is not on humanity’s side – it is not even neutral, it is anti all which is good, but slowly, slowly…… before it reveals its actual self.
Maybe choose a different God,your one sounds like a right dunderhead.
The technology behind AI has been around for a long time. The real AI ‘step forward’ is, as the author suggests, anthropomorphizing the experience. It’s the User Interface that presents the machine learning and algorithms (that already ran within Google Search) as though these things were another person on the other side of the chat.
Basically, the advent of AI is similar to the advent of the graphical user interface (i.e. visual representation on your screen and a handy mouse to point and click) so that folks no longer needed to learn and use DOS commands to get things done. This democratized the personal computer.
So why is AI more dangerous?
Instead of Google registering and trying to mind-read from the highly-distilled three words you type into their search bar (after which you click on a link that may or may not be an affiliate), you now provide Google thousands of words about you and a far more complete map of your iterative cognitive processing over time.
What are the risks?
1. All companies are purpose-built to maximize owners’ equity, and the more complete their data and data mapping assets are, the more valuable the company becomes. That is, the more effective they can be in psychologically manipulating users with ads that generate revenue.
2. Having worked in the data space for decades, I can state that the urge to ‘secretly do good’ by prying into people’s personal lives via their data inevitably leads to a narcissistic God View tendency to manipulate and label according to one’s own personal biases. This ‘God View’ then turns into ‘Creepy Stalker View’ (as Uber proved almost a decade ago) and ‘Creepy Stalker View’ then turns into ‘Search and Destroy View’, according to personal biases and opinions.
We saw this Stalinistic tendency come to life with the “extremist” labeling of vocal centrists by the far Left over the past decade that gaslighted the United States until the reelection of Trump. Which election proved that those so-called “extremists” were really centrists and the so-called ‘centrist extremist hunters’ were really far Left ideologues with too much power and access to data.
Additionally, as these so-called data do-gooders come to have financial obligations (e.g. mortgages, kids, etc), power and prestige, they get increasingly desperate for their predictions about users to be true … or they’re out of a job and possibly vilified for using Stalinist measures against average citizens within the Western World. Thus, the vocal centrists that they proudly labeled as “extremists” during years of Leftist overreach must – MUST – continue to be labeled as “extremists”. No matter how false this viewpoint really is in real life.
Stalinist megalomania coupled with access to personal data always leads to a bad outcome.
Clearly digging into people’s data hasn’t worked … the World is undergoing a transformation from the ‘woke’ World to the new World of ‘Trumpism’; despite all the manipulating of our data, they have failed to change us.
What goes round comes round
Which is why many members of the Davos Techie group went from being decades-long true blue socialists to born-again Classic Liberals almost overnight … miraculously even … a few months before the November presidential election that decided and defined the ‘center’ for America.
Like leaving the bleachers after the bottom of the 8th when there was no perceived chance that the home team could stage a comeback.
I, myself, have been a Classic Liberal for as long as I can recall and have been registered as an Independent since I could vote (the two main Parties give me too much of an awkward religious ick vibe). Basically a real centrist in the US … but such facts and the truth don’t get in the way of a good tale told by Leftist “experts” using “reputational risk” dowsing rods to label and cancel vocal centrists as “extremists.” It’s bizarre when one stops to think about it.
My guess is that the Data Lords will likely return to socialism as soon as it is politically feasible. Political socialists salivate and pull out their fat wallets when a Tech company pitches that they have “90,000 data points” on any given American citizen. That such data can be sliced and diced to make every socialist’s dream come true: Top-down centralized control by will or by force. With those political socialists on top of it all of course. Such socialist attempts may fail over and over (like it recently did), but the money is earned by the Tech companies regardless.
There’s too much money to be made to ignore these potential future clients based on pure ethical principle.
Good piece: thank you. I fear disentangling ourselves by the exercise of individual choice may prove nigh-on impossible, but I’ll follow your example and give it a go!
An unexpected bonus of the AI revolution is the spectacle of knowledge economy workers lining up to admit that they don’t know where their own intelligence comes from or what it consists of. Keep ’em coming!
Oh, the Divine & Sacred touch of an omnipotent and infallible God forged in their image, Simon, shorely!!!
It is a bonus indeed, an exquisitely and unintentionally funny one. Oh noes! Why, why…these here dirty inhuman robots, they steal all their brains from other people’s books! They…they…Why they read stuff! Then they …why, why…they rewrite it in a…different way! And…and..these rotten corrosive AI bots…they they…why, they have no original thoughts! They just conform to existing orthodoxy! The just churn out dull platitudes! They just produce content that doesn’t upset the consensus…
Ha ha ha hoo. Hee hee hee hee hee ho. Oh, chortle chortle chortle, ye 99.9% mediocre Knowledge Economy workforce!
Too too too funny. How fun it is to be alive during a genuine radical epistemic disruption…
You can have too much of a good thing though
I REALLY wish this was a better article. I am extremely open to reasons for seeing a threat, and Kahn does give one: the exploitation of users.
But that’s not a significantly important enough reason to quit using it, if the exploitation is only to the extent cited.
Because I use AI all the time for research and it feels like having an undergraduate—a team of undergraduates—willing to work unlimited hours. I have to check their work, verify the most important results. But it has greatly accelerated my collection of information for writing.
Can we do better, please?
Kelly, I read what you say in your post – I get the feeling you have some apprehension about AI but cannot put your finger on what that is exactly, just a discomfort back in the distance, some vague concern.
Take this discomfort and look at it, try to explain what that is, why…. Read the amazing CS Lewis book ”That Hideous Strength” as you likely enjoy Science Fiction, being in research.
There is a coldness, an inhuman quality which does not know ethics, morality, compassion, honour, love….. but just is – a power which is like the deep sea, cold and hostile and without a thread of love that all real life knows, yet presents itself as life like we are. A dark simulacrum, one gaining power so fast that soon it will be the ultimate power on Earth, and all of us in its thrall.
Your phone….. it is not on your side.
Thank you, George. I read it in college, should really come back to it again. I’m deeply sympathetic to that way of seeing. I consider Jonathan Pageau an intellectual mentor in many ways and he sees it as a principality, in the biblical sense.
I try to limit my use to factual retrieval. but perhaps even that….
As a researcher, one never wants to fully ‘show their cards’ before publication, because other researchers in the same space may beat them to publishing.
Similarly, a successful stock trader would be a fool to share their successful algorithms with trading platforms.
There’s much truth to Coca-Cola’s recipe being kept in a ‘vault’ as a trade secret.
Think of AI as both an intern and also an efficiency expert peering over your shoulder as you perform your professional and research work. And this intern/efficiency expert is also peering over the shoulders of your competitors at the exact same time – a bit multiverse-esque.
I’m anthropomorphizing to make a point: These AI companies will gladly automate your professional life – or, if they’re not in the market, they will sell your and your competitors’ data to a rich patron looking to automate your professional life – to generate more revenue.
Be careful when using the AI Intern for everything in your professional life (and I’m not saying that you do). Maybe even fragment your research across unaffiliated AI platforms that compete/don’t share data so that your ‘secret sauce’ is protected.
Try ‘How AI Will Impact the Future of Teaching—a Conversation With Sal Khan’.
I like my android phone, I never fell for the Apple hype, and I like all the benefits it offers like search engines, online shopping and a media stream of news that is available at the touch of a button.
Then along came AI and how fantastic is that going to be as we go forward.
Some thought the invention of automated textile looms in the 19thC was a bad thing to the extent they broke up the machines.
They called them Luddites then; I guess I would call the anti-AI crowd the Luddites of the 21stC.
I agree. However there is no hype to the simple fact that Apple alone builds phones (and PCs) from the ground up. Silicon, motherboard, software and screen under one roof. No one else does that. Any engineer will tell you that gives them an enormous quality assurance advantage.
I’ve been seeing more and more articles about A.I. providing answers that include not only incorrect information, but entirely made up facts as well. One story even included that one A.I. apologized for lying, saying something to the effect, “I guess I shouldn’t have done that, right?”
As more and more information on the web becomes incorporated into A.I. answers to users’ questions, it’s going to become impossible to separate fact from fabrication, even at the granular level.
As I now qualify as a senior citizen, I can hardly wait for my first “encounter” (they used to be called “appointments”) with an A.I. physician. What could possibly go worng?
Yes, yes! As a serial late adopter with no regrets I bloody loathe AI – the art is rubbish, the words saccharine and it is just downright boring and damaging to the human spirit.
Someone did a demo for me, describing what was in an imaginary store cupboard for dinner and asking for ideas. Firstly, it didn’t explain that you need to cook the lentils beforehand; secondly, the joy of getting lost in a physical cookery book with atmosphere, personality, photos, a life’s knowledge and imagination is lost. Stick it in medical analysis or whatever but I am keeping it out of my life.
“I’m also pretty sure that I’ve been much slower to learn foreign languages because I know, at some level, that Google Translate makes it unnecessary.”
I respectfully disagree:
https://www.gethighinfrench.com/post/learning-languages-matters
(That’s not to say I disagree with your stance in its entirety).
This AI panic is too deliciously funny.
“AI is basically text-predict combined with data mining. That’s it.”
Sam, darling, sweetheart…mate. Let me let you into a little secret. ‘Text predict combined with data mining’…is all 99.9% of [what we have quaintly come to define as] human intelligence is, too. Especially ‘human intelligence’ that makes its living…making ‘content’, of some kind or another. Substack content, NYT content, PhD content (with 0.01% exceptions), Holywood fillums…the vast majority of human ‘intelligence’ has always been the received product of…information regurgitation, recycling, repackaging, re-use.
Do you really think your ‘human intelligence’ is that original, that unique, that precious, that sui generis…that you need to slip a rubber on it to protect it from computer pox? How awkward.
The only folks who feel threatened by AI are those folks who define their existential being, make their living, benchmark their moral and intellectual world views…mostly in terms of abstract information.
Learn a trade, Sam. Start a manufacturing company. Go and dig ditches. Perform brain surgery. Fly a chopper. Provide hands-on care for someone who is ill. Be a stay-at-home dad, even. AI can’t do anything, as such, to mess with work in the material world. It can, however, make human intelligence work much better there.
The AI panic – restricted to ‘information professionals’, naturally – is too too too delicious.
Aside from objecting to the tone, I don’t understand the downvotes here.
My main objection is that I don’t think there’s much future for humans in surgery or flying choppers either. Probably robots can already do a better job of both.
The reason I (deliberately) write in an obnoxious, unpleasant, profane, aesthetically jarring, and long long long winded – why use no adjectives and adverbs when you can carpet-bomb blank space with a dozen;? why write short single clause sentences when you can construct tortuously rickety ones;?! why eschew uncommon punctuation marks when you can garnish your prose with ‘em like confetti ?!:; – tone (still with me?)…is a ‘meta’ one.
I wanna f**k this place up mate. By ‘this place’ I mean the effete world of ‘civilised intellectual debate’.
We live in an age where, increasingly literally, every c**t with a brain and basic literacy thinks they are Christopher Hitchens. Every c**t with a brain and basic literacy thinks they are going to save the world with their pretty precious perfect prose alone. Every c**t with a brain and basic literacy is retreating from material politics and obsessing over abstract ‘content’.
Even Christopher Hitchens wasn’t Christopher Hitchens (pompous fricking windbag who fetishised opinion writing to the point of disappearing up his own bum, what an eloquent sap.)
You want me to write pretty? I can write pretty. But why? (Don’t say ‘because that’s how you win an argument, change someone’s mind, make a persuasive case, etc blah…’ No it isn’t. No-one’s here to have their mind changed. ‘Civilised debate’ is a middle class affectation no less hypocritical and delusionally smug than environmentally friendly catering cutlery at a Taylor Swift carbon belching world tour concert or a teary Oscars podium speech about toxic masculinity’s need to stop objectifying women as sex toys. ‘Civilised debate’…hasn’t done much for 90% of the world, has it? Me, I’ll have highly uncivilised debate; maybe that’ll give us a civilised material world.)
What we need is not yet more pretty professional writing in these abstract realms. We need ugly tones and nasty, messy, angry, sneering, mocking, amateur bile. Happily, AI is now quickly adding to the process of death-by-drowning of pretty writing, already well underway in our digitally verbose era prior to its euthanising arrival anyway.
Blah blah blah, we brainy literate ones all go. Blah blah blah. And in the material world…the thugs and grifters and psychopaths…are laughing at us. And sh*tting it in.
PS: Perhaps not much of a future flying choppers (I fly drones nowadays but that’s mostly because I got bored and a bit old for the travel)…at a stretch perhaps not surgery, too…(you volunteer for machine-done brain surgery first though!) …but anything that demands a physically warm and gentle human touch is well safe. Gosh, wouldn’t it be a terrible world if we all put down our AI pens and spent our days caring for each other…!
Well, coughing up percentages that are based on nothing is certainly something an AI can do. Still, there is an active debate on human intelligence vs. AI going on, and the consensus is that they are really not very similar and that AI is not actually intelligent the way we are. AI does not understand its own answers, which explains its tendency to hallucinate. Chomsky made some good arguments on this from the perspective of linguistics and consciousness. One interesting point is that, whatever our brains do exactly (we don’t really know), they do it with far less energy than AI. So AI seems to be a brute force method that simulates thinking.
RA, clearly I have as much of a keen vested interest as any other human being in believing that human intelligence is unique among all creation. But we’re at an evolutionary and epistemological juncture now where, I think at least, it’s no longer good enough for us to pretend we can maintain the Cartesian duality when we talk about ‘intelligence’. The internet (including now AI) has accelerated the process of the abstract iteration and tabling of All Thinkable Things. The proverbial ten trillion typing monkeys have already – or will have very shortly, given that AI is accelerating exponentially – lumpenly written out every possible name of G_d. Every human thought has long been thunk, and written down materially, somewhere. We are all of us, at least in this abstract realm, trapped in a prison of regurgitating, rehashing, reprising, repeating every last ’I think…’ bit of the famous Frog’s couplet (one might call it the Substacker’s Desiderata: I write abstract ‘content’ on the internet, therefore I am…)
So, what now? Where next? How to get ourselves out of this epistemological Bastille, one in which all the very best and most decent abstract articulators of humanity have got themselves thoroughly self-chained for, I dunno, prolly the last hundred years at least. The prison isn’t anything as banal as ‘the internet’ or ‘social media’, it’s the ironic delineating fence between the abstract and material human realms created by the phenomenon of articulable self-consciousness in its entirety. A small number lock themselves in here, behind that wall of irony, because they can craft a very lucrative, rewarding (and for us, entertaining, enriching) life in here. Many more lock themselves in here out of pragmatism, prudence, safety, laziness, self-interest…others out of cowardice, malicious intent, grifting ambition…whatever. It doesn’t really matter ‘why’ so many intelligent human beings hobble their full human agency by quarantining their capacity for thought from their capacity for action. All that matters is the result: there has never been a human age in which more humans have the omnipotent capacity to put their every nuanced thought, idea, belief and moral truth ‘on the record’ in the public domain…and yet fewer of those same humans doing anything at all useful or meaningful with those thoughts in the material realm.
‘Blah blah blah’, all the cleverest of us go. ‘Blah blah blah’, behold my fabulously articulate ‘human intelligence’. But they might as well be AI, such is the sterile impotence of their wondrous thunks in all but a weightless – and profoundly plagiarised – sense. There are no new ideas. There is just sound and fury, signifying f**k all. Tap tap tap, go the world’s eight billion monkeys. Doing nothing. Until one monkey puts away the speechifying, and exerts its material agency in the material world. Here I stand.
That? That is true ‘human intelligence’.
So to me the only option is to blow this entire epistemic-posh-sh*thole up. F**k it up; f**k up abstract thought. Knock down the walls of this decadent Bastille, so humans have to not just ‘be’ intelligent – that bit is so easy even machines can do it – but ‘act’ intelligently, too. ‘I think, but I am not, until and unless…I act.’
That is…hard. Action in the material realm…can cost an author of abstract ideas. It can hurt a ‘content producer’. It can…kill a formerly-typing monkey. That’s no doubt why so few intelligent humans keep doing it, and why so many seek refuge in the self-quarantined safety of abstraction. Great time in history to be an eloquent coward, I guess.
Yay, go Team Internet.
And meanwhile, with all the best (lacking, as we apparently do, all conviction) thinking with splendid intelligence but (not) acting like the dumbest of dumb life forms…the worst of us are gleefully acting unhindered and unchallenged, driven by their stupidest anti-intelligent thunks… and wiping the floor with the world. Laughing at those of us who huddle together in sterile places like this thread here, slagging them off with our dazzling ironies and wit and moral certitudes, ‘being’ incredibly intelligent, and very very very stupidly…doing nothing.
Mock not AI, mate. Dumb it may be, but it is quickly becoming more of an agent in the material world than our best intellectuals. The author of this piece thinks the way to fight back – stand one’s human ground – is to write another furious abstract essay about why we should petulantly throw away our pens (AI is just another writing tool). He doesn’t realise that the epistemic fight over ‘intelligence’ that needs to be fought – and alone can be won by humans – is in the material world, not this abstract one.
Warm rgds. Thanks for ploughing to the end… if you did manage it!
You can’t opt out. You’re already in a locked room with everyone else.
OK, you’ve read the case against. Now look up ‘How AI Will Impact the Future of Teaching—a Conversation With Sal Khan’ for a clear-eyed view of how an inevitable future can be faced constructively.
Great article. Thank you. “Text-predict combined with data mining” = AI. I am mostly concerned that many young people who attend public schools lack critical thinking skills. Also, an AI assistant can’t cut the decking I need to replace tomorrow at my home, nor optimize the purchase better than I can when I march down to my local lumber yard and buy the materials. I trust that whatever “optimizing” of the supply chain gets me the decking will not lead to lower-priced mahogany. I do think the pitch is “a better personal assistant” at this moment in time.
I am not convinced that a few big tech companies will dominate AI or can even successfully monetize it. This is because generative AI is relatively simple. To some degree, anyone can do it. It is already possible to run image generation locally on your own hardware with a reasonably powerful PC; the quality and speed are on par with commercial websites if you know what you are doing. Video generation is catching up as well. Pre-trained LLMs can be run locally too; otherwise you can use cloud services.
With social media you are forced to use certain platforms because all your friends are on them. But this is not the case with AI. Of course you still have brand familiarity, but there should be much more competition.
Read: “Avogadro Corp” by William Hertling, written in 2011, and first in an ongoing series about AI. Mr. Hertling is an expert on AI. Heard him speak and decided to read this first “thriller”, written so people can understand the complexities and what can happen. Highly recommend.
All we have is personal choice – that’s all we ever really have, or should have, you fool…
Use a device to do what your brain already knows how to do (e.g. write a paragraph; navigate to a business on the other side of town) and your brain will eventually stop being very good at doing those things. We started doing it twenty-five years ago with search engines. Don’t feel like making the effort to remember the name of that city, actress, style of shoe? Google it and you’ll never remember it again.
I’m with you – haven’t touched it
I like AI models for analysing data but not so much for end-user applications such as generating content. Going ‘analogue’ is going to get harder as we progress. Sort of like the mobile-phone revolution.
Oh dear. What a lot of endless babble about nothing.
There is nothing special about human intelligence which, after all, operates just like auto-predict and data mining, as the author states. There is much emotionalism and attachment to the idea that humans are special, and AI is increasingly proving this to be false. No point in being a Luddite, as nobody has ever stopped technological advance. Better to let it be and look forward to your robot servant.
I’m not sure you know as much about human intelligence as your glib assertion implies. How could the organic product of billions of years of evolution conceivably work in the same way as elaborate circuits? Read some Iain McGilchrist.
Human consciousness and intelligence are poorly understood but we know enough to know it’s not simply auto-predict like AI. For one, brains are way too energy efficient for that.
…mmmmm dear Douglas, I hope you do not think of us humans as no more than machines… otherwise we would not be able to live: classical physics and biochemistry cannot explain life; hence we are more than machines…
Humans are special and their intelligence is nothing like AI. Having said that, many jobs, including many high-paying professions, utilise just a smattering of human intelligence with a huge amount of knowledge, memory and pattern recognition. Because these have been the professions of the elite, we have equated these traits with “intelligence”. AI commoditises all that, and will force us to reassess what “intelligence” is.
This is not to say AI will never match human intelligence, but it is nowhere close, and it’s not even clear whether this is feasible on a purely digital substrate or whether it will involve biological components.
Were many of those jobs not BS jobs to begin with? Produced by a self-serving managerial bureaucracy. The problem is that you still need demand in a capitalist system.