
“The government knows AGI is coming”, the New York Times’s Ezra Klein tells us, “and we’re not prepared in part because it’s not clear what it would mean to prepare”. We’ve all heard of these prognostications by now. On one end of the spectrum are “Doomers” who warn that “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die”. On the other end are accelerationists who trust AI to solve problems beyond the reach of human intellects. Might a “cure for ageing” lie in biological patterns that are invisible to us, but discernible to a sufficiently advanced machine-learning algorithm? Might AI vastly outperform human civil servants in devising public policies and administering government services?
Elon Musk and many other AI-industry boosters certainly want us to think so. DOGE is relying on AI not only to identify supposed “waste” and “fraud” in government spending, but also to replace tens of thousands of federal employees and contractors. The assumption is that whatever services they provide can be performed more efficiently by a chatbot trained on government data. A large language model capable of outperforming human civil servants at any cognitive task would amount to something like AGI, an outcome that OpenAI’s Sam Altman has long insisted is reachable simply by feeding more data and computing power into such models.
The question for the rest of us is how to make rational choices in the face of such hype. The claims being made about AI’s potential are examples of speculative futurism, an increasingly lucrative and culturally influential form of prognostication that capitalises on what we, in How to Think About Progress, call the “horizon bias”: our cultural propensity to systematically overestimate the proximity of technologically driven outcomes. Although Altman’s promise of AGI, like Musk’s even older promises of self-driving vehicles, has repeatedly been postponed, our technology-obsessed society is primed to buy into such promises, and the least scrupulous among us are prepared to profit from our credulity.
If you are on a long march and you can see your destination off in the distance, it is natural to assume that your journey is almost at an end. But as Frodo’s experience in Mordor shows, the last mile can be far more difficult and roundabout than you expected. The horizon bias becomes particularly potent when we are presented with a seemingly clear sequence of steps from our present reality to some speculative future scenario. By telling ourselves exactly what it will take to get from Point A to Point B, we create a mental model of change that inevitably includes discrepancies with the world as it is.
Consider how easy it is to believe that the cure for cancer is imminent every time there is some new technological breakthrough. “Hey ChatGPT, what’s the cure for cancer?”, mused the Future Today Institute (recently rebranded as the Future Today Strategy Group), a corporate “advisory firm specializing in strategic foresight,” in a tweet last year. While politicians, scientists, and technologists have been promising “the cure” ever since Richard Nixon launched the War on Cancer in the Seventies, it has never materialised. Yet, we remain eager for the next story about an imminent cure because we have absorbed the modern mythology of ourselves as toolmaking masters over nature. For a society that has been to the moon and eradicated numerous other diseases, surely a solution to unregulated cell growth cannot be far off — right?
Yet even in an “AI renaissance” where machines can analyse oncology data in ways that humans never could, we will still be dealing with the messy complexities of human biology. Every human body is unique (as is every tumour). Moreover, if AI becomes capable of entertaining beliefs about its own capacities and future possibilities for itself, it, too, will be prone to horizon bias, stumbling over unexpected gaps between the real world and the simplified models that will guide its recommendations.
This is not to suggest that either the utopian or dystopian vision of AI is impossible. But it is to question the value of the speculative futurism industry that has come to dominate our collective expectations. As corporate consultants, professional futurists make good money catering to businesses’ fears of uncertainty by offering apparently scientific “strategic foresight” on any subject for which there is a paying subscriber. It is in their interest to present anticipations about the future in ways that seem closer to knowledge than mere opinion.
Look back three years to the Future Today Institute’s 2022 Tech Trends Report, for example, and you will find a bold prediction that “synthetic biology will make ageing a treatable pathology”. Yet since the report prudently avoids offering any timeline for when ageing will become a “treatable pathology”, it is difficult to test the claim’s validity, or even its precision. Nor is it easy to assess the organisation’s complete track record. When asked about its earlier publications, a spokesperson replies: “Unfortunately we no longer shelve our past reports. Have a nice day.”
Such drab commercialisation was a long time coming. Various “futures studies” theories and methodologies have been formalised, frameworks for assessing “foresight competency” have been introduced, and futurists have increasingly adopted a shared body of jargon. Thus, the mid-20th-century futurist Bertrand de Jouvenel graced us with the term “futurible”, meaning any “future state of affairs” whose realisation “from the present state of affairs is plausible and imaginable”. Most futurists would say that they “don’t make predictions”, and yet prediction obviously comes with the territory, especially when paying customers demand it. If you cannot create the impression that you are better than others at forecasting future probabilities, you have no competitive advantage.
The 20th-century bibliographer I.F. Clarke traces the roots of modern futurism as far back as the 13th century, when the mediaeval monk Roger Bacon foresaw that the deepening of scientific knowledge could lead eventually to self-propelled planes, trains, and automobiles — as indeed it did, though not nearly as soon as he expected. Such thinking was novel for the time, and it would remain in the cloisters for centuries, until the Enlightenment produced books like Sebastien Mercier’s 1771 utopian novel, L’An 2440 (The Year 2440).
Channelling his era’s faith in technology-driven progress, Mercier described a future of peace and social harmony, governed by philosopher-kings. In his envisioned 25th century, slavery has been abolished, the criminal justice system reformed, and medicine subjected to science-based rationality. But he also anticipated the territory of North America being returned to its original inhabitants, and he thought that Portugal might become a part of the United Kingdom. In his future, taxes, standing armies, and even coffee have all been abolished. Had he been a corporate consultant, it is unclear whether his clients would have been better prepared for various future scenarios than their competitors.
With numerous editions and translations appearing in the decades after its initial publication, Mercier’s work of speculative prognostication was a wild commercial success. From then on, each generation brought a new host of what Clarke calls “professional horizon-watchers”. Technological innovation had made predictions common, and though earlier practitioners’ techniques were nowhere close to as sophisticated as those used by futurists today, their basic method was the same: by extrapolating from the latest breakthroughs, they envisioned new realms of plausibility.
According to H.G. Wells, in his 1902 lecture “The Discovery of the Future”, “in absolute fact the future is just as fixed and determinate, just as settled and inevitable, just as possible a matter of knowledge as the past.” With the arrival of the kind of total wars that Wells had, to his credit, anticipated, projects to foresee the future were taken up in earnest. The upheavals of the first half of the 20th century created an urgent demand for technocratic planning, giving rise to “operations research” and, with it, the modern think tank (epitomised by the RAND Corporation).
In 1968, the Palo Alto-based Institute for the Future emerged as the first self-identified futurist institution of its kind. Then, in his 1970 bestseller Future Shock, Alvin Toffler offered a “broad new theory of adaptation” for an age of accelerating technological, social, political, and psychological change. Inspired by the better-known concept of culture shock (the disorientation one feels upon suddenly arriving in an alien social environment), Toffler coined his titular term to describe the psychological distress that comes with rapid, monumental change. One of the best ways to cope, he believed, was to adopt a more future-oriented perspective, so that we are not constantly caught off guard by each new society-altering trend or development.
In the half-century since Future Shock appeared, the widespread sense of constant, rapid change has only deepened. But rather than being shocked by it, we now regard acceleration as a central part of modern life. Everyone assumes that each passing year will bring faster, cheaper, sleeker, and more powerful technologies. Not a week goes by without headlines about new breakthroughs in AI, biomedical research, nuclear fusion, and other promising vistas of progress on the horizon.
This can cause real problems in practice. In Imaginable: How to See the Future Coming and Feel Ready for Anything — Even Things That Seem Impossible Today, Jane McGonigal of the Institute for the Future argues that everyone should train their minds to think more like a futurist. “The purpose of looking ten years ahead isn’t to see that everything will happen on that timeline”, she writes, “but there is ample evidence that almost anything could happen on that timeline.”
Insofar as it leads us to consider underappreciated or underestimated risks that may lie ahead, this is sound advice. And yet the same methods also encourage us to overestimate the likelihood of positive breakthroughs and possibilities. As McGonigal herself concedes, an ample body of research in psychology finds that “imagining a possible event in vivid, realistic detail convinces us that the event is more likely to actually happen”. The futurist methodology rests on a foundation of radical open-mindedness, even wilful gullibility.
According to “Dator’s Law” (coined by the futurist Jim Dator), a fundamental principle of today’s futurist methodology, “Any useful statement about the future should at first seem ridiculous.” McGonigal thus asks us to consider the statement: “The sun rises in the east and sets in the west every day.” This could cease to be technically true if humans travelled to Mars, where sunrises and sunsets wouldn’t happen “every day — at least, not by our standard definition of a ‘day’ on Earth”. As “evidence” of this possibility, she cites the fact that “there are plenty of space entrepreneurs trying to develop the technology to help humans settle on Mars as soon as possible”.
Yet, surely, claims made by entrepreneurs promising to send humans to Mars aren’t really evidence at all. Musk has been promising missions to the Red Planet for years, only to keep moving back the target date (from 2022 to 2024 to 2026 to 2028). He and others making similar commitments have a financial interest in creating the impression that exceedingly difficult feats are eminently plausible and thus investable. It is little wonder that the futurist discipline and the tech industry are so closely intertwined. Both are in the business of selling a specific vision of what lies ahead – of capitalising on the FOMO that afflicts everyone who didn’t buy Nvidia stock in 2022. Rarely do we pause to consider what elements of the picture are intended to be self-fulfilling prophecies, or what alternative possibilities are being left out entirely.
By now, the con should be obvious. If Altman truly believes that AGI will render market capitalism as we know it obsolete, as he recently mused, why does he care about the competitive challenge from DeepSeek? Why is OpenAI rushing out new reasoning models that expert observers suggest have not been “adequately tested”?
While a well-meaning educator like McGonigal wants us all to be “ready to believe that almost anything can be different in the future”, there are many others in Silicon Valley, dubious entrepreneurs and their fellow travellers in the corporate consultancy business among them, who stand to gain from having a public that is ready to believe anything. Speculative futurism — and our cultural obsession with its offerings — is a boon for those seeking more funding or support for glitzy projects like ending ageing, colonising Mars, or creating superintelligence. But every dollar invested in these questionably feasible pursuits is a dollar not going to support education, public health, and other more immediate “boring” needs.
This type of discussion hides two latent ideas. Firstly, that AI is ‘singular’ – it produces only one answer or viewpoint. And secondly, that if only we were intelligent enough to see it, there has to be one answer for everything. This leads to the dystopian viewpoint that AI will ‘know’ that answer and tell us what to do.
In practice, there is no single answer to everything. Everything is a trade-off of different interests. Every decision is a balance of likelihoods and estimates. Conflicts and horse-trading are inevitable, and the outcomes will be unpredictable, following paths like meltwater running through a snow field.
Secondly, by now we should realise that AI is ‘shaped’ and ‘tuned’ by the user. Change the prompt and AI will produce a different viewpoint – it isn’t definitive, merely useful. In the end we still have to make a judgement on the output from the AI, which we can accept, reject, or modify. And that output still needs testing in reality – it will be a smart guess, but that doesn’t make it true without ‘out of sample’ validation. It is not a God, but a tool whose value is determined by how we use it.
Further, it’s speculated that AI could produce ‘answers’ that we find unintelligible, at least in the short term.
Humans sometimes find ‘answers’ after absorbing certain inputs (information, experience, etc.) and pondering certain questions; the answer simply arrives. It can happen upon waking, for instance, having allowed the brain to do its work unconsciously during sleep. We don’t understand, or at least not fully, how a solution was arrived at, but we can recognise it as valid.
The ways in which AI arrives at potential answers may never be known, so we can’t test the logic, or whatever process was employed. We can only make a judgement on its validity through testing it against the natural world, and the answers to some really complex questions may be untestable.
The way AI arrives at answers WILL always be known, at least in its general principles, because we humans designed it. Intelligence does seem to be a different thing from consciousness and self-consciousness; and one important question not asked much is: do AIs need to be conscious to be properly intelligent?
A sensible analysis, Saul. But I think there is a confusion here between what different people mean by “AI”. A true Artificial General Intelligence would be sentient, self-aware and – potentially at least – capable of defining its own goals. Ones that might conflict with humanity’s best interests.
The “Artificial Intelligence” we talk about today seems rather to be a massively parallel Data Mining, Consolidation and Summarising Engine.
The former we should be very cautious about, the latter not nearly so much, apart from its impact on economies and human employment patterns, which could be benign or malignant and will probably be some mix of both.
Your critique of futurists raising funds for AI oversells the “con” and ignores history.
The Wright brothers and Apollo, both speculative ventures, birthed aviation and tech like solar panels.
AI’s hype funds real R&D—AlphaFold aids drug discovery. Verne’s tales inspired progress despite inaccuracy. Yes, hype misleads, but it drives labs and breakthroughs.
Stifling speculation halts progress. History proves futurism’s value outweighs its flaws.
Ok, silly article – but as he wants to talk of past and future inventions back in the old days, let’s take one of the greatest inventions of all time: the stirrup.
The Mongols invented it and Europe had never thought of this tech device – and so they swept in, in the 1200s. Being able to fight and shoot in the saddle at full speed, they could shoot going forwards, backwards, and sideways, and grapple with other mounted fighters. This enabled them to go from the Pacific to Hungary slaughtering and enslaving everyone – just a small number of peasant herdsmen.
That was a scary invention; now just think of AI… wow… we haven’t got a chance.
It is interesting that a lot of the visions of writers such as H.G. Wells actually materialized in the first half of the 20th century. However, it surprises me that very few people seem to notice that the predictions made by ‘futurists’ in the past 50 years were almost always overestimations. You could also say that our progress is not as fast as we thought, or that we might even have lost our way. There are scientists who have tried to quantify progress, like the physicist Ted Modis. His model – based on complexity – shows that progress follows an ‘S-curve’ and is only exponential temporarily (see the sketch after this comment). Progress slows down in the absence of paradigm shifts. Currently, with Moore’s law slowing, we see that his model might be correct.
I recently finished an attempt to write about these phenomena myself in the context of our contemporary society:
https://drsnyder.substack.com/p/ai-hysteria-in-an-age-of-stagnation
https://drsnyder.substack.com/p/part-i-on-declining-progress
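For readers wondering what an S-curve model like Modis’s amounts to formally, here is a minimal sketch using the logistic function, the standard form for such curves (the parameter names below are generic assumptions, not Modis’s own notation):

$$ S(t) = \frac{K}{1 + e^{-r(t - t_0)}} $$

For early times, $t \ll t_0$, the exponential dominates the denominator, so $S(t) \approx K\,e^{r(t - t_0)}$: growth that is indistinguishable from exponential at first, but which flattens towards the ceiling $K$ as $t$ grows. Mistaking the early stretch of an S-curve for a permanent exponential is precisely the error the comment above describes.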
People do not appreciate that today’s GPT, accessible right now from your browser, simulates synthetic intelligence so effectively that it already embodies many facets of AGI. While it’s not superintelligence, by definition, it’s smarter than you (unless your IQ is in the top one percent of the one percent), and its memory and breadth of knowledge is as wide and deep as almost all of mankind’s written and, increasingly, audio-visual work… You are kidding yourself if you don’t think the AI world has already dawned…
What do you mean by “smarter”, Matt? Even a state-of-the-art AI today is as dumb as a box of rocks without access to human-produced content. Or AI-produced content which is itself derived from human-created content.
Why is AI smarter than 99% of the human population “by definition”? It is faster and can crunch massively more data, blindingly quickly, but that has been true of computers since at least Alan Turing. What’s the difference qualitatively (as opposed to quantitatively)?
You and I would be as dumb as a box of rocks if we were not standing on the shoulders of accumulated human intelligence…
If anyone reading here does not know what an ‘AI agent’ is, and how in 3–5 years they will take most of the professional jobs* – they are asleep at the wheel, and the truck IS headed for a cliff edge…
*Doctor, Lawyer, Engineer, Marketing Executive, Accountant…
It really isn’t. If an LLM cannot accurately tell me how many R’s there are in “strawberry”, its IQ is pretty low. In fact, as it’s just guessing what the next character should be, I would say its IQ is 0.
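As an aside, the strawberry failure has a mundane mechanical explanation: these models read tokens, not letters. A minimal sketch of the mismatch, assuming the tiktoken library is installed (the exact chunking shown is illustrative and varies by encoding):

```python
# pip install tiktoken
import tiktoken

# Load one of OpenAI's published tokenizers.
enc = tiktoken.get_encoding("cl100k_base")

word = "strawberry"

# The model never sees individual letters; it sees multi-character chunks.
token_ids = enc.encode(word)
chunks = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]
print(chunks)           # e.g. ['str', 'awberry'] -- chunking differs across encodings

# Counting letters is trivial over characters, awkward over tokens.
print(word.count("r"))  # 3
```

Whether that mechanical quirk says anything about “IQ” is, of course, exactly what the replies below go on to argue about.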
Then you are doing yourself a disservice in not really experiencing what it is today and what it can do… Your indirect characterization of AI as a stochastic parrot is sadly misguided based on where AI is today (which is light years ahead of even three years ago)…
It really isn’t. It’s just pattern recognition. That will be useful, for sure, in several niche cases, but it’s not intelligent. A five-year-old child can observe a painting and proclaim “I am a tiger!” (an imaginative leap no computer can make). A computer has no knowledge of right or wrong, or indeed of any human concepts. ChatGPT is designed to give you an answer that a human will most likely believe based on the previous question. It will happily tell you black is white if you want.
So will a child, and so will I if you incentivise me to do so… Here – I say white is black and black is white… Also, based on giving GPT-4o human IQ tests, it has scored > 170 and counting… So what if it’s a simulation? Judge it by its output, not how it got there… So much whistling past the graveyard…
How it got there is absolutely the point. It can only get anywhere if we have human output to feed it with. And don’t rely on IQ tests to calibrate software, the whole concept is out of the marketing department.
AI agents are already writing about 80% of the code that used to be written by people. Could a five-year-old child do that?
I was generally with the author (basically, “beware of the modern-day snake oil salesman”) until the very final paragraph. It is fallacious to believe that economics works as a zero-sum game in that way. Of course, if a government spaffs away billions on dumb AI projects, that will affect its spending programme and could result in raised taxes – but that ain’t necessarily so, and most of the speculative capital will surely be privately sourced?
This is not to say that crazy futurism cannot do great harm. See Net Zero.
It was a silly ending: “Speculative futurism — and our cultural obsession with its offerings — is a boon for those seeking more funding or support for glitzy projects like ending ageing, colonising Mars, or creating superintelligence. But every dollar invested in these questionably feasible pursuits is a dollar not going to support education, public health, and other more immediate ‘boring’ needs.”
Those dollars supporting education and public health (the Covid vax and the rest – and that education is so bad that whole cities like Baltimore have not one student exceeding the minimum maths score, with 99% failing it) tend either to go to corruption or to some postmodern twisted stuff that sets things back.
Late 1800s – Laura Ingalls became a schoolteacher at 15 (Little House on the Prairie), actually licensed by the state, and every student of hers spelled perfectly and did flawless arithmetic – not to mention diagramming sentences, quoting great statesmen, and giving historical dates – basically for pennies a day spent on education. They also did not have books on ‘pan-genderism in sexual needs’ on their reading lists. Nor were half the kids on some ‘long-term disability’, obese, or on the autism scale.
Messrs. Watney and Agar want us to stop dreaming. I would rather live in a world where man reaches for the stars than one where he sullenly contemplates his navel. Watney and Agar are just another pair of intellectuals who don’t trust us commoners to recognize baloney. But I do, and there is a big heaping helping of it in this silly essay.
“Might AI vastly outperform human civil servants in devising public policies and administering government services?”
Er, my cat could do that given a bowl of milk and a stroke…
Also – it will not bite your hand
It’s looking obvious (to me anyway) that all the DOGE theatre and the DEI/woke-waste nonsense preceding it are and have been stages in a ‘problem, reaction, solution’ scenario, in which the ‘solution’ will end up being government control of the populace by AI/AGI.
Any investor who buys into a company that promises to do something in the future should consider the possibility that the projected timeline might be exaggerated or not achievable at all. Take everything that is said by promoters with a large bucket of salt.
I think this article raises good points that should serve as a reminder to investors to look beyond the hyperbole and avoid FOMO.
And why did he not mention Trends Journal, the magazine which “reports the future today”? They are always pretty much correct. (Gerald Celente – check him out on YouTube, if for nothing else, for a good dose of outrage and bad language, as the future, according to Gerald, is always all %*^££”^&ed up.)
With fear and trembling, I beg to suggest that the Sun really does rise in the east and set in the west. I even had to check TimeAndDate.com, worried that MCI had arrived early. But I’m in Australia – is it a northern hemisphere thing?
I think Mars rotates the same way as Earth as well, so this can’t even be blamed on a sub-editor.
Thank you for your reassurance.
What do you guys know about real things? Even water goes down a drain all wrong down there, so the sun probably does rise in the West and set in the East.
When it looks like the Sun is rising in the west, it’s probably a bushfire.
The real danger of AI, which is a near certainty, is that the majority will no longer have the ability to think for themselves. All written communication will be AI-driven, and the human mind will gradually lose the ability to write with creativity and originality.