We can’t stop AI — so let’s ride the tiger
Elon Musk wants to temporarily halt the training of AI systems more powerful than GPT-4
Just for once, an open letter that actually matters. This one is signed by some of the world’s top technologists including Elon Musk (Tesla, SpaceX, Twitter etc.) and Steve Wozniak (co-founder of Apple).
These are people who know what they’re talking about — so when they say they’re worried about artificial intelligence (AI) we should listen. Clearly spooked by the rapid progress made by AI language models like GPT-3 and now GPT-4, they’re calling on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4”.
Like William F. Buckley’s definition of a conservative, the signatories find themselves “standing athwart history, yelling ‘Stop!’”. However, that’s just the problem. Even if we do stop in the West there’s no guarantee that anyone else will — least of all the Chinese. To call for a pause in AI development now would be like President Roosevelt halting the development of the atom bomb in 1944 and expecting Adolf Hitler and Joseph Stalin to do the same.
The signatories of the letter don’t pretend that AI can be uninvented. Nor are they saying that we should just sit on our hands and hope for the best. Rather they want the pause to be used by “AI labs and independent experts” to “implement a set of shared safety protocols” to make “today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal”.
And yet it’s not just today’s technology we’ve got to worry about but tomorrow’s, too. If the equivalents to GPT-5 through to GPT-infinity are developed in China instead of the West, then we haven’t got a hope of understanding the future iterations of AI, let alone managing the risks. We therefore have no choice but to carry on riding the tiger; and though that’s not a comfortable position to be in, it’s better than getting off the beast and asking it not to eat you.
Western governments must develop the in-house expertise capable of regulating this rapidly evolving technology. For instance, they need to do a whole lot better than Michelle Donelan, the UK’s Secretary of State for Science, Innovation and Technology, who this week told readers of the Sun that “AI is not something we should fear”.
How wrong can you be? We absolutely should fear this technology — even if there’s no avoiding it. Believe it or not, Britain is the world’s third most advanced nation when it comes to AI. We’re a long way behind the US and China, of course — but without the disadvantages of American gridlock, Chinese dictatorship or EU incoherence we could lead the policy response to something that will change the world.
As a matter of urgency, Downing Street needs to put the most capable ministers and officials in charge of this challenge — and to provide them with the authority to recruit far beyond Westminster and Whitehall.
Ultimately, it’s not a choice between developing and controlling AI. For the good of humanity, the two must go hand-in-hand. Let them do so in this country.
Riding the Tiger, sane or not, is the only approach left.
I’m a coder and having started using both GPT-3 and GPT-4 in the context of my work, I can tell you it’s a gamechanger – it is going to improve my personal productivity threefold. I cannot tell you if it will replace me entirely in a few years, but I can see exactly the dynamic playing out. Let’s make a comparison with black cab drivers, ‘the knowledge’, and Uber drivers. No Uber driver has ‘the knowledge’, but they do have GPS and Google Maps. Technology like this creates a type of deskilling at one level, because much less cognition is needed to do the job. But the black cab driver hasn’t become any stupider – the ability is intact but has to be redirected, perhaps by keeping up a stream of free entertainment for the customers of outrageous stories and opinions. Perhaps they can learn the guitar too, and some dance steps.
People like me can easily survive, and in fact thrive even more, for now, by moving upstream, up the cognitive foodchain – the technologies will allow me to do much more complicated things without needing a team of programmers.
But for how long?
The technology is particularly suited to the needs of coders – so there’s a tendency for us to evangelise it perhaps more than we should. After all, we’re looking for the cliché; the banal solution is exactly what we need. We don’t require iconoclasm or originality, just dependable code. And since, 99% of the time, we’re doing things that have been done thousands of times before, that’s what we get. But I don’t think that’s what creative writers, artists or musicians are looking for at all.
In other disciplines – the legal profession, medicine etc – all AI will do is make life harder for mediocre hacks. The lateral thinkers are not threatened at all.
You are absolutely correct that lateral thinkers are not threatened at all (yet) – in fact the reverse – but it is instantly obvious there are a whole bunch of subtleties here, which are going to churn the entire world of cognitive work, very very fast indeed. If you haven’t already tried out Codex, I urge you to do so and you will begin to see what I am talking about. I could start discussing this stuff at length here, but perhaps better might be a more extended debate btl under the next major article – I cannot imagine a whole slew of them aren’t due soon (and if they are not, that would be incredibly remiss of UnHerd, which I don’t think they will be).
The part you’re missing is what happens when lateral thinkers really start to use AI? For good and bad.
How many lateral thinkers do you think exist in most professions?
Let’s say 5%.
So the problem is not with them but with the other 95% of, let’s say, legal and medical professionals.
I recall reading that AI is better at analysing breast cancer test results than all experts apart from a few dozen in the world (who trained the system).
”People like me can easily survive, and in fact thrive even more,”
Till ChatGPT is used to hack all the banks and every account reads a 0 balance. Then it is not so easy to survive… time to be hoarding gold and silver, boys….
I’ve never been one for doomsday but as the years go by, things just get scarier and scarier. The genie might not be able to be put back in the bottle, but I suspect that future generations might just move away from tech because it takes so much and gives so little to the average person. The plethora of little conveniences has just made us lazy, not better.
Or AI launches nuclear war by accident.
A vision is conjured in my head, of Smaug, sitting atop a pile of gold….
Anyway… assuming you mean the physical stuff rather than pieces of paper held at a broker, you’d better also get yourself a moated castle, and a small private army, to protect the stuff…
I reckon you have 5, maybe 10 years, to make enough money for yourself to provide reliable life-long security. Good luck.
And better invest it in real estate and precious metals. I don’t think life-long security exists anymore.
That’s my approach too. After realising that translation (what I’ve been doing for the past 8 years) is being eaten up, leaving me with the prospect of a lifetime of post-editing (no thanks), I’m swimming upstream.
Currently moving into SEO – whether this will be outdated in several years? Who knows? But no one knows a lot at this point and the key is to just look at the current situation, make yourself a plan and act accordingly…while always being ready to switch course. The mindset to have is a balance of being “in the moment” and finding meaning and purpose in what you are doing…while keeping an eye on what changes are going on so you can figure out your next move.
What I think right now is that this technology is going to let me be far more productive: I can offer more services as a one-woman SEO show than I could have done without AI taking some of the burden.
What AI can’t do though is write compellingly. If it’s pure info you’re after, then it’s fine. But writing anything with a nice flow and which is emotionally moving? Nope. You need humans for that. Storytellers will still be in demand.
I think there will still be plenty of business for translators given AI’s inability to catch nuance in translations. They’ve come a long way but the hard work is the last 20%, and that requires an ear that no AI has come close to.
Think of it this way– if you asked an AI system to translate an Albert Camus book, what would the final product be?
Agreed – this is the only practical approach we can take – assess what is coming down the track, and get into the mindset of how to stay one step ahead of automation.
GPT-4?… wow! whats its 0-60 time? is it a bike or a car?
In my delboy accent: “It’s me new secondhand Capri Ghia”
” depart and self procreate you sample of as yet unidentified mammal stool”
Are you signaling your virtue? What does being black have to do with driving a damn car?
Eh? A world gone gaga, where ‘black cab’ drivers, is instantly interpreted as ,’black’ cab drivers.
Well, Italy said no…
So, I understand, has Afghanistan….
If artificial sweeteners, artificial preservatives and artificial flavors are not good for us, why is artificial intelligence good for us? I’m not a Luddite, but if what social media has wrought on society is any indication, I shudder to think what AI is capable of in the wrong hands.
It needn’t be in the wrong hands, it’s the success of it that ought to concern society. The power of these tools has the potential to alter human dynamics in many layers, and it has been unleashed without any forethought or consideration to these impacts, that could for example make thousands of people redundant in one fell swoop.
And my response is, how possibly could it have been different? As in, what possible type of societal state could have led to a different trajectory than the one we have got when these technologies reached the tipping point?
A Christian one.
AI is right out of Revelation 13:11-17
”11 And I beheld another beast coming up out of the earth; and he had two horns like a lamb, and he spake as a dragon.
12 And he exerciseth all the power of the first beast before him, and causeth the earth and them which dwell therein to worship the first beast, whose deadly wound was healed.
13 And he doeth great wonders, so that he maketh fire come down from heaven on the earth in the sight of men,
14 And deceiveth them that dwell on the earth by the means of those miracles which he had power to do in the sight of the beast; saying to them that dwell on the earth, that they should make an image to the beast, which had the wound by a sword, and did live.
15 And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed.
16 And he causeth all, both small and great, rich and poor, free and bond, to receive a mark in their right hand, or in their foreheads:
17 And that no man might buy or sell, save he that had the mark, or the name of the beast, or the number of his name.
18 Here is wisdom. Let him that hath understanding count the number of the beast: for it is the number of a man; and his number is Six hundred threescore and six.”
So this world is *literally* just one dot in an ecosystem which incorporates the Taliban, and the Spanish Inquisition, and ISIS, and the gods of the Greek and Roman mythologies, and various similar and equally ludicrous gods of the Hindu pantheon, and so on. You are not really offering that specific dot, even if you think you are… what you are offering is a buy-in into the entire ecosystem. The difference between the world you are offering and, say, the Taliban is just one of degree, not type. It’s an outlook which says: “this is the reality I like, what’s your problem?”.
It’s up to everyone to decide for themselves if this is the world they want for themselves, but it’s not something for me.
“It’s my reality, and I’ll cry if I want to. You would cry too if it happened to you”.
Haha Prash, I quoted the King James Bible and it is
Awaiting for Approval
– the AI in the algorithm wants to keep God out of the discussion – or Unherd does, haha, the new day is coming, and it looks bad….
No, it’s because you quoted 13:11-17 and then sneaked in an 18 which the AI had to check before releasing to the great unwashed. 😉
Like most of IT-development over the last 20-25 years since universal connectivity was established this will be in both the right hands and wrong hands. It’s not so much a question of AI taking over, which it may well do, it’s more an issue of continuing to dumb down the world’s population through manipulation and removal of human initiative. Whilst I’m amazed at the advances and creation of new services and information/knowledge spread which IT and Internet has brought, I cannot agree more on the social media aspects and the cancer and mental submission it’s spreading in societies and mindsets of the populations. AI will just multiply the effects and the manipulation through algorithms, dictated by random power brokers, and these effects will not be able to be controlled since they exist and expand largely outside of the control of elected governments. The exceptions will be dictatorships with advanced technologies, more so China than Russia. This is coming from a still practising IT-professional with 50 years under my belt.
This was both fascinating and alarming:
AI is a teenager who has read too many tweets. If it can successfully mimic an article in the Guardian or an academic paper then all that tells you is that the Guardian isn’t worth reading and that the output of most academics is just utterly predictable boilerplate.
Let’s not get too carried away here, eh?
Carried away?! The issue of AI-generated academic essays is of minor importance compared with what these systems are increasingly capable of.
AI systems have created successful chess strategies that even grandmasters admit they would not have thought of. There are AI systems which will write complex software code in a fraction of the time it would take a human programmer. Who is to say such systems would not find creative programming solutions that would not have occurred to a human?
“…Western governments must develop the in-house expertise capable of regulating this…”
There is no chance such regulation is possible. You are asking for some way to control a starburst, once the chain reaction has already started.
Totally. I am certain there were committees and groups discussing such safeguards, but if it wasn’t possible to align anything prior to release then trying in retrospect will be utterly futile.
And of course it would require international agreement, which won’t ever happen either.
God help us. We can’t even regulate water companies.
Freezing the furthering of it for six months is what he called for – that is not stopping, it is pausing to get some mental picture of what is going on, as Boris would say, to flatten the sombrero, so that another team can start thinking of the bad things and their antidotes.
It is saying to hold on just a second before fully opening Pandora’s box, to think about the things which should be thought of… it may not work, but it may mitigate some of the coming horrors if we just try a bit…
The people calling for the six month halt sound very well qualified and sensible to me, I do not see why they would call for it if they didn’t think it was necessary. I feel like we should probably listen to them, it is better to take six months to review it than to steam ahead into the unknown at this point perhaps.
Anyone who thought that AI step-change would amble in at a leisurely pace over a few decades, was sticking their head in the sand. It’s here and now everything changes. And at the crudest level, winners and losers are already determined: if you have the mindset that can embrace the state of jarring and relentless change, you can not just survive, but thrive. If not, you will likely struggle, in the sense of living in a world which makes less and less sense.
As of now, 2023, AI is neither artificial nor intelligent. At best it is an automated echo chamber that reflects back to you the patterns in the data fed into it. It can be controlled, or more accurately directed, by selecting the data that is fed into it. What it cannot do is make any objective analysis of the truth in the answers it gives. With computers it remains the case – nonsense in, nonsense out. The best we can do is be sceptical of all sources of information and keep questioning the assumptions used.
In the future it will become intelligent when it can find new patterns and identify the patterns and associations that it uses. But the truth in its answers will still only be true in the context of the data it has learnt. Theoretically it could widen its context to embrace all information but it will not be able to prove it has done so.
The mistake here is to conceptualise intelligence and/or consciousness entirely based on our human experience. AI does not need to achieve human consciousness to cause harm to us.
“…As a matter of urgency, Downing Street needs to put the most capable ministers and officials in charge of this challenge…”
And right there, is the real challenge. Do we have any “capable ministers” whatsoever? Do we have any capable, or even competent, politicians at all?
It’s not a party political point. They’re all as bad as each other. More or less our entire parliament breezily went along with the COVID response, with HS2, with Net Zero. These are all ludicrous, nation-bankrupting idiocies – as is painfully obvious to most of the population. And where was the scrutiny? Where was even the basic consideration?
Our political body, parliament, the Civil service, and all the departments, are completely incompetent. We truly are “led by donkeys”.
“And yet it’s not just today’s technology we’ve got to worry about but tomorrow’s, too. If the equivalents to GPT-5 through to GPT-infinity are developed in China instead of the West, then we haven’t got a hope of understanding the future iterations of AI, let alone managing the risks. We therefore have no choice but to carry on riding the tiger; and though that’s not a comfortable position to be in, it’s better than getting off the beast and asking it not to eat you.”
In other words, it’s vital that we pursue gain-of-function studies to understand and counter potential future threats. What could possibly go wrong?
The narrative of inevitability is one of the greatest lies that humans tell themselves. What people mean, when they say that something cannot be stopped, is that they don’t want to pay the price of stopping it. Yet in this case the price of letting it roll on is one we simply cannot afford.
AI is to homo sapiens what homo sapiens was to the Neanderthals. Nemesis.
Riding the tiger puts one in mind of the scene in Stanley Kubrick’s movie “Dr. Strangelove” where Major Kong straps himself to an atomic bomb to ride it into the heart of the Soviet Union. AI is our generation’s atomic menace – the genie is out of the bottle and cannot be put back in, only managed and outwitted. Time will tell if humanity is up to the job…
Gove would be the man to be in charge. The best we have.
Yuval Noah Harari debates this in his book Homo Deus: A Brief History of Tomorrow, which is well worth a read. He speculates that a large section of society will ultimately be rendered as useless persons by the impact of AI.
We already have a large section of society as useless persons, but we make up jobs and purposes like DEI Coordinator, or Activist, or Professor of Post-Colonial Studies at Cambridge. Or we send them to University to do a pointless degree that won’t contribute anything to society, or help them become valuable and worthwhile.
Which is to say, I agree 100% with you, but it’s same as it ever was.
But who’s decided that they are “useless”? People used to develop both their engineering skills and their humanities throughout whole courses of civilisations, pretty much in parallel. The Ancient Greeks had their share of technological development, but they had their stories, too – stories that still inform us of their dilemmas and feelings, and that are universally shared in the human condition to this day. Societies need humanists precisely for this: to keep up the humanity in themselves. To quote Crichton, our engineers were so excited with the possibilities of progress, with what they could do, that they didn’t consider whether they should.

I’m raising a little human being and I can see that humans need other humans, a bit of food, a bit of warmth and fresh air, and that’s pretty much that. Technology makes some of it more readily available, but what was here for the last century, for example, was actually enough. In the 19th century mankind went through a similarly rapid period of technical development, and it brought with it the decimation of non-white native civilisations and of many other living species – whales, tigers, etc. – and the slaughter of humans in the two major wars of the 20th century.

I’m not saying that technology is all doom, BUT I think being human means so much more than being an operator of a computer that independently turns into the most important entity in society, inventing tasks that no longer serve humans. There is a question of morality as well. We can put justice into the hands of AI, but is that not morally wrong? And what about medicine? Education? Should we replace teachers with computers? …Parents? Humans need humans, but what we did, what our engineers did, was to believe that technology can and should replace them. And yet we still give robots a human voice. And we want them as “human” as possible.
I thought it was interesting to note he signed the letter – 5th from top. Maybe the WEF is not ready for the 4th industrial revolution to kick off yet. Possibly waiting for the next “emergency” to be instigated and the CBDC to be in place.
Or maybe you have misunderstood him and his intentions? Everything is not a conspiracy.
There is a central problem with AI known as the ‘control problem’ or ‘alignment problem’. We may not be able to solve this problem. We appear to be in a Gödelesque trap where some misaligned behaviour always eludes rule sets. Yet we continue to develop AI at pace and hope we can solve this problem along the way. If we don’t solve it and AI reaches the fabled singularity, now expected to occur in the 2030s or early 2040s at the latest, we will be annihilated.
I don’t think putting the UK’s best ministers in charge is going to help somehow.
Well, Stuart Russell addressed this problem at length in his fourth and final Reith Lecture in 2021:
and came up with a series of solutions based on the idea that AI machines will be built that do not have set objectives – they will have to ask us (humans) which “preference” they should choose during a task and that they would need to ask this question repeatedly as the task proceeds. He gives some examples of how this might work in practice.
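For anyone who wants the flavour of it, here is a toy sketch of that “ask the human” idea in Python – my own illustration, not Russell’s actual formulation, and all the names and structure are invented. The point is simply that the agent has no fixed objective of its own: whenever a step offers more than one candidate action, it defers the choice to a human callback, and keeps doing so throughout the task.

```python
# Toy sketch of a "preference-querying" agent (hypothetical design):
# instead of maximising a fixed objective, it pauses at every step with
# more than one candidate action and asks the human which to take,
# so control over the task stays with the person throughout.

def preference_agent(options_per_step, ask_human):
    """Carry out a multi-step task, deferring every real choice to a human.

    options_per_step: list of lists of candidate actions, one list per step.
    ask_human: callback taking the list of options, returning the chosen one.
    """
    plan = []
    for options in options_per_step:
        if len(options) == 1:
            plan.append(options[0])          # no real choice: just proceed
        else:
            plan.append(ask_human(options))  # uncertain: ask the human
    return plan

# Usage: a scripted "human" stands in for real interactive input.
steps = [["boil kettle"], ["green tea", "coffee"], ["add milk", "no milk"]]
scripted_prefs = iter(["coffee", "no milk"])
plan = preference_agent(steps, lambda opts: next(scripted_prefs))
print(plan)  # ['boil kettle', 'coffee', 'no milk']
```

Obviously the hard part Russell is addressing – which this toy skips entirely – is how the machine should act when the human’s preferences are uncertain, contradictory or change over time.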
At the end of this lecture he also brought up a whole series of philosophical questions: around free will; whether or not humans are the autonomous possessors of their own preferences in all circumstances; if a machine does something now that will affect you tomorrow, who is it working for – the you of today or the you of tomorrow?; and how to write algorithms that don’t produce content selection (as they do now).
He was an optimist in terms of regulation: because ethicists and moral philosophers are involved in AI research now; because the world sort of got its act together after WWII to regulate nuclear energy, land mines, chemical warfare and CFCs; and because the EU right now is in discussions about banning AI that impersonates humans (in the EU Charter of Fundamental Rights there is a right to mental integrity – so all human-facing AI systems involving social media could be viewed as a risk to mental integrity).
4 lectures each about 45 mins + a Q&A. Highly recommended.
Brilliant, now that is the kind of thing this scenario needs. I’ll take a look at those, thanks.
Thanks. I’ve read Russell’s book on this and I’m not entirely convinced tbh.
Incidentally, I nearly did my PhD on AI, had a supervisor and funding but changed my mind at the last moment. Glad I did now. I wouldn’t want to have AI on my conscience
It can only be a matter of time before we have self-driving cars.
AI? Artificial insemination?? Is this not what produced Felon Muskoid in the first place?
The Prisoner’s Dilemma, writ large.
Sam Harris did a great TED talk on this.
Interesting comments in the thread following the interesting question in the article.
I would propose that we look at this question from the following point: we are systems that are part of ever larger systems (family, village, region, country, biosphere, etc.) https://isbscience.org/about/what-is-systems-biology/ Capra: The Systems View of Life
It seems that AI is a new system ‘we’ have added to this: it will interact with all the above systems.
I propose that our way to preserve ourselves is to increase the value we put on ‘human values’ in opposition to technological values. (I have borrowed this from Iain McGilchrist: The Matter with Things https://channelmcgilchrist.com/ ) Note that technological values lend themselves more to control, money-making and laziness…
By valuing what makes us human we should consider how we can protect ourselves from too much technology. A difficult question we need to find the answer to, or at least the dynamic of.
…. for what it is worth.
Note: to see the negative effects of too much technology on people, see where medicine is at today… a huge increase in chronic illness and a reduction in life expectancy in Western countries over the last 10+ years. High time we bring humanity back to medicine. Luckily some are trying (Integrative Medicine and One Health) but too many are still ignorant: please, each of you, go and ask/talk to your doctor about One Health and Integrative Medicine… it may make a small difference? See these two references: https://www.ipmcongress.com/ and https://journals.plos.org/plospathogens/article?id=10.1371/journal.ppat.1010537