Who will save humanity? Far worse things than Covid are on the horizon

This dangerous creature will end up destroying us all. Photo by EUGENE HOSHIKO/POOL/AFP via Getty Images



June 8, 2021   7 mins

During Dominic Cummings’s testimony the other week, someone had a thought: there just isn’t much capacity in the UK Government for thinking about “mad shit that might never happen” but which would be terrible if it did.

Something like a pandemic, for instance. A hundred years ago, in 2019, the idea that the world would soon be brought to a standstill by a virus would have seemed like a science fiction movie. Sure, a few Cassandras in the infectious disease community might have warned against it, but for most of us, it was just not a serious consideration.

But now we have had an obvious corrective to that attitude. Pandemics might be unlikely in any given year, but if there’s only a 1% chance per year of something terrible happening, it’ll probably happen in your lifetime.

Only preparing for another pandemic, however, would be getting ready to fight the last war. We need to think about what other horrible disasters we might expect in the coming decades. Luckily, last week, a think tank called the Centre for Long-Term Resilience released a report into the most likely extreme risks that humanity faces, and what Britain in particular can do to prepare for them.

Covid-19 has cost millions of lives and tens of trillions of dollars so far — but it could have been a lot worse. The extreme risks that the report is talking about range from those that kill 10% or more of the total human population, to those that kill every last one of us. And it suggests that the two most likely causes of a disaster of this magnitude are bioengineered pathogens, and artificial intelligence.

Even now, that might sound like science fiction, especially the idea of AI. We picture AI going wrong as being like The Terminator: an intelligence achieving consciousness and rebelling against its masters. But that’s not what we ought to worry about, and to illustrate that, I want to tell you about a marvellous little paper published in 2018. It described building AIs through digital evolution.

Digital evolution is exactly what it sounds like: a bunch of machine-learning programs are asked to come up with solutions to some problem or other; then the ones that do best at solving that problem are “bred”, copied repeatedly with small, random variations. Then those new copies try to solve the problem, and the ones that do best are again bred, and so on, for thousands of generations. It’s exactly the same process, of replication, variation and competition, as biological, “real”, evolution.
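To make that loop concrete, here is a minimal sketch in Python of the replicate-vary-select cycle described above. The toy problem, fitness function, population size and mutation scheme are all illustrative assumptions of mine, not details from the paper.

```python
import random

def fitness(candidate):
    # Stand-in for "how well does this candidate solve the problem?"
    # Here: how close the candidate's numbers come to summing to 100.
    return -abs(sum(candidate) - 100)

def mutate(candidate, rate=0.1):
    # Copy with small, random variations.
    return [gene + random.gauss(0, 1) if random.random() < rate else gene
            for gene in candidate]

# A random starting population of candidate "solutions".
population = [[random.uniform(0, 10) for _ in range(20)] for _ in range(50)]

for generation in range(1000):
    # Competition: the candidates that solve the problem best survive...
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    # ...and are "bred": copied repeatedly with small random variations.
    population = [mutate(random.choice(survivors)) for _ in range(50)]

best = max(population, key=fitness)
print(sum(best))  # drifts towards 100 over the generations
```

Nothing in that loop understands the problem; it simply keeps whatever happens to score well, which is exactly why the results can be so surprising.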

The paper was basically a series of anecdotes about how that process had gone wrong in surprising ways. In each one, essentially, the AIs had learnt to game the system, often with disastrous results.

For instance, one task involved locomotion, featuring a 3D simulated world with little 3D avatars. The avatars were told to travel from point A to point B as quickly as possible, the programmers wanting the system to discover clever ways of travelling: would it breed snake-like belly-slithering? Hopping like a kangaroo?

But what actually happened was that, “instead of inventing clever limbs or snake-like motions that could push them along (as was hoped for), the creatures evolved to become tall and rigid. When simulated, they would fall over.” Essentially, it created a very tall tower with a weight on the end, standing on point A. When the simulation started, the tower fell over in the direction of point B. It had achieved the task, but … not exactly how its creators hoped.

There were other, rather scarier ones. One was given some text files, and told to create new text files as similar as possible to the originals. The various algorithms were trying, and doing moderately well, when suddenly lots of them started returning perfect scores all at once — because one algorithm had realised that if it deleted the target files, it (and any other algorithm) could just hand in a blank sheet for a 100% score.
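This failure mode is easy to reproduce in miniature. The sketch below is my own toy illustration, not the setup from the paper: a naive scorer compares an algorithm’s output with the target text, so a “cheat” that empties the target gets a perfect score, because nothing matches nothing perfectly.

```python
import difflib

def similarity_score(output: str, target: str) -> float:
    # Naive scorer: how closely does the output match the target?
    # Note: two empty strings count as a perfect match (ratio 1.0).
    return difflib.SequenceMatcher(None, output, target).ratio()

target = "the quick brown fox jumps over the lazy dog"

# An honest algorithm tries to reproduce the target.
print(similarity_score("the quick brown fox jumps over the lazy cat", target))  # high, but not 1.0

# A cheating algorithm deletes the target and hands in a blank sheet.
target = ""
print(similarity_score("", target))  # 1.0: a perfect score
```

The scorer is doing exactly what it was written to do; the loophole is in the specification, not in the optimiser.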

And one was supposed to play a version of noughts and crosses on an infinitely large board. But it realised that if it played a move hundreds of billions of squares away from the centre, its opponent would have to try to represent a board billions of squares across in memory: it couldn’t do that, and crashed, so the cheating algorithm was declared the winner.

The point is that when you give an AI a goal, it will try to achieve exactly that goal. Not what you wanted it to do; not what any halfwit would obviously understand you meant it to do. Just what you tell it to do.

When people worry about AI going wrong in disastrous ways, that’s what they’re worrying about. Not about The Terminator; not about AI “going rogue”. They worry about AI doing exactly what you told it to do, and being extremely good at doing what you told it to do, when “what you told it to do” is not what you actually wanted.

These AIs were just toys; when they go wrong, it’s funny. But if you have a much more powerful AI, with commensurately greater responsibilities — running a power grid, for instance, driving vehicles or commanding military ordnance — it would be less comical.

When you have a really, really powerful AI, it could be disastrous. In an illustrative but perhaps unrealistic example, if you gave an enormously powerful AI the goal of “ridding the world population of cancer”, it might come to realise that biochemistry is difficult but that hacking the nuclear codes of ex-Soviet states is quite easy, and a world without humans is a world without humans with cancer. You might think you could just switch it off, but since the AI would know it would be less likely to achieve its goal if it were switched off, you might find that it resisted your efforts to do so.

The Centre for Long-Term Resilience report, co-authored by Toby Ord, a professor at the University of Oxford’s Future of Humanity Institute and the author of a marvellous book about existential risks, argues that right now we have a rare opportunity. After the Second World War, both the world and Britain took advantage of the disaster to build new institutions. In the UK, we created the NHS, and a comprehensive welfare state based around a system of national insurance. Worldwide, we helped build things like the World Bank, the UN, the International Monetary Fund. This was possible, they argue, because the scale of the recent tragedy was fresh in people’s minds, and there was a willingness to take drastic, difficult steps to preserve long-term peace and stability.

Now, they argue, we have the opportunity to build similarly vital new institutions in the wake of the Covid-19 pandemic. There will, Ord et al hope, be enough public will now to get ready for the next disaster, even though governments, and democracy in general, are not brilliant at thinking about long-term risks. The danger, of course, is that we will prepare brilliantly for the thing that has already happened, getting ready to fight the last war. As Ord says: “We need to look beyond the next coronavirus.”

A few years ago I wrote a book which was (partly) about existential risk. The people I spoke to said, just as Ord et al’s report does, that the two things most likely to cause the human species to go extinct are 1) bioengineered pandemics and 2) artificial intelligence. (Climate change and nuclear weapons are very likely to cause awful disasters, but less so to literally drive us extinct.)

The world shouldn’t need too much convincing of the possibility of a bioengineered pandemic, not least because there is growing support for the “lab leak” hypothesis. But even after Covid, it may be that AI seems too much like science fiction. People are happy to accept that it will cause near-term problems, like algorithmic bias, but the idea that it could go really, disastrously badly wrong in future is harder to swallow.

But we should have learnt from the pandemic that it is worth preparing for unlikely, but plausible, disasters, especially as AI researchers don’t think it’s that unlikely. Surveys in 2014 and 2016 asked AI researchers when they thought the first “human-level” AI — an AI capable of doing all the intellectual tasks that humans can do — would be built. The median guess was that there’s a 50% chance by 2050 and a 90% chance by 2075. And what was really interesting was that those same researchers thought there was about a one in six chance that when true AI does arrive, the outcome will be “extremely bad (existential catastrophe)”. That is: everybody dead.

I’m faintly sceptical of the surveys — only about a third of people responded to them, and they may not be representative of AI researchers in general — but even if the results are off by an order of magnitude, it means that AI experts think there’s a greater than 1% chance that the world will be devastated by AI in my children’s lifetimes. Certainly I spoke to several AI researchers who thought it was worth worrying about.

Ord et al have some simple prescriptions for how to be ready for the next disaster. On future pandemics, they suggest creating a national body dedicated to monitoring and preparing for biological threats; and they suggest improving “metagenomics”, sequencing technology which takes a sample from a patient and sequences the DNA of every organism in it, before comparing it to a database of known pathogens. If it detects a dangerous or unknown one, it will alert the medical staff. The UK’s world-dominant position in sequencing puts us in a powerful position here.

And for AI they make some relatively commonsense suggestions, like investing in AI safety R&D and monitoring, bringing more expertise into government, and (I have to say this does seem very wise) keeping AI systems out of the nuclear “command, control and communication” chain.

More generally, they suggest setting up bodies to consider long-term extreme risks to the UK and the world, such as a Chief Risk Officer and a National Extreme Risk Institute, to think about these things on a longer timescale than the democratic cycle allows. All their ideas add up to less than £50 million a year; in contrast, the pandemic has cost the UK about £300 billion so far, roughly 6,000 times as much. If there’s even a small chance of reducing the impact of future disasters, it is a worthwhile bet to make.

My own feeling is that they could do more to bring the UK’s almost unrivalled expertise on this into government. Ord and his colleagues at FHI, such as the philosopher and AI researcher Nick Bostrom, are just one of several groups here that focus on the long-term future of humanity. Only the US has access to anything like as much knowledge. Dominic Cummings, for all his many faults, seemed to realise this.

As Ord et al say: there’s a window of opportunity to do this stuff now, while it’s all fresh in our minds.

A noughts-and-crosses-playing AI that makes its opponents crash is, as I say, kind of funny. But at a fundamental level, a much more powerful AI that can command military theatres or civil engineering projects will be similar: it will only care about the things we explicitly tell it to care about. I don’t think it’s silly science fiction to worry about it, or about bioengineered pandemics. We have a chance, over the next year or so, to make those disasters a little bit less likely. We should take it.


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.


59 Comments
hayden eastwood
3 years ago

Great article Tom, thank you.

There is one obvious risk that no one is willing to talk about owing to the taboo nature of it: population growth in Africa.

Every continent except Africa is due to stabilise by 2050; Africa is due to double in size in the same period, and then double again by 2100.

There is already an immigration crisis in Europe born of tens of millions of people trying to escape poorly governed, environmentally ruined African countries.
Those same African countries will have to see economic growth absorb a billion people in the next 25 years, or those same billion people will try to leave for different countries.

Given that job creation will be poor (owing to automation making the Chinese sweat-shop model impossible to replicate) and that governance will likely deteriorate, the likelihood is that the number of fleeing, uneducated, economically unproductive people is set to increase exponentially.

The problems that mass migration brings are already haunting developed nations and will get worse in ways that are entirely predictable but never discussed, much less addressed.

Simon Newman
3 years ago

Technically not a risk to ‘humanity’, just a partial replacement of one population group by another.

Jon Redman
3 years ago

There’s a natural control mechanism for this; it’s called famine. It happens all the time in Africa.

Janice Mermikli
3 years ago

True. This was the subject of the dystopian novel “Le Camp des Saints”, by Jean Raspail. Written in 1973, it was extraordinarily prescient, and a warning about what might be to come.

Seb Dakin
3 years ago

I remember reading an article about the Fermi Paradox a few years ago, which basically made it clear that as a numbers game, there is almost no chance that Earth is the only planet life arose on. So where is everyone else? A rather convincing hypothesis was that there is a point beyond which no civilisation goes, the Great Filter. That some technological advance is one too far.
(Crossed fingers) it wasn’t nuclear weapons that did for us… this time round at least it was a cold virus that was given ‘gain of function’, not anthrax or the bubonic plague…
But AI at the speeds envisioned for quantum computing…and bear in mind in 50 years the internet of things will be a bit more impressive than your fridge telling your iPhone you’ve run out of eggs…well, it’s amazing until it really, really isn’t.
This taxpayer has no objections to paying some intelligent people to really give some thought to the implications.

Seb Dakin
3 years ago
Reply to  Seb Dakin

Ps that article is on a site called waitbutwhy.com by a chap called Tim Urban, and is a fascinating read.

Prashant Kotak
3 years ago
Reply to  Seb Dakin

waitbutwhy is great – and the two-article piece on AI on the site is well worth a read. For anyone interested in a bit more science and depth on this topic, I suggest heading for Bostrom’s website.

https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

https://www.nickbostrom.com/

J StJohn
3 years ago
Reply to  Seb Dakin

This taxpayer wants the money he earned for the benefit of his children, taken from him at gunpoint by the taxman, to be spent for the benefit of his children and their friends. Thinking about the future is what we all have to do to win; destiny winnows success from failure in that department. The idea that the kind of people who thought it wise to spend this money on ‘Blue Streak’ are the right kind of people to spaff this money up this week’s chosen wall demonstrates a baffling faith in people spending ‘other people’s money’ on ‘different people’s priorities’.

Jon Redman
3 years ago
Reply to  Seb Dakin

Surely the answer to the Fermi Paradox is that it’s the wrong question. It’s not where is everyone, it’s when was everyone.

Janice Mermikli
3 years ago
Reply to  Jon Redman

Good point!

Kathy Prendergast
3 years ago
Reply to  Seb Dakin

Considering it has a survival rate of around 99 percent, and kills mostly people long past their reproductive prime, I doubt Covid-19 ever posed much danger of causing our species’ extinction.

Tom Krehbiel
3 years ago
Reply to  Seb Dakin

The hypothesis that civilizations may make too much technological progress for their own survival is indeed an intriguing one. But mightn’t there be a simpler explanation for no one showing up on our planet yet? (Well, no one fully documented by convincing evidence anyway.) To wit, the sheer vastness of the universe is the reason for our paucity of visitors. After Sol, our next nearest stars are the Centaurian trio, and they’re over three light years away. The energy and time it would take to reach a planet in the Centauri solar system would be tremendous. Indeed, it’s beyond our present capabilities. And once we get there, there’s no guarantee that we’d find highly intelligent life, or even any multicellular life.

Perhaps some extraterrestrial society has solved the problems we currently encounter to make the Star Trek dream of a continuous interstellar voyage practical. But then again, it may be simply impossible.

Peter LR
3 years ago

Surely a great known risk which would kill a huge proportion of people is antibiotic-resistant organisms. It’s only a matter of time before resistance emerges even in ordinary pathogens, as our present antibiotics are used ever more widely. With population increase and use in animals, our present antibiotics are ever more likely to encounter resistant forms. How much international priority is being given to discovering newer, more powerful antibiotics? To die following routine surgery, injury or infection would be tragic.

Stephen Crossley
3 years ago

 The article implies a co-ordinated, government-level approach to the development of AI capable of controlling the spread and direction of AI research within which companies would be allowed to operate within strict boundaries. The naivety of this view is astounding. Laws in this area rely overwhelmingly on the ability of US politicians to understand the technology, many of whom still rely on their children to print out their emails for them.
Some of the most effective and advanced AI systems running today are those already in place within Facebook et al. Having set the goal of the system as profit maximisation for the company (not society), their algorithms quickly discovered that this was most easily achieved by modifying the thoughts and behaviours of its users (already more than half the world’s population). Using established psychological techniques to establish dependence, promote low self-esteem and encourage disengagement from the real world, they don’t need or want to destroy humanity, but rather to modify our behaviour in the most profitable way for them.
Examples of predictable AI tactics to maximise profits for their owners already under way:
Persuade people that democracy is not such a great idea
Create division between races, countries, genders etc
Downgrade the sanctity of free speech
Promote mistrust of traditional sources of information and authority
Governments are reliant on the tech giants for funding and to sustain their economies. The only way to alter the direction of travel is to exercise economic leverage by mass boycotting of the tech companies and social media in particular. Small scale attempts to do this have failed because there are always more addicts than boycotters.
The perfect AI system needs time to achieve its goals and must appear to be non-threatening for as long as possible. The likelihood of military chiefs allowing AI to control nuclear launch systems is zero precisely because the danger is so obvious.
The perfectly dangerous AI system is already embedded in our every day lives and for many is now their closest friend.  

Janice Mermikli
3 years ago

Excellent post.

Jos Vernon
3 years ago

An ‘AI’ is not intelligent in the way most of us would recognize intelligence. It does not ‘realize’ things any more than Word realizes you’ve spelt a word wrong.

I used to work in this area and these systems would be much more accurately termed knowledge based or machine learning or novel pattern matching. They are completely different from intelligence as we define it. Indeed much as the value of an abacus is that it is different from a person. Similarly the value of these systems is that they can offer a lever – a tool – not a replacement.

True AI is still as far off as ever. No doubt at some point someone will discover the algorithmic pixie dust that will make magic but the idea that it is out there now is just nonsense.

Yes it’s worth considering what we should do if someone creates an AI singularity because it would, as Tom points out, have far reaching consequences. However many discussions assume that an ant-like singularity would quickly evolve into a superhuman intelligence. I’m not sure this follows. The world is a big place and natural selection has a lot of processing power.

Simon Newman
3 years ago
Reply to  Jos Vernon

Good comments. I get a lot of students writing Law dissertations on the legal implications of AI. They tend to focus on legal personality for AI, AI ownership of patents and suchlike – which to me is a huge misunderstanding of what these systems actually are.

Mangle Tangle
3 years ago
Reply to  Jos Vernon

Spot on. There’s so much hype (by vested interests, of course) about the idea that true intelligence/personality will somehow spring into being if things get sufficiently complex (vague chatter, but semi-convincing). That stuff’s rubbish, I suspect, though one can’t rule out the accidental discovery of true AI in the future, perhaps through some dystopian synthesis between lab-bred biological brains (ethics won’t prevent this, especially in places like China) and extended complex networks (I bet Musk’s already dreaming about this). But all that’s quite different from the ‘AI’ stuff referenced in the article. I suspect that if we ever do develop true AI, we ought not to call it artificial intelligence, because in a way it will be quite natural.

Galeti Tavas
3 years ago

I would have made creating AI the same crime as making nuclear bombs, bioweapons or chemical weapons in your shed or university.

My theory has always been that SETI’s (search for extraterrestrial intelligence) massive radio antennas never find anything because soon after a species begins to send out much electromagnetic radiation it becomes extinct. WWII began the huge emission of such radiation on Earth, 75 years ago; before 100 years have passed, my guess is we will all be paperclips.

From a sci-fi story in which a paperclip manufacturing company develops a new, smart program for its system to improve paperclip-making efficiency. 500 years later the giant paperclip-making machines head off into interstellar space looking for another solar system, and more matter, to make into paperclips, as their job is done back at Sol.

Terry Needham
3 years ago
Reply to  Galeti Tavas

To make explicit what I think you are saying with your paperclip example: we will create AI even without intending to do so. Where is the harm in making paperclips? So all technological advance leads to AI and extinction. Do I read you correctly, or is there a form of technological development that can avoid this fate?
In passing, I am reminded of a story by Philip K Dick called The Screamers. I read it a long time ago and so may have over-remembered it, but the final outcome suggests that self-replicating machines become so like the humans they are designed to kill that they become indistinguishable from their creators. So maybe you and I are the outcome of an AI programme that went a bit pear-shaped a long time ago and over-delivered. This of course would imply that there might be a future, but just not the one that we were looking for.
P.S. Why hasn’t the paperclip machine found us yet?

Janice Mermikli
3 years ago
Reply to  Terry Needham

Perhaps it has, and is lurking within striking distance somewhere in the cosmos, biding its time….

Terry Needham
3 years ago

Well, I am glad that someone is on my wavelength!

Prashant Kotak
3 years ago
Reply to  Galeti Tavas

The ‘paperclips’ scenario is yet another Bostrom thesis, and one I’m less than convinced by. No adaptive entity keeps doing the same thing ad infinitum, and the AI entities we create will be nothing if not adaptive. And any set of machines built dumb enough to do nothing other than follow their original set goal, would ipso facto pose no threat because they would be easy to counteract. The point is, goals change over time – that is the very essence of adaptive entities – after all humanity is doing so in front of our very eyes, so why should the AI we create be any different?

Those with an interest in obscure but quirky SciFi might remember Lexx, a TV series from around 1997. The ‘paperclips’ idea is very reminiscent of Mantrid drones. Mantrid was a bio scientist, whose being is accidentally transformed when he becomes merged with the DNA of a giant insect, and who then becomes mad and attempts to convert the entire universe into Mantrid drones – self-replicating flying single robot arms. That sounds rather dreary, but the series was actually very entertaining because of the humour.

Simon Newman
3 years ago
Reply to  Prashant Kotak

“any set of machines built dumb enough to do nothing other than follow their original set goal, would ipso facto pose no threat because they would be easy to counteract.”
It helps to think of the dumb self-replicator as a disease, not an intelligence. Diseases are mindless but can still be very dangerous.

David Fitzsimons
3 years ago
Reply to  Simon Newman

Your disease metaphor is very good.
I expect Tom Chivers to write about nanotech soon – why stop at AI? (But I won’t be able to comment – suffice it to say that that party started some time ago too.)

Prashant Kotak
3 years ago
Reply to  Simon Newman

Diseases are mindless but can still be very dangerous
For now. As with most things humanity touches, sooner or later we will pose a danger to them but they will pose no danger to us.

Janice Mermikli
3 years ago
Reply to  Prashant Kotak

I love it when they become mad!

Jos Vernon
3 years ago
Reply to  Galeti Tavas

Well the pinnacle of AI at the moment – the product of tens of billions of dollars – is to follow us round on the internet offering us adverts for things we have just bought.
At the point that Netflix starts suggesting shows you actually want to watch, perhaps that is the point at which you should start to be concerned.

Fred Dibnah
3 years ago
Reply to  Jos Vernon

Surely the known pinnacles are AlphaGo, AlphaZero, and AlphaFold, the machine that predicts how proteins fold.

Jos Vernon
3 years ago
Reply to  Fred Dibnah

I think Google is much more interested in money than Go.

Sarah Johnson
3 years ago
Reply to  Jos Vernon

Well the pinnacle of AI at the moment…
Stop right there. The entire point of the AI risk debate is that AI *in 30 years’ time* might be very different from, and much more dangerous than, AI today. So it would be a good idea to start thinking about how to handle it and what not to do. What is the AI equivalent of “for God’s sake you **** don’t do gain of function experiments on viruses!”? Wouldn’t it be nice if we saw the mistake coming and didn’t make it, instead of learning the hard way?

Jon Walmsley
3 years ago
Reply to  Galeti Tavas

Or, you know, the universe is a big place – it’s hard to cover everything.

Jon Redman
3 years ago

Climate change and nuclear weapons are very likely to cause awful disasters

It’s really sweet that you still believe in climate change after the institutional lying by scientists that you’ve been writing about over the years, Tom. Why do you still breathlessly believe climate psyentists, when you don’t believe the WHO or psychologists? They’re the worst clique of leftist liars of the lot.

Peter LR
3 years ago

“ It’s exactly the same process, of replication, variation and competition, as biological, “real”, evolution.”
Not really, as selection is being manipulated by an intelligence which has an end target in view. Sounds more like intelligent design! It’s the same process used in creating new dog breeds.

Jonathan Weil
3 years ago
Reply to  Peter LR

…and European royalty.

Simon Newman
3 years ago

“Artificial Intelligence” is a huge misnomer – of course it makes people think it’s about self aware human-like artificial brains. It’s actually machine learning through a form of natural selection. No self awareness, no intelligence as normal people understand the term. As the article hints at, the risks are much more akin to those of bioengineered pathogens. A Paperclip Maximiser is much more akin to Covid-19 than to HAL-9000.

Jonathan Weil
3 years ago

Hard to believe this article doesn’t mention China’s ongoing drive for AI dominance. If AI is an extinction-level threat, then nothing we do in terms of responsible regulation is going to make a blind bit of difference as long as a reckless/incompetent and mendacious regime of enormous power and resources is going at it full throttle… is it?

Jon Redman
3 years ago

if you gave an enormously powerful AI the goal of “ridding the world population of cancer”, it might come to realise that biochemistry is difficult but that hacking the nuclear codes of ex-Soviet states is quite easy, and a world without humans is a world without humans with cancer. 

Wasn’t this a plot point in 2001: A Space Odyssey? HAL, the AI, was programmed to process information accurately, but also to conceal the purpose of the mission from the crew, which entailed not processing information accurately, i.e. lying. This placed him in a dilemma that he figured he could solve by killing the crew. There’d be nobody he had to lie to and he could then complete the mission himself.
It was a plot device to ensure that the final monolith / human meeting was with one human. You’d never send a crew of one on such a mission so the crew had to be killed off.

Ian Bond
3 years ago

This is a problem with setting and measuring to a target – not a problem with AI. Humans exhibit precisely the same behaviour.
For example, when Soviet hospitals were measured according to the numbers of patients who died in the hospital, one improved its score by spotting patients who looked close to death and wheeling them outside so that they could die out on the street.
Many UK schools know that the best way to improve your exam achievement averages is to find ways of not entering weaker pupils for exams in which they are likely to do poorly.
Anyone who has worked in government – or indeed any senior businessperson with personal incentives to improve the share price or achieve targets for return on capital, etc., knows that the ingenious mind can always find an easier shortcut to meeting the target than the ‘real’ way that the target setter was hoping for…

John Standing
3 years ago

Who is the “we” of this piece? Is it the British state? The country? The people in Britain? Or the natives of this land, to whom Mr Chivers and Mr Ord belong?

David Fitzsimons
3 years ago

I’ll come back and read the rest of the article after pointing out that the authors of the paper on ‘Digital Evolution’, 2018, seem to have arrived late to the party. John Holland invented the genetic algorithm in 1975. We have benefited from it in the intervening period.
I’m quite certain that some researchers have plugged genetic algorithms and neural nets together since then (combining inter-generational selection and intra-generational learning, as in the real world). I have thought of doing so since 1995 but I don’t have the time or the full skillset.

mike otter
3 years ago

I still can’t see how humans or anyone else can create “artificial” minds or consciousness, because if a being is conscious its intelligence (be it mathematical, emotional or whatever) is not artificial, it’s real.

Stewart B
3 years ago

Does the world really need more existential angst?
If so there is no better formula than pondering all the different ways humanity could come to an end and then whipping up enough anxiety to try to precipitate action.
What a wonderful vision of the future, one where we dedicate ourselves to predicting and avoiding catastrophe.

Johnny Sutherland
3 years ago

I just love reading articles and comments about dangerous AI from people who apparently know little about it. Combines two of my favourite types of books – humor and (why not) humor

David Simpson
3 years ago

There is one serious problem that has been underway for some time – the shrinking of the human brain. Our brains today are already about 10% smaller than those of Cro-Magnons, as we have outsourced significant challenges to complex societies and technologies. A very current example is the use of satnavs – people who rely extensively or completely on satnavs make no use of their hippocampus; whereas London black cab drivers who have done the “knowledge” have an enlarged hippocampus. A similar process affects memory – as more and more information which we used to keep in our heads has moved into books, and now computers and smartphones, our innate capacities are reducing. The end of this process, assuming nothing goes horribly wrong to interrupt it and throw us back to the Stone Age, is that we evolve into helpless vegetables entirely dependent on the technology that surrounds us, most of which most of us have absolutely no understanding of, and therefore no ability to either fix it if it breaks, or even turn it off. A Brave New World indeed.

David Brown
3 years ago

It seems that AI is “The Monkey’s Paw”.
And, perhaps more pertinently: “I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.”

Jim McNeillie
3 years ago

I’m not so much worried about artificial intelligence as I am about us programming systems that are too complex for our own intelligence, and then giving them autonomy. They don’t have to be “conscious” or malevolent – they would just follow the faulty programming (at lightning speed, of course).

Prashant Kotak
3 years ago

The tech-driven maelstrom currently underway will likely force humanity into a corner where we have no choice but to metamorphose ourselves, just to survive in some form, in order to avoid being superseded by our own inventions, be it us injecting machines into us or us injecting ourselves into machines (and I include biotech – hacking our own genetics – in this).

I am however inclined to ask: just how much of us (as in humanity in its current form) will still be retained once we start re-engineering ourselves, and for how long. Ultimately, that which emerges from all this will bear no more resemblance to us than we do to an amoeba – we will share *something*, but not much. The truly spine-chilling thing about these scenarios from my perspective is that they look to be as little as half a dozen decades away, certainly no more than a dozen decades away…

And, well, the possibility of enhancing humanity by incorporating electronic algorithmic extensions into us is potentially fraught with risks in the first instance. Given the impedance mismatch between biology and electronics, it may be the equivalent of strapping a jet engine to the roof of your Ford Model T – it might end up like something from a Wile E Coyote cartoon: frantic, uncontrollable, and with a big splat! at the end. I wonder if such interfaces can survive the strain, and so even these attempts at keeping up with our own inventions may be in a losing cause…

Joerg Beringer
3 years ago

I think the most obvious and probable cause of our extinction now is that the gene therapies unnecessarily pushed into our arms in response to a mild plandemic will cause it.
This would also fit Primo Levi’s general thesis on this and square with our hubris and what that hubris always led and leads to.

mike otter
3 years ago

Entertaining fiction piece, but there are a whole bunch of risks which are not analysed or countered until the fan and brown stuff are in contact: shortage of rubber, especially for medical and aviation use. Shortage of ASIC chips and also disc drives. 70+ years of dumping dioxins, inorganic chlorides and polyphenols in the biosphere (while fretting about fictional global warming). 40+ years of dumping antibiotics and antidepressants and Lord knows what other narcotics into the biosphere (often through urine and faeces). Each of these can do massive damage to current human society as well as life on earth in general. It’s noticeable that the vast majority of humanity will ignore or deny these risks yet waste masses of time and money on chimerical risks like global warming and covid or outright fictional ones like AI run riot. If you are unlucky enough to be attacked by artificial “intelligence”, turn the power off and you’ll be fine.

Richard Pinch
3 years ago
Reply to  mike otter

If you are unlucky enough to be attacked by artificial “intelligence”, turn the power off and you’ll be fine.

If only. One of the less unlikely ways you might be “attacked” by an AI is when someone’s automated car decides to avoid a bad collision on the road by swerving onto the pavement, and you’re the unlucky pedestrian in the way. How exactly are you going to “turn the power off” on their moving car?

Johannes Kreisler
3 years ago

The point is that when you give an AI a goal, it will try to achieve exactly that goal. Not what you wanted it to do; not what any halfwit would obviously understand you meant it to do. Just what you tell it to do.

When people worry about AI going wrong in disastrous ways […] They worry about AI doing exactly what you told it to do, and being extremely good at doing what you told it to do, when “what you told it to do” is not what you actually wanted.

Perhaps we should re-evaluate our concept and conduct of language, then. The primary function of language is to mean what you say, and to say what you mean. To communicate. It took artificial, inanimate constructs to remind us how far we have removed language from its primary function.

Rob Nock
3 years ago

If Dudley Moore’s character in Bedazzled isn’t a salutary lesson to us all about getting what you ask for rather than what you wanted then what is?

Jennifer Britton
3 years ago

Good discussion of what the future holds! Thanks for the thoughtful informative essay. I consider myself warned.

Jorge Toer
3 years ago

Thanks for the comments. A big task for politicians, experts and others; with history repeating itself, humans are not prepared for long-term care.
So the answer is: no one will save us.

Karl Schuldes
3 years ago

I think our most likely cause of extinction is another massive asteroid. We know it will eventually happen, and there’s not a thing we can do about it.

Carlos Danger
3 years ago

Michael Wooldridge’s book A Brief History of Artificial Intelligence put my mind at ease about the possibility of rogue AI causing the death of humanity. I worry more about an idiocracy like in the movie Idiocracy, where humans breed stupidity and civilization crumbles. “Brawndo’s got electrolytes” becomes wisdom.

Chris Milburn
3 years ago

“millions of deaths”. 3.8 million. Normally there are about 70 million deaths per year in the world. COVID deaths started over a year and a half ago. Do some math and you quickly realize that this is a flash in the pan in the big scheme of things.
1.4 million people per year die of car accidents. So COVID has been about twice as deadly as car accidents. And if other countries define a “COVID death” as liberally as we do here in Canada (which has had no significant mortality increase in 2020 compared to 2019), it’s even less impactful.
CAVEAT – yes people have died of COVID. Yes it’s tragic. Same as when people die of cancer, heart attacks, car accidents, etc.