
The existential risks of AI

Credit: Ou Dongqu/Xinhua News Agency/PA Images



May 29, 2018

There’s been a long-running discussion among sci-fi writers and artificial intelligence researchers since the mid-20th century about what will happen when artificial intelligence (AI) gets really, really smart. Many expect a sudden take-off moment when computers become superhumanly intelligent, creating, as Peter Franklin put it last week, a “digital supreme being”. This scenario is referred to as the “singularity” – when the Moore’s Law curve goes near-vertical.1
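To get a feel for why that curve looks “near-vertical”, here is a minimal back-of-envelope sketch in Python; the two-year doubling period is an illustrative assumption (quoted versions of Moore’s Law range from 18 months to two years), not a figure from the article:

```python
# Back-of-envelope sketch of Moore's Law compounding.
# Assumption: processing power doubles every `doubling_years` years
# (quoted versions of the law use anywhere from 18 months to 2 years).

def moores_law_factor(years: float, doubling_years: float = 2.0) -> float:
    """Multiplicative growth in processing power after `years` years."""
    return 2 ** (years / doubling_years)

for years in (10, 20, 40, 60):
    print(f"After {years} years: roughly {moores_law_factor(years):,.0f}x")

# Output:
# After 10 years: roughly 32x
# After 20 years: roughly 1,024x
# After 40 years: roughly 1,048,576x
# After 60 years: roughly 1,073,741,824x
```

Because each fixed interval multiplies rather than adds, the curve looks flat for decades and then appears to shoot upward – which is the intuition behind singularity talk.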

Back in 2015, some of the top names in science and technology – including Stephen Hawking, Elon Musk, and Bill Gates – signed an Open Letter warning that the risks of rogue AI go well beyond Amazon eavesdropping in your living room (and recording your conversation, and sending it to someone else – which happened to one couple in Portland last week). As Musk, whose business empire spans Tesla electric cars and SpaceX rockets, put it: “With artificial intelligence we’re summoning the demon.”

AI has already raised widespread fears about machines taking our jobs – although it’s a threat that governments have basically decided to ignore (U.S. Treasury Secretary Steven Mnuchin recently stated that it would be 50 or 100 years before there was any impact). Less well recognised by the general public, but still causing concern, is the increasing concentration of power in the hands of a tiny elite, facilitated by this replacement of human labour by capital (in the shape of technology). Bill Joy, co-founder of Sun Microsystems, made this point in his stunning 2000 essay “Why the Future Doesn’t Need Us.” At a practical level, it’s why there’s a rising cry to break the growing power of the tech monopolies and their super-rich owners.

But the fundamental threat posed by AI is “existential” – it’s a threat to human existence as we know it. Because we simply have no idea what will happen if machines really do become explosively smart and “take over.” Not just filch our jobs, not just give a handful of super-wealthy individuals the whip hand, but become the masters of the human species. Famed inventor and Google executive Ray Kurzweil, whose doorstop volume The Singularity is Near has defined the debate, is as optimistic about the pace of change as he is about its benevolent outcomes.

Mathematics professor and sci-fi writer Vernor Vinge (pronounced Vingey), who is credited with coining the term “singularity” in this sense, agrees that “it will probably occur faster than any technical revolution seen so far”, but is less optimistic:

And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Posthuman era. And for all my technological optimism, I think I’d be more comfortable if I were regarding these transcendental events from one thousand years’ remove . . . instead of twenty.2

Founding editor of Wired, Kevin Kelly, is more skeptical. A long-time critic of Kurzweil and those who think like him, Kelly punches back against the core idea of the singularity – that machines will take charge and then be able to solve all our problems, including mortality itself. He even suggests that Kurzweil’s obsession with the singularity is at root the result of his fear of death (the computer scientist uses such stratagems as caloric restriction and scoffing scores of vitamin tablets to try to slow the ageing process). And while Kelly agrees with the singularity crowd and the Open Letter crowd that we need to take the possibility of AI risk seriously, he’s a lot less worried about a singularity apocalypse.

This critique is especially notable, since Kelly is not simply a tech enthusiast, he’s something of a fatalist about the future. His latest book, a roundup of the twelve tech trends shaping what comes next, is aptly titled The Inevitable. But in the singularity he sees a “cargo cult”, based on false assumptions. He accuses the singularity’s proponents of what he calls “thinkism” – the idea that human thinking is the be-all and end-all, and so if machines can do it better than we can they will solve all our problems in a burst of super-human intelligence. He calls it “the fallacy that future levels of progress are only hindered by a lack of thinking power.”

For one thing, he points out, the world does not lack for extraordinarily smart people. But curing cancer cannot come about through smart thoughts alone; however smart the thinkers, it will need years of trials of treatments and medications. Moreover, there are many kinds of intelligence; it’s not just one thing.

So the question is, where is the limit of intelligence? We tend to believe that the limit is way beyond us, way “above” us, as we are “above” an ant. Setting aside the recurring problem of a single dimension, what evidence do we have that the limit is not us? Why can’t we be at the maximum? Or maybe the limits are only a short distance away from us? Why do we believe that intelligence is something that can continue to expand forever?3

In a thousand years’ time, he reckons, we’ll still be expecting the singularity to occur soon.

Concerns about AI risk have spawned a series of efforts to keep this smartest of human inventions under some kind of human control. Kelly himself calls for us to engineer our values into AIs. U.S. congressman Brad Sherman once asked the Science and Technology Committee to fund “non-ambitious AI,” and then convened a hearing on the need for a non-proliferation approach to such technologies.4

The Hawking-Musk-Gates letter has helped boost other initiatives aimed at ensuring the safe and ethical development of AI, such as Cambridge University’s Centre for the Study of Existential Risk, the Leverhulme Centre for the Future of Intelligence, and the Berkeley Center for Human-Compatible AI.

“Welcome, singularitarians!” was how the conference chair opened proceedings at a cult-like singularity event I attended a decade back in New York City. Yet even among those most committed to the “singularity” analysis, there are clear differences of opinion, even if they are united in believing that this event will occur relatively soon (Kurzweil has suggested various dates; his latest prediction is for 2045). But not all of them see the outcome as clearly as he does. Peter Thiel, long-time Kurzweil financial backer (and Facebook board member), made it plain when he addressed one recent singularity conference that he is more of a gradualist in his vision (in other words, like Kelly, he doesn’t really believe in the sudden, discrete singularity idea at all, as some in the audience were quick to point out). And they are by no means all as optimistic that its arrival will be good news.

So let’s give the last word to Vinge, who’s a much more interesting thinker than Kurzweil: “The longer we humans have our hand on the tiller, the better.”

FOOTNOTES
  1. The basic idea? AI development is powered ultimately by Moore’s Law – the compounding impact of advances in digital processing power. It is therefore “inevitable” that, just as machines can already calculate and remember much better than humans, their power will soon match and then surpass the capacities of the human brain as a whole. Once that happens, machines will run the world.
  2. Vernor Vinge, ‘The Coming Technological Singularity’, presented at the VISION-21 Symposium, 1993
  3. Kevin Kelly, ‘The Myth of a Superhuman AI’, Wired, April 2017; see also his blog: http://kk.org/thetechnium/the-singularity/
  4. He told me this in personal conversation, and said fellow members of the committee responded by laughing. Here is the hearing, at which I testified: https://fas.org/irp/congress/2008_hr/genetics.pdf

Nigel Cameron writes about technology, society, and the future. In 2007 he founded the Washington think tank The Center for Policy on Emerging Technologies. His most recent book is Will Robots Take Your Job?
