There’s been a long-running discussion among sci-fi writers and artificial intelligence researchers since the mid-20th century about what will happen when artificial intelligence (AI) gets really, really smart. Many expect a sudden take-off moment when computers become superhumanly intelligent, creating, as Peter Franklin put it last week, a “digital supreme being”. This scenario is referred to as the “singularity” – the point at which the Moore’s Law curve goes near-vertical.1
Back in 2015, some of the top names in science and technology – including Stephen Hawking, Elon Musk, and Bill Gates – signed an Open Letter warning that the risks of rogue AI go well beyond Amazon eavesdropping in your living room (recording your conversation and sending it to someone else – which happened to one couple in Portland last week). As Musk, whose business empire spans Tesla electric cars and SpaceX rockets, put it: “With artificial intelligence we’re summoning the demon.”
AI has already raised widespread fears about machines taking our jobs – although it’s a threat that governments have basically decided to ignore (U.S. Treasury Secretary Steven Mnuchin recently stated that it would be 50 or 100 years before there was any impact). Less well recognised by the general public, but still causing concern, is the increasing concentration of power in the hands of a tiny elite, facilitated by this replacement of human labour by capital (in the shape of technology). Bill Joy, co-founder of Sun Microsystems, made this point in his stunning 2000 essay “Why the Future Doesn’t Need Us.” At a practical level, it’s why there’s a rising cry to break up the growing power of the tech monopolies and their super-rich owners.
But the fundamental threat posed by AI is “existential” – it’s a threat to human existence as we know it, because we simply have no idea what will happen if machines really do become explosively smart and “take over.” Not just filch our jobs, not just give a handful of super-wealthy individuals the whip hand, but become the masters of the human species. Famed inventor and Google executive Ray Kurzweil, whose doorstop volume The Singularity is Near has defined the debate, is as optimistic about the pace of change as he is about its benevolent outcomes.
Mathematics professor and sci-fi writer Vernor Vinge (pronounced Vingey), who is credited with coining the term “singularity” in this sense, agrees that “it will probably occur faster than any technical revolution seen so far”, but is less optimistic:
And what happens a month or two (or a day or two) after that? I have only analogies to point to: The rise of humankind. We will be in the Posthuman era. And for all my technological optimism, I think I’d be more comfortable if I were regarding these transcendental events from one thousand years’ remove . . . instead of twenty.2
Founding editor of Wired, Kevin Kelly, is more skeptical. A long-time critic of Kurzweil and those who think like him, Kelly punches back against the core idea of the singularity – that machines will take charge and then be able to solve all our problems, including mortality itself. He even suggests that Kurzweil’s obsession with the singularity is at root the result of his fear of death (the computer scientist uses such stratagems as caloric restriction and scoffing scores of vitamin tablets to try to slow the ageing process). And while Kelly agrees with the singularity crowd and the Open Letter crowd that we need to take the possibility of AI risk seriously, he’s a lot less worried about a singularity apocalypse.