In the last year, artificial intelligence has progressed from a science-fiction fantasy to an impending reality. We can see its power in everything from online gadgets to whispers of a new, “post-singularity” tech frontier — as well as in renewed fears of an AI takeover.
One intellectual who anticipated this decades ago is Nick Bostrom, a Swedish philosopher at the University of Oxford and director of its Future of Humanity Institute. He joined UnHerd’s Florence Read to discuss extinction, the risk of government surveillance and how to use AI for the benefit of humanity.
You can watch the full video above.
Anyone who has tried to deal with an artificially stupid help desk or receptionist has wanted to tear his hair out. If the query is not on the list of FAQs, the system is lost. Moreover, if the tank at Tiananmen Square had been driven by an AI, it would have killed the protester. An AI will kill without mercy if the ruler gives the order. In the past, a ruler needed either 20% popular support or a loyal army to rule. A foreign mercenary guard, ineligible to take part in local politics, was essential; an AI is more loyal than those mercenaries. An AI will nuke the capital if the ruler has a command code.
Amazed at how little coverage this is getting – so many leaders of the AI industry consider this to be our number-one existential threat, and yet here we are getting angry about refugees or rowing about the definition of a woman.
In fairness, most people angry about those two things are angry because they’ve realised that our hugely expensive and bloated government – which is quite happy to come down on us like a ton of bricks for driving 21mph on Park Lane or saying the wrong thing online – seems somehow powerless in the face of anyone who simply refuses to respect any laws at all, whether of the rational kind, such as knowing that men and women cannot be treated as interchangeable, or laws of the land concerning who is allowed into the country.
On the second problem, we are tolerating illegal migrants who get around the problem of potential deportation by simply destroying their identity so that we don’t know where to deport them to. That this is actually effective against our own legal institutions rightly infuriates anyone who plays fair themselves.
In both cases, however, we do wonder how a system with such idiotic priorities can possibly hope to deal with an existential issue such as whether or not most of us will be made redundant – or worse – before we even understand why.
5 practical things to consider about AI from the perspective of a crime-tech-demo-dummy in Melbourne, Australia:
1) The value of AI control = the weakest link in the quality of the criteria/ability of the controllers.
2) AI will be controlled by power, where might = right at least in Australia’s case. Our billionaire bikies have been flaunting their easy access to government/military-grade tech since 2009.
3) The law of unintended consequences: we won’t know what we don’t know until undeniable negative consequences are identified. Killing people without any risk of punishment via remote interference with medical systems/devices is already a major threat in Australia at least.
4) The values on which AI’s judgements are based are a major worry, and may be difficult to correctly identify, let alone adjust.
5) AI is unlikely to exist long-term without humans being motivated to keep AI functional.
Incredibly interesting interview. I’ll be browsing Nick Bostrom’s books on Amazon this afternoon.