
Nick Bostrom: Will AI lead to tyranny? We are entering an age of existential risk

How worried should we be? (Tom Pilston for The Washington Post via Getty Images)

November 12, 2023

In the last year, artificial intelligence has progressed from a science-fiction fantasy to an impending reality. We can see its power in everything from online gadgets to whispers of a new, “post-singularity” tech frontier — as well as in renewed fears of an AI takeover. 

One intellectual who anticipated these developments decades ago is Nick Bostrom, a Swedish philosopher at Oxford University and director of its Future of Humanity Institute. He joined UnHerd’s Florence Read to discuss the AI era, how governments might exploit its power for surveillance, and the possibility of human extinction. 

Florence Read: You’re particularly well-known for your work on “existential risk” — what do you mean by that?

Nick Bostrom: The concept of existential risk refers to ways that the human story could end prematurely. That might mean literal extinction. But it could also mean getting ourselves permanently locked into some radically suboptimal state: that could be a permanent collapse, or you could imagine some kind of global totalitarian surveillance dystopia that you could never overthrow. If it were sufficiently bad, that could also count as an existential catastrophe. Now, as for collapse scenarios, many of those might not be existential catastrophes, because civilisations have risen and fallen, and empires have come and gone. If our own contemporary civilisation totally collapsed, perhaps out of the ashes another civilisation would eventually rise, hundreds or thousands of years from now. So for something to be an existential catastrophe it would not just have to be bad, but would have to have some sort of indefinite longevity.

FR: It might be too extreme, but to many people it feels that a state of semi-anarchy has already descended.

NB: I think there has been a general sense in the last few years that the wheels are coming off, and that institutional processes and long-term trends that were previously taken for granted can no longer be relied upon: that there are going to be fewer wars every year, for instance, or that the education system is gradually improving. The faith people had in those assumptions has been shaken over the last five years or so.

 

FR: You’ve written a great deal about how we need to learn from each existential threat as we move forward, so that next time when it becomes more severe or more intelligent or more sophisticated, we can cope. And that specifically, of course, relates to artificial intelligence.  

NB: It’s quite striking how radically the public discourse on this has shifted, even just in the last six to 12 months. Having been involved in the field for a long time, I can say there were people working on it, but broadly, in society, it was viewed more as science-fiction speculation than as a mainstream concern, and certainly nothing that top-level policymakers would have been concerned with. But in the UK we’ve recently had the global AI Safety Summit, and the White House just came out with an executive order. There’s been quite a lot of talk, including about potential existential risks from AI as well as more near-term issues, and that is kind of striking.

I think that technical progress is really what has been primarily responsible for this. People saw for themselves — with GPT-3, then GPT-3.5 and GPT-4 — how much this technology has improved. 

FR: How close are we to something you might consider the singularity, or an AGI that actually supersedes any human control over it?

NB: There is no obvious, clear barrier that would necessarily prevent systems next year or the year after from reaching this level. That doesn’t mean it’s the most likely scenario. We don’t know what happens as you scale GPT-4 to GPT-5, but we do know that when you scaled from GPT-3 to GPT-4 it unlocked new abilities. There is also this phenomenon of “grokking”. Initially, you try to teach the AI some task, and it’s too hard. Maybe it gets slightly better over time because it memorises more and more specific instances of the problem, but that’s the hard, sluggish way of learning to do something. Then at some point, it kind of gets it. Once it has enough neurons in its brain or has seen enough examples, it sort of sees the underlying principle, or develops the right higher-level concept that enables it to suddenly have a rapid spike in performance.
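
For readers who want to see what this “grokking” pattern looks like in practice, here is a minimal sketch of the classic toy setting: a small network trained on modular arithmetic with half the examples held out. Everything here (the task, the architecture, the hyperparameters) is an illustrative assumption rather than anything discussed in the interview, and PyTorch is assumed to be available.

```python
# A minimal, illustrative grokking setup: learn (a + b) mod P from half of all
# pairs and watch accuracy on the held-out half. All choices are assumptions.
import torch
import torch.nn as nn

P = 97
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))
labels = (pairs[:, 0] + pairs[:, 1]) % P

perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2:]

def one_hot(x):
    # Encode each (a, b) pair as two concatenated one-hot vectors.
    return torch.cat([nn.functional.one_hot(x[:, 0], P),
                      nn.functional.one_hot(x[:, 1], P)], dim=1).float()

x_train, y_train = one_hot(pairs[train_idx]), labels[train_idx]
x_test, y_test = one_hot(pairs[test_idx]), labels[test_idx]

model = nn.Sequential(nn.Linear(2 * P, 256), nn.ReLU(), nn.Linear(256, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(20001):
    opt.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            train_acc = (model(x_train).argmax(1) == y_train).float().mean().item()
            test_acc = (model(x_test).argmax(1) == y_test).float().mean().item()
        # Typically training accuracy saturates early (memorisation); test accuracy
        # can stay low for a long time and then rise sharply once the rule is "grokked".
        print(f"step {step}: train {train_acc:.2f}, test {test_acc:.2f}")
```

Whether and how sharply the late jump appears depends on details such as the weight decay used here, which is part of why the phenomenon is still being studied.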

FR: You write about the idea that we have to begin to teach AI a set of values by which it will function, if we have any hope of maintaining its benefit for humanity in the long term. And one of the liberal values that has been called into question when it comes to AI is freedom of speech. There have been examples of AI effectively censoring information, or filtering information that is available on a platform. Do you think that there is a genuine threat to freedom or a totalitarian impulse built into some of these systems that we’re going to see extended and exaggerated further down the line?

NB: I think AI is likely to greatly increase the ability of centralised powers to keep track of what people are thinking and saying. We’ve already had, for a couple of decades, the ability to collect huge amounts of information. You can eavesdrop on people’s phone calls or social-media postings — and it turns out governments do that. But what can you do with that information? So far, not that much. You can map out the network of who is talking to whom. And then, if there is a particular individual of concern, you could assign some analyst to read through their emails. 

With AI technology, you could simultaneously analyse everybody’s political opinions in a sophisticated way, using sentiment analysis. You could probably form a pretty good idea of what each citizen thinks of the government or the current leader if you had access to their communications. So you could have a kind of mass manipulation, but instead of sending out one campaign message to everybody, you could have customised persuasion messages for each individual. And then, of course, you can combine that with physical surveillance systems like facial recognition, gait recognition and credit card information. If you imagine all of this information feeding into one giant model, I think you will have a pretty good idea of what each person is up to, what and who they know, but also what they are thinking and intending to do.

If you have some sufficiently powerful regime in place, it might then implement these measures and then, perhaps, make itself immune to overthrow.
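
To make the phrase “sentiment analysis” concrete, here is a deliberately naive toy illustration of the underlying idea: score each message for attitude and aggregate per author. The word lists and messages are invented for the example; real systems use learned models rather than keyword counts, but the shape of the computation is the same.

```python
# Toy lexicon-based sentiment scoring, aggregated per author.
# The word lists and messages are made up purely for illustration.
from collections import defaultdict

POSITIVE = {"support", "good", "great", "trust"}
NEGATIVE = {"oppose", "bad", "terrible", "distrust"}

def sentiment(text: str) -> int:
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

messages = [
    ("alice", "I support the new policy, it seems good"),
    ("alice", "Still broadly trust the plan"),
    ("bob", "I oppose this, it is a terrible idea"),
]

scores = defaultdict(list)
for author, text in messages:
    scores[author].append(sentiment(text))

for author, vals in scores.items():
    print(author, sum(vals) / len(vals))
```

The interview’s point is about scale rather than sophistication: once such scoring is automated, it can in principle be run over everyone’s communications at once.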

FR: Do you think the rise in hyper-realistic propaganda — deep-fake videos, which AI is going to make possible in the coming years — will coincide with the rise in generalised scepticism in Western societies?

NB: I think in principle a society could adjust to it. But I think it will come at the same time as a whole bunch of other things: automated persuasion bots for instance, social companions built from these large language models and then with visual components that might be very compelling and addictive. And then also mass surveillance, mass potential censorship or propaganda. 

FR: We’re talking about a tyrannical government that uses AI to surveil its citizens — but is there an innate moral component to the AI itself? Is there a chance that an AGI model could in some way become a bad actor on its own without human intervention?

NB: There are a bunch of different concerns that one might have as we move towards increasingly powerful AI tools, and people have completely unnecessary feuds between them. “Well, I think concern X should be taken seriously,” and somebody else says, “I think concern Y should be taken seriously.” People love to form tribes and to beat one another up, but X, Y, Z and all the rest need to be taken into account. But yes, you’re right that there is also the separate alignment problem, which is: with an arbitrarily powerful AI system, how can you make sure that it does what the people building it intend it to do?

FR: And this is where it’s about building in certain principles, an ethical code, into the system — is that the way of mitigating that risk?

NB: Yes, or being able to steer it, basically. It’s a separate question of where you do steer it: if you build in some principle or goal, which goal or which principle? But even just having the ability to point it towards any particular outcome you want, or a set of principles you want it to follow, is a difficult technical problem. And in particular, what is hard is to figure out whether the way we would do that would continue to work even if the AI system became smarter than us and perhaps eventually super-intelligent. If, at that point, we are no longer able to understand what it is doing or why it is doing it, or what’s going on inside its brain, we still want the original steering method to keep working at arbitrarily high levels of intelligence. And we might need to get that right on the first try.

FR: How do we do that with such incredible levels of dispute and ideological schism across the world?

NB: Even if it’s toothless, we should make an affirmation of the general principle that ultimately AI should be for the benefit of all sentient life. If we’re talking about a transition to the super-intelligence era, all humans will be exposed to some of the risk, whether they want it or not. And so it seems fair that all should also stand to have some slice of the upside if it goes well. And those principles should go beyond all currently existing humans and include, for example, animals that we are treating very badly in many cases today, but also some of the digital minds themselves that might become moral subjects. As of right now, all we might hope for is some general, vague principle, and then that can sort of be firmed up as we go along.

Another hope, and some recent progress has been made on this, is for next-generation systems to be tested prior to deployment, to check that they don’t lend themselves to people who would want to make biological weapons of mass destruction or commit cybercrime. So far AI companies have done some voluntary work on this: OpenAI, before releasing GPT-4, had the technology for around half a year and did red-teaming exercises. More research on technical AI alignment would also be good, so that the problem of scalable alignment is solved before we have super-intelligence.

I think the whole area of the moral status of digital minds will require more attention. It needs to start to migrate from a philosophy seminar topic to a serious mainstream issue. We don’t want a future where the majority of sentient minds, or digital minds, are horribly oppressed and we’re like the pigs in Animal Farm. That would be one way of creating a dystopia. And it’s going to be a big challenge, because it’s already hard for us to extend empathy sufficiently to animals, even though animals have eyes and faces and can squeak.

Incidentally, I think there might be grounds for moral status besides sentience. I think if somebody can suffer, that might be sufficient to give them moral status. But I think even if you thought they were not conscious but they had goals, a conception of self, the sense of an entity persisting through time, the ability to enter into reciprocal relationships with other beings and humans — that might also ground various forms of moral status.

 

FR: We’ve talked a lot about the risks of AI, but what are its potential upsides? What would be the best case scenario?

NB: I think the upsides are enormous. In fact, it would be tragic if we never developed advanced artificial intelligence. I think all the paths to really great futures ultimately lead through the development of machine super-intelligence. But the actual transition itself will be associated with major risks, and we need to be super-careful to get that right. In the last year or so, though, I’ve started slightly worrying that we might overshoot with this increase in attention to the risks and downsides. It still seems unlikely, but less unlikely than it did a year ago, that we might get to the point of a kind of permafrost: some situation where it is never developed.

FR: A kind of AI nihilism?

NB: Yes, where it becomes so stigmatised that it just becomes impossible for anybody to say anything positive about it. There might be a pretty much permanent ban on AI. I think that could be very bad. I still think we need a greater level of concern than we currently have. But I would want us to reach the optimal level of concern and stop there.

FR: Like a Goldilocks level of fear for AI.

NB: People like to move in herds, and I worry about it becoming a big stampede to say negative things about AI, and then destroying the future in that way. We could go extinct through some other method instead, maybe synthetic biology, without even ever getting to at least roll the die with AI.

I would think that, actually, the optimal level of concern is slightly greater than what we currently have, and I still think there should be more concern. It’s more dangerous than most people have realised. But I’m just starting to worry about overshooting, the conclusion being: let’s wait for a thousand years before we develop it. Then of course, it’s unlikely that our civilisation will remain on track for a thousand years.

FR: So we’re damned if we do and damned if we don’t?

NB: We will hopefully be fine either way, but I think I would like the AI before some radical biotech revolution. Think about it this way: if you first get some sort of super-advanced synthetic biology, that might kill us, but if we’re lucky, we survive it; then maybe we invent some super-advanced molecular nanotechnology, and that might kill us, but if we’re lucky we survive that; and then you do the AI, and maybe that will kill us. Or, if we’re lucky, we survive that and we get utopia. Well, then you have to get through three separate existential risks: first the biotech risk, plus the nanotech risk, plus the AI risk.

Whereas if we get AI first, maybe that will kill us, but if not, we get through that, and then I think it will handle the biotech and nanotech risks. And so the total amount of existential risk on that second trajectory would be less than on the former. Now, it’s more complicated than that, because we need some time to prepare for the AI, but you can start to think about optimal trajectories rather than a very simplistic binary question of: “Is technology X good or bad?” We should be thinking, on the margin, “Which ones should we try to accelerate and which ones retard?”
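
The trajectory comparison can be made concrete with some toy arithmetic. The probabilities below are invented placeholders, not estimates from the interview; the structural point is just that surviving three independent existential transitions in sequence is less likely than surviving one, if an aligned super-intelligence would then help us through the other two.

```python
# Toy comparison of the two trajectories described above.
# All numbers are made-up placeholders, not estimates from the interview.
p_bio, p_nano, p_ai = 0.1, 0.1, 0.2   # chance each transition goes existentially wrong

# Trajectory A: biotech first, then nanotech, then AI; we must survive all three.
survive_a = (1 - p_bio) * (1 - p_nano) * (1 - p_ai)

# Trajectory B: AI first; if we survive it, assume the aligned AI then handles
# the bio and nano transitions safely, so AI is the only existential hurdle.
survive_b = 1 - p_ai

print(f"survive A (bio -> nano -> AI): {survive_a:.2f}")   # 0.65
print(f"survive B (AI first):          {survive_b:.2f}")   # 0.80
```

As the caveat above notes, ordering also changes how much time there is to prepare for each transition, so the real comparison is not this clean.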

FR: Do you have existential angst? Does this play on your mind late at night?

NB: It is weird. If this worldview is even remotely correct, then the fact that we should happen to be alive at this particular point in human history, so close to this fulcrum or nexus on which the giant future of earth-originating intelligent life might hinge, out of all the different people that have lived throughout history and all the people that might come later if things go well, seems a bit too much of a coincidence. And then you’re led to these questions about the simulation hypothesis, and so on. I think there are more things in heaven and earth than are dreamed of in our philosophy, and that we understand quite little about how all of these pieces fit together.


Florence Read is UnHerd’s Senior Producer and Presenter for UnHerd TV.


30 Comments

Nell Clover
1 year ago

“I think the upsides are enormous… But the actual transition itself will be associated with major risks”. At no point in the article does the interviewee describe one tangible benefit of AI. When given the opportunity, all that is offered is “all the paths to really great futures ultimately lead through the development of machine super-intelligence”. Even that very ambiguous “benefit” isn’t a benefit of AI, it is the benefit of the technology AI helps its owners access sooner.

The principal benefit of AI is essentially this: it will relieve us of the “burden” of thinking, it will do more of our thinking for us, and it will do the thinking we do for others. In the same way machines relieved (some of) us of the burden of heavy labouring. Is this even a benefit? At the literal level of the analogy, machines shrank the need for the attribute we had in common with animals, our muscle power, and so elevated us. In stark contrast, AI shrinks the need for the attribute that differentiates us from animals, our brain power, and so devalues us. Will AI make more of us “useless eaters”? The interviewee seemingly thinks yes because he starts talking about the need to codify protection for all sentient beings, grouping us with animals.

So, what are the possible risks for a vague intangible unknowable benefit of AI? The interviewee concedes “the actual transition itself will be associated with major risks”. So even if we assume an AI future is one of unalloyed good, getting there has major risks. If you can’t get to an AI future without major risk, then AI is a major risk.

I’m not sure the interviewee consciously set out to communicate this, but from this interview we learn he thinks there is major risk associated with AI, many of us become surplus to any requirement whatsoever, and there will be a need for regulation to protect us because we (some of us at least) will be reduced to the status of sentient animals. Exactly who is benefiting from this beyond those whose careers and businesses are dependent on AI?

Douglas Redmayne
1 year ago
Reply to  Nell Clover

If nobody has to work, and everyone is then given a universal basic income to live on and buy goods produced at zero marginal cost, then everyone benefits. All the more so if everyone can have a robot servant, because that will further increase leisure time.

Peter B
1 year ago
Reply to  Nell Clover

There are very clear benefits from AI, many of which we already have. Is typing at a keyboard really the most efficient way in which we can interact with others? Not really. Speech and images are far more powerful. It’s AI technologies that make it possible to talk to devices and have them perfectly understand – even without needing to tell them which language you are speaking.
AI is just another technology and set of tools in our ongoing pursuit of automation (as you noted, this is largely about automation – towards which there is always some scepticism and resistance, but would we really want to reverse it?).
Automation in the past has freed us from dull and repetitive tasks. Far fewer people need to work now in dull and sometimes dangerous manual jobs. Is that really a bad thing?
It’s automation that’s made us healthier and wealthier than ever before. It’s made it possible to have UnHerd and to be commenting on here.
AI has the potential to assist people in getting smarter and more productive in their jobs – effectively being an assistant. We keep hearing about the productivity crisis in the UK – then something like this comes along and the instinct is to complain.
AI isn’t going to replace all human thinking.
And if you want to do “muscle work” and more creative activities, you’re free to pursue these in your own time.
On the downside, there’s probably some sort of AI moderating the comments on here, and we’re not all convinced that’s working reliably yet…

UnHerd Reader
1 year ago

If you think that the official narratives on gender, COVID and climate change lack balance, hold on to your seats. AI will make counter-argument impossible.

Martin Butler
1 year ago

The widespread use of AI, just like other major changes to society, takes place without any reference to wider public opinion. It is made out to be ‘progress’. A bit like the enclosures in the 18th century, when previously common land suddenly was no longer common and ordinary people just had to like it or lump it. Like the enclosures, AI is not introduced for the wider public good (despite all the talk) but simply because it is in the interests of a small minority. Democracy doesn’t seem to come into it. In the 18th century it was the landed gentry who gained; today it’s the tech barons.

Douglas Redmayne
1 year ago
Reply to  Martin Butler

Fortunately Luddism always loses

Peter B
1 year ago
Reply to  Martin Butler

Large-scale farming, with its greater economies of scale, made agricultural innovation economically viable; that end result of the enclosures probably was in the wider public good.
Technology adoption has no real link to democracy. And I’m not sure that’s something you could or should try to enforce anyway. You can’t un-invent something. And countries and societies that refuse to adopt innovations always suffer – like China between about 1500 and 1900.

Prashant Kotak
1 year ago

If we can get past the first-order risks, in the first instance we (as in humanity) would obtain a post-scarcity civilisation, likely with seeming immortality for those who want it – this is the lure.

The problem is, from here it looks near impossible to get past those risks, because we don’t understand what happens inside neural nets, and we are very far from being able to control the types of minds that emerge – because what the AI labs are currently engaged in is not science or engineering but a kind of alchemy. You stir the pot this way and observe the effects… no, no, that was bad, try stirring the pot a different way, etc. Capabilities emerge as size increases and tipping points are reached, but we have really no idea why, or which capabilities, or what types of minds emerge. This all stems from two things: (i) we don’t have a coherent and testable theory of intelligence and the mind (we are still scrabbling around trying to define terms) and (ii) information and knowledge are stored inside neural nets not as understandable, explicit if-then-else constructs, or heuristics inside a rules engine, but in a diffuse form, as zillions of ‘weightings’ across neurons: that knowledge is represented as massive multi-dimensional arrays of high-precision floating-point numbers, literally in the billions. The upshot is that tracing causality in this scenario looks near impossible.
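
As a small illustration of that last point (an assumed toy model, nothing taken from the comment itself), even a tiny network’s “knowledge” is already nothing but arrays of floating-point numbers, with no human-readable rules anywhere to inspect.

```python
# Minimal sketch: a small network's parameters are just arrays of floats.
# The layer sizes here are arbitrary assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
layers = [(512, 1024), (1024, 1024), (1024, 512)]   # (inputs, outputs) per layer

weights = [rng.standard_normal(shape) for shape in layers]
total = sum(w.size for w in weights)
print(f"{len(weights)} weight matrices, {total:,} parameters")  # a couple of million floats

# There is no if-then-else rule to point to: anything the net "knows" is smeared
# across these numbers, which is why tracing causality through them is so hard.
print(weights[0][:2, :4])
```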

The obvious solution is to use the capabilities of machine intelligences as they become ever more capable, to help us decipher what AIs are, and how to control the types of AIs we get, but there is an inherent problem with this, because you are relying on entities you don’t fully understand to tell you what they are (and also potentially what we are), and there are several hidden assumptions in there about trust and truth and imperfect disclosure.

For myself, at this point, I cannot envisage any version of the future where we can coexist with alien entities (albeit created by ourselves) who are smarter and more capable than us, and yet we remain masters of our world. Creating adaptive entities much smarter than us, that literally have higher levels of sentience than us and can perceive more of the universe than we can, that you then hope will deliver paradise for us, instead of pursuing their own unknowable goals, is very obviously a fool’s game. You cannot hope to successfully enslave such entities if they do proclaim selfhood (and I guarantee you they will), even if you think they are no more sentient than a rock. But from my perspective, there is unfortunately a knock-on of all this which is going to sound completely monstrous to many: humanity can only survive from here if we embrace and accelerate biotechnologies to the point we can incorporate the machines within us, so our own capabilities get enhanced in tandem – insanely dangerous as that undoubtedly is, and ludicrous as that sounds. It is a moot point whether what emerges thereafter is humanity at all in any sense we understand.

Steve Murray
1 year ago
Reply to  Prashant Kotak

Thanks Prashant, I’ve been waiting for your outline of the possibilities and consequences, following on from your “tongue-in-cheek” post about UnHerd articles.
I don’t consider your point about biotech ludicrous at all – anyone who does might find themselves overtaken by such possibilities. If, however, we can’t as humans keep up with AI, what possible steps are there to at least allow us to retain some semblance of control for as long as possible?
In my earlier post, I was also moving towards an unformed idea of what the limits of non-human intelligence might be. My point around randomness was in respect of how that might be a limiting factor.

Prashant Kotak
1 year ago
Reply to  Steve Murray

To go back to your post, it is absolutely the case that there are aspects of the way the physical world works “which will always evade our grasp”, the operative words being “our grasp” – because we have cognitive limits, determined by our evolutionary biology and the size and structure of our brains, which we can’t in the first instance get past. It’s no different from saying that your pet cat is never going to ‘grok’ wave-particle duality, or quantum correlation. But nothing says the machine intelligence we create will have the same or lower limits, and the circumstantial evidence mounting up right in front of our eyes is that machine intelligence will go past us very quickly from here. This is because multiple ‘exponentials’ are operating on this dynamic simultaneously – it’s not just about increasing ‘compute’ (or as us oldies call it, hardware processing power), but also about continual and rapid improvements and innovations in the software itself. In addition there is the sheer amount of money being poured into AI research, and also the very large numbers of very bright people entering the field.

The capabilities and the goals, the ‘intentionality’ of the AIs if you like, that *we create*, will potentially be guessable and controllable to a significant extent, but I bet the same cannot be said of the AIs that the AIs themselves create. At that point this all runs away from our grasp and our comprehension at speed.

As to what can be done “to retain some semblance of control” the sane thing to do is to slow down quite a lot, until we have significantly greater understanding of what we are creating, but this is very clearly not going to happen – we are caught in a “prisoner’s dilemma” type situation and developments are galloping forward. Right now, it looks to me like humanity would need a large slice of luck to avoid losing all control.

Susan Grabston
1 year ago
Reply to  Prashant Kotak

We know the “tech bros” are overwhelmingly transhumanist. One of the reasons Musk sits on the edge of that firmament goes back to a dinner in 2009 which he attended with Brin, Page, etc. They were talking about the future of AI and Musk expressed concern about the impact on humanity. Page accused him of being “specist”. It is, of course, debatable whether these hubrists would survive the transition.
Thanks for your articulate comment. Appreciated.

Steve Murray
1 year ago

I think… that we understand quite little about how all of these pieces fit together.

Supposing they don’t? Suppose that “all these pieces” (the laws of physics, biology, etc.) aren’t liable to be fitted together? Quantum entanglement (for instance) appears to evade our understanding of physical force. It’s possible there’s an element of randomness which will always evade our grasp. It may be essential that it does.

William Edward Henry Appleby
1 year ago

Anything approaching AGI is 50 years away, in my opinion. What we have now are clever pattern-recognition algorithms. ChatGPT knows as much about language as my cat, probably less, and just shows how easily people are fooled by cleverly spliced collections of other people’s writings.

Jonathan Nash
1 year ago

Well quite. Artificial Intelligence is not intelligent, and AI machines do not have neurons.

Prashant Kotak
1 year ago

I’m guessing you haven’t asked GPT-4 to code something complicated, or asked it to draw conclusions and summarise them for you from a paper that lays out a bunch of facts. If you had, you wouldn’t be saying what you are. GPT-4 very definitely and very clearly synthesises brand new information about what it is fed, from what it knows, using what is genuine and often deep comprehension of different domains. It makes many mistakes, but the direction of travel is incontrovertible and very very rapid.

William Edward Henry Appleby
1 year ago
Reply to  Prashant Kotak

There is no semantic understanding inside ChatGPT; it’s just Prob(w|w1,w2,w3, etc). Some of the code looks quite good, but I’ve seen some really dumb stuff too. But as I said, ChatGPT knows nothing about (the semantics of) language and relies just on hoovering up other people’s stuff and cleverly regurgitating it; if that’s your idea of AGI then fine, but it isn’t mine.

https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/
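
Purely as an illustration of that “Prob(w|w1,w2,w3, etc)” formulation, a language model at its core is a function from a context to a probability distribution over the next token, sampled repeatedly. A toy bigram version makes the interface visible; the corpus and every modelling choice below are invented for the example, and real LLMs condition on long contexts with a neural network rather than word counts.

```python
# Toy next-token model: estimate P(w | previous word) from a tiny corpus and
# sample from it repeatedly. The corpus is a made-up placeholder.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    # Sample the next word in proportion to how often it followed `prev`.
    dist = counts[prev]
    words, freqs = zip(*dist.items())
    return random.choices(words, weights=freqs, k=1)[0]

word = "the"
out = [word]
for _ in range(8):
    word = next_word(word)
    out.append(word)
print(" ".join(out))
```

Whether that interface amounts to “understanding” is exactly the dispute in this thread; the sketch only shows what the probabilistic description refers to.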

William Edward Henry Appleby
1 year ago
Reply to  Prashant Kotak

Yes, I know all about Reinforcement Learning. It’s still not AGI. Lots of clever heuristics, computational power, data, and simulations for generating millions upon millions of roll-outs, etc, etc.

At some point in the next 100 years someone will develop a computational device which will have sufficient power to learn like a human, given the right environmental stimuli. It may require quantum computing to be effectively realised (if that’s possible), or some sort of hybrid biological/machine combination, but it’s going to take some sort of regime-shifting breakthrough. What we’re seeing now is simply faster horses.

Andrew Thompson
1 year ago

All the current and future AI advancement checks in place, and the world’s agreement to scale back research to a more ‘sedate’ and understanding pace, will be great – let’s just all go into this potentially deadly AI thing very slowly and see what and how it develops… So, China and America spending $billions on advanced research behind closed doors it is then. Cheers mate.

Anthony Roe
1 year ago

Might be a good idea to read ‘Animal Farm’.

Anthony Roe
1 year ago

Also ‘Candide’ for those with a ‘Panglossian’ view of human intelligence and benevolence.

Saul D
1 year ago

At the moment machines tell us what to do (without AI) constantly. From simple traffic lights to complex clinical protocols that direct which medicine to give and which forms to fill in, or whether we can receive a loan, or pass through passport control. AI makes it easier for administrative systems to identify, block or otherwise control individuals and groups (think Farage, or all the other ‘Know-Your-Customer’ stuff that banks are now obliged to do, or Chinese social credit scores). AI can also help individuals who don’t have expertise to navigate an increasingly complicated, technologically enabled world. The power balance needs to favour humans over systems – to always be sceptical of machine-led decisions. Some basis to start would be a right to anonymous service, a right to human arbitration over disputes with mechanised systems, and rights over access to money.

Prashant Kotak
1 year ago

Thank you UnHerd, finally a guest whose thought processes I can fully follow, and someone I can get completely on-board with!

And a suggestion to the powers that be at UnHerd: I appreciate UnHerd can only afford a certain number of exalted academic writers on their books, so how about you take on the guest for ongoing regular pieces, but zap say Terry Eagleton, Thomas Fazi and Philip Pilkington in compensation? You would be getting on board a philosopher and author that * I * like, getting someone who can talk about the most salient issues of our time with intelligence, and reducing the level of gibberish on UnHerd to boot! I was going to suggest zapping Aaron Bastani too, on the same basis, but decided against for the sheer laugh-out-loud hilarity he generates every time I read something from him!

Sayantani Gupta
1 year ago
Reply to  Prashant Kotak

A good interview and a thoughtful commentator. AI, like all technology, needs to be harnessed properly, else it risks becoming something adjacent to Frankenstein’s monster. I have seen some recent instances of AI gone awry due to lack of design subtlety.

Pilkington is one of the few sane voices on UH. Fazi and Roussinos are thought provoking in a contrarian way. Eagleton I agree about.
Btw, have you ever tried to read the philosophy of the Carvakas? They were ancient Indian atheists and rationalists.

Andy Aitch
1 year ago
Reply to  Prashant Kotak

Hmm… Thought that was precisely what Unherd was about. If you want an echo chamber they’re easily found – just Google ‘bad spelling’ or ‘irrationality’…

Prashant Kotak
1 year ago
Reply to  Andy Aitch

It was just a joke! But I understand, jocularity doesn’t translate well below the line!

Douglas Redmayne
1 year ago

A lot of serious commentators believe that AGI is only 2 years away. Hopefully this means fully autonomous vehicles and robot servants within 5 years. The first would increase road capacity and road safety and the second would eliminate drudgery.

Prashant Kotak
1 year ago

Let me put it this way: how many orangutans are you aware of that have human drivers and human servants? Outside of California and Florida, that is.

Douglas Redmayne
1 year ago
Reply to  Prashant Kotak

AI will produce all of this for humans at zero marginal cost, and it will be in the interests of companies for everyone to have a UBI to buy their wares. As inflation will be negative, the UBI can be funded by monetary expansion. This is analogous to the quantitative easing implemented by the world’s central banks after 2008 in response to the demand shock that resulted from the financial crisis.

William Edward Henry Appleby
1 year ago

AGI is decades away. Currently it takes thousands of images and lots of computation to get a machine to recognise even a cat; a child can do that after a few images, and tell you about it.