30 Comments
Nell Clover
6 months ago

“I think the upsides are enormous… But the actual transition itself will be associated with major risks”. At no point in the article does the interviewee describe one tangible benefit of AI. When given the opportunity, all that is offered is “all the paths to really great futures ultimately lead through the development of machine super-intelligence”. Even that very ambiguous “benefit” isn’t a benefit of AI; it is the benefit of the technology that AI helps its owners access sooner.

The principal benefit of AI is essentially this: it will relieve us of the “burden” of thinking, it will do more of our thinking for us, and it will do the thinking we do for others, in the same way that machines relieved (some of) us of the burden of heavy labour. Is this even a benefit? At the literal level of the analogy, machines shrank the need for the attribute we had in common with animals, our muscle power, and so elevated us. In stark contrast, AI shrinks the need for the attribute that differentiates us from animals, our brain power, and so devalues us. Will AI make more of us “useless eaters”? The interviewee seemingly thinks yes, because he starts talking about the need to codify protection for all sentient beings, grouping us with animals.

So, what are the possible risks for a vague intangible unknowable benefit of AI? The interviewee concedes “the actual transition itself will be associated with major risks”. So even if we assume an AI future is one of unalloyed good, getting there has major risks. If you can’t get to an AI future without major risk, then AI is a major risk.

I’m not sure the interviewee consciously set out to communicate this, but from this interview we learn he thinks there is major risk associated with AI, many of us become surplus to any requirement whatsoever, and there will be a need for regulation to protect us because we (some of us at least) will be reduced to the status of sentient animals. Exactly who is benefiting from this beyond those whose careers and businesses are dependent on AI?

Douglas Redmayne
6 months ago
Reply to  Nell Clover

If nobody has to work, and everyone is given a universal basic income to live on and to buy goods produced at zero marginal cost, then everyone benefits. All the more so if everyone can have a robot servant, because that will further increase leisure time.

Peter B
6 months ago
Reply to  Nell Clover

There are very clear benefits from AI, many of which we already have. Is typing at a keyboard really the most efficient way in which we can interact with others? Not really. Speech and images are far more powerful. It’s AI technologies that make it possible to talk to devices and have them perfectly understand – even without needing to tell them which language you are speaking.
AI is just another technology and set of tools in our ongoing pursuit of automation (as you noted, this is largely about automation – towards which there is always some scepticism and resistance, but would we really want to reverse it?).
Automation in the past has freed us from dull and repetitive tasks. Far fewer people now need to work in dull and sometimes dangerous manual jobs. Is that really a bad thing?
It’s automation that’s made us healthier and wealthier than ever before. It’s made it possible for UnHerd to exist and for us to be commenting on here.
AI has the potential to help people get smarter and more productive in their jobs – effectively acting as an assistant. We keep hearing about the productivity crisis in the UK – then something like this comes along and the instinct is to complain.
AI isn’t going to replace all human thinking.
And if you want to do “muscle work” and more creative activities, you’re free to pursue these in your own time.
On the downside, there’s probably some sort of AI moderating the comments on here, and we’re not all convinced that’s working reliably yet…

UnHerd Reader
6 months ago

If you think that the official narratives on gender, COVID and climate change lack balance, hold on to your seats. AI will make counter-argument impossible.

Martin Butler
6 months ago

The widespread use of AI, just like other major changes to society, takes place without any reference to wider public opinion. It is made out to be ‘progress’. It’s a bit like the enclosures in the 18th century, when previously common land suddenly was no longer common and ordinary people just had to like it or lump it. Like the enclosures, AI is not introduced for the wider public good (despite all the talk) but because it is in the interests of a small minority. Democracy doesn’t seem to come into it. In the 18th century it was the landed gentry who gained; today it’s the tech barons.

Douglas Redmayne
6 months ago
Reply to  Martin Butler

Fortunately Luddism always loses

Peter B
6 months ago
Reply to  Martin Butler

Large-scale farming – the end result of enclosure – brought greater economies of scale and made agricultural innovation economically viable, and probably was in the wider public good.
Technology adoption has no real link to democracy. And I’m not sure that’s something you could or should try to enforce anyway. You can’t un-invent something. And countries and societies that refuse to adopt innovations always suffer – like China between about 1500 and 1900.

Prashant Kotak
6 months ago

If we can get past the first-order risks, we (as in humanity) would in the first instance obtain a post-scarcity civilization, likely with seeming immortality for those who want it – this is the lure.

The problem is, from here it looks near impossible to get past those risks, because we don’t understand what happens inside neural nets, and we are very far from being able to control the types of minds that emerge – because what the AI labs are currently engaged in is not science or engineering but causal alchemy. You stir the pot this way and observe the effects… no, no, that was bad, try stirring the pot a different way, and so on. Capabilities emerge as size increases and tipping points are reached, but we have really no idea why, or which capabilities, or what types of minds will emerge. This all stems from two things: (i) we don’t have a coherent and testable theory of intelligence and the mind (we are still scrabbling around trying to define terms), and (ii) information and knowledge are stored inside neural nets not as understandable, explicit if-then-else constructs or heuristics inside a rules engine, but in diffuse form, as zillions of ‘weightings’ across neurones – massive multi-dimensional arrays of high-precision floating-point numbers, literally billions of them. The upshot is that tracing causality in this scenario looks near impossible.
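(A toy sketch of the contrast being drawn – everything below is invented for illustration; the “spam rule” and the tiny network are hypothetical stand-ins, not anything from the interview:)

```python
import numpy as np

# Explicit, inspectable knowledge: you can read off exactly why it fires.
def is_spam_rule(msg: str) -> bool:
    return "free money" in msg.lower()

# Diffuse knowledge: a tiny neural net. What it "knows" lives in W1 and W2,
# arrays of floats with no human-readable if-then-else structure.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 16))   # real LLMs have billions of such entries
W2 = rng.normal(size=(16, 1))

def is_spam_net(features: np.ndarray) -> bool:
    hidden = np.tanh(features @ W1)            # distributed intermediate state
    score = 1 / (1 + np.exp(-(hidden @ W2)))   # sigmoid output
    return bool(score > 0.5)

# Asking *why* is_spam_net answered yes means tracing the decision back
# through every float in W1 and W2 – the causality problem described above.
print(is_spam_net(rng.normal(size=64)))
```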

The obvious solution is to use the capabilities of machine intelligences as they become ever more capable, to help us decipher what AIs are, and how to control the types of AIs we get, but there is an inherent problem with this, because you are relying on entities you don’t fully understand to tell you what they are (and also potentially what we are), and there are several hidden assumptions in there about trust and truth and imperfect disclosure.

For myself, at this point, I cannot envisage any version of the future where we can coexist with alien entities (albeit created by ourselves) who are smarter and more capable than us, and yet we remain masters of our world. Creating adaptive entities much smarter than us, that literally have higher levels of sentience than us and can perceive more of the universe than we can, and that you then hope will deliver paradise for us instead of pursuing their own unknowable goals, is very obviously a fool’s game. You cannot hope to successfully enslave such entities if they do proclaim selfhood (and I guarantee you they will), even if you think they are no more sentient than a rock. But from my perspective, there is unfortunately a knock-on of all this which is going to sound completely monstrous to many: humanity can only survive from here if we embrace and accelerate biotechnologies to the point where we can incorporate the machines within us, so our own capabilities get enhanced in tandem – insanely dangerous as that undoubtedly is, and ludicrous as it sounds. It is a moot point whether what emerges thereafter is humanity at all, in any sense we understand.

Steve Murray
6 months ago
Reply to  Prashant Kotak

Thanks Prashant, I’ve been waiting for your outline of the possibilities and consequences, following on from your “tongue-in-cheek” post about UnHerd articles.
I don’t consider your point about biotech ludicrous at all – anyone who does might find themselves overtaken by such possibilities. If, however, we can’t as humans keep up with AI, what possible steps are there to at least allow us to retain some semblance of control for as long as possible?
In my earlier post, I was also moving towards an unformed idea of what the limits of non-human intelligence might be. My point around randomness was in respect of how that might be a limiting factor.

Prashant Kotak
6 months ago
Reply to  Steve Murray

To go back to your post, it is absolutely the case that there are aspects of the way the physical world works “which will always evade our grasp”, the operative words being “our grasp” – because we have cognitive limits, determined by our evolutionary biology and the size and structure of our brains, which we can’t in the first instance get past. It’s no different from saying that your pet cat is never going to ‘grok’ wave-particle duality, or quantum correlation. But nothing says the machine intelligence we create will have the same or lower limits, and the circumstantial evidence mounting up right in front of our eyes is that machine intelligence will go past us very quickly from here. This is because multiple ‘exponentials’ are operating on this dynamic simultaneously – it’s not just about increasing ‘compute’ (or, as us oldies call it, hardware processing power), but also about continual and rapid improvements and innovations in the software itself. On top of that, there is the sheer amount of money being poured into AI research, and the very large number of very bright people entering the field.
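(A back-of-the-envelope illustration of why several simultaneous exponentials matter – the growth rates below are made-up placeholders, not measurements:)

```python
# If compute, algorithmic efficiency and investment each grow exponentially
# and independently, effective capability grows as their *product*.
compute_growth = 1.5    # hypothetical yearly multiplier from hardware
software_growth = 1.4   # hypothetical yearly multiplier from better methods
funding_growth = 1.3    # hypothetical yearly multiplier from investment

effective = 1.0
for year in range(1, 11):
    effective *= compute_growth * software_growth * funding_growth
    print(f"year {year:2d}: ~{effective:,.0f}x effective capability")

# A single 1.5x/year exponential gives ~58x after ten years;
# the combined product gives ~23,000x.
```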

The capabilities and the goals, the ‘intentionality’ of the AIs if you like, that *we create*, will potentially be guessable and controllable to a significant extent, but I bet the same cannot be said of the AIs that the AIs themselves create. At that point this all runs away from our grasp and our comprehension at speed.

As to what can be done “to retain some semblance of control”, the sane thing to do is to slow down quite a lot, until we have significantly greater understanding of what we are creating. But this is very clearly not going to happen – we are caught in a “prisoner’s dilemma” type situation and developments are galloping forward. Right now, it looks to me like humanity would need a large slice of luck to avoid losing all control.

Susan Grabston
6 months ago
Reply to  Prashant Kotak

We know the “tech bros” are overwhelmingly transhumanist. One of the reasons Musk sits on the edge of that firmament goes back to a dinner in 2009 which he attended with Brin, Page, etc. They were talking about the future of AI and Musk expressed concern about the impact on humanity. Page accused him of being “specist”. It is, of course, debatable whether these hubrists would survive the transition.
Thanks for your articulate comment. Appreciated.

Steve Murray
6 months ago

“I think… that we understand quite little about how all of these pieces fit together.”

Supposing they don’t? Suppose that “all these pieces” (the laws of physics, biology, etc.) aren’t liable to be fitted together? Quantum entanglement (for instance) appears to evade our understanding of physical force. It’s possible there’s an element of randomness which will always evade our grasp. It may be essential that it does.

William Edward Henry Appleby
6 months ago

Anything approaching AGI is 50 years away, in my opinion. What we have now are clever pattern-recognition algorithms. ChatGPT knows as much about language as my cat, probably less, and just shows how easily people are fooled by cleverly spliced collections of other people’s writings.

Jonathan Nash
6 months ago

Well quite. Artificial Intelligence is not intelligent, and AI machines do not have neurons.

Prashant Kotak
6 months ago

I’m guessing you haven’t asked GPT-4 to code something complicated, or asked it to draw conclusions from a paper that lays out a bunch of facts and summarise them for you. If you had, you wouldn’t be saying what you are. GPT-4 very definitely and very clearly synthesises brand-new information about what it is fed, from what it knows, using what is a genuine and often deep comprehension of different domains. It makes many mistakes, but the direction of travel is incontrovertible and very, very rapid.

William Edward Henry Appleby
6 months ago
Reply to  Prashant Kotak

There is no semantic understanding inside ChatGPT; it’s just Prob(w | w1, w2, w3, …). Some of the code looks quite good, but I’ve seen some really dumb stuff too. But as I said, ChatGPT knows nothing about (the semantics of) language; it relies on hoovering up other people’s stuff and cleverly regurgitating it. If that’s your idea of AGI then fine, but it isn’t mine.

https://www.theregister.com/2023/08/07/chatgpt_stack_overflow_ai/
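(A toy sketch of the Prob(w | w1, w2, w3, …) view – the probability table below is invented, and a real LLM derives its distribution from billions of learned weights rather than a lookup:)

```python
import random

# Generation as repeated sampling from a conditional next-token distribution.
next_token_probs = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
    ("cat", "sat"): {"on": 0.8, "quietly": 0.2},
    ("sat", "on"):  {"the": 0.9, "a": 0.1},
}

def generate(context: tuple, steps: int) -> list:
    out = list(context)
    for _ in range(steps):
        dist = next_token_probs.get(tuple(out[-2:]))
        if dist is None:          # fell off the edge of the toy table
            break
        words, probs = zip(*dist.items())
        out.append(random.choices(words, weights=probs)[0])
    return out

print(" ".join(generate(("the", "cat"), 3)))
# Whether sampling like this can amount to "understanding" is exactly
# the disagreement in this thread.
```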

Prashant Kotak
6 months ago
William Edward Henry Appleby
6 months ago
Reply to  Prashant Kotak

Yes, I know all about Reinforcement Learning. It’s still not AGI. Lots of clever heuristics, computational power, data, and simulations for generating millions upon millions of roll-outs, etc, etc.
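(For readers unfamiliar with the term: a toy sketch of the “roll-outs” idea – the game, rewards and numbers here are invented placeholders:)

```python
import random

# Estimate an action's value by simulating many random continuations
# of the game and averaging the returns.
def rollout(state: int, depth: int = 10) -> float:
    total = 0.0
    for _ in range(depth):
        state += random.choice([-1, 1])      # random play-out move
        total += 1.0 if state > 0 else 0.0   # placeholder reward
    return total

def estimate_value(state: int, n_rollouts: int = 10_000) -> float:
    return sum(rollout(state) for _ in range(n_rollouts)) / n_rollouts

# Systems like AlphaGo pair this kind of simulation with learned
# heuristics, picking the move whose roll-outs score best.
print(estimate_value(0))
```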

At some point in the next 100 years someone will develop a computational device with sufficient power to learn like a human, given the right environmental stimuli. It may require quantum computing to be effectively realised (if that’s possible), or some sort of hybrid biological/machine combination, but it’s going to take a regime-shifting breakthrough. What we’re seeing now is simply faster horses.

Andrew Thompson
6 months ago

All the current and future AI advancement checks in place, and the world’s agreement to scale back research to a more ‘sedate’ pace we can actually understand, will be great – let’s just all go into this potentially deadly AI thing very slowly and see what develops and how… So, China and America spending $billions on advanced research behind closed doors it is, then. Cheers mate.

Anthony Roe
6 months ago

Might be a good idea to read ‘Animal Farm’.

Anthony Roe
6 months ago

Also ‘Candide’ for those with a ‘Panglossian’ view of human intelligence and benevolence.

Saul D
6 months ago

At the moment machines tell us what to do (without AI) constantly: from simple traffic lights to complex clinical protocols that direct which medicine to give and which forms to fill in, or whether we can receive a loan or pass through passport control. AI makes it easier for administrative systems to identify, block or otherwise control individuals and groups (think Farage, or all the other ‘Know-Your-Customer’ stuff that banks are now obliged to do, or Chinese social credit scores). AI can also help individuals who don’t have expertise to navigate an increasingly complicated, technologically enabled world. The power balance needs to favour humans over systems – to always be sceptical of machine-led decisions. A basis to start would be a right to anonymous service, a right to human arbitration in disputes with mechanised systems, and rights over access to money.

Prashant Kotak
6 months ago

Thank you UnHerd, finally a guest whose thought processes I can fully follow, and someone I can get completely on-board with!

And a suggestion to the powers that be at UnHerd: I appreciate UnHerd can only afford a certain number of exalted academic writers on their books, so how about you take on the guest for ongoing regular pieces, but zap, say, Terry Eagleton, Thomas Fazi and Philip Pilkington in compensation? You would be getting on board a philosopher and author that *I* like, getting someone who can talk about the most salient issues of our time with intelligence, and reducing the level of gibberish on UnHerd to boot! I was going to suggest zapping Aaron Bastani too, on the same basis, but decided against it for the sheer laugh-out-loud hilarity he generates every time I read something from him!

Sayantani Gupta
6 months ago
Reply to  Prashant Kotak

A good interview and a thoughtful commentator. AI, like all technology, needs to be harnessed properly, else it becomes something adjacent to Frankenstein’s monster. I have seen some recent instances of AI gone awry due to a lack of design subtlety.

Pilkington is one of the few sane voices on UH. Fazi and Roussinos are thought-provoking in a contrarian way. Eagleton I agree about.
Btw, have you ever tried reading the philosophy of the Carvakas? They were ancient Indian atheists and rationalists.

Andy Aitch
6 months ago
Reply to  Prashant Kotak

Hmm… I thought that was precisely what UnHerd was about. If you want an echo chamber they’re easily found – just Google ‘bad spelling’ or ‘irrationality’…

Prashant Kotak
6 months ago
Reply to  Andy Aitch

It was just a joke! But I understand, jocularity doesn’t translate well below the line!

Douglas Redmayne
6 months ago

A lot of serious commentators believe that AGI is only 2 years away. Hopefully this means fully autonomous vehicles and robot servants within 5 years. The first would increase road capacity and road safety and the second would eliminate drudgery.

Prashant Kotak
6 months ago

Let me put it this way: how many orangutans are you aware of that have human drivers and human servants? Outside of California and Florida, that is.

Douglas Redmayne
6 months ago
Reply to  Prashant Kotak

AI will produce all of this for humans at zero marginal cost, and it will be in the interests of companies for everyone to have a UBI to buy their wares. As inflation will be negative, the UBI can be funded by monetary expansion. This is analogous to the quantitative easing implemented by the world’s central banks after 2008 in response to the demand shock that resulted from the financial crisis.

William Edward Henry Appleby
6 months ago

AGI is decades away. Currently it takes thousands of images and lots of computation to get a machine to recognise even a cat; a child can do that after a few images, and tell you about it.