
Robots are teaching us how humans think

Credit: Getty

September 30, 2021 - 2:37pm

About 18 months ago, I went to DeepMind’s offices in King’s Cross to meet the head of their robotics lab, Raia Hadsell. I have been obsessed with DeepMind for years, so I was pretty excited. The piece I wrote about it has just been published; it was delayed because of this nasty bug that’s been going around. I wanted to talk about it because DeepMind’s work says quite profound things, I think, about how human learning works.

There’s been incredible progress in AI over recent years, but robotics has been slower. Partly that’s because you can train an AI on billions of pictures scraped off the internet, but if you’re training a robot to pick up a cup, you can’t make it do that billions of times, because that would take centuries. Training AIs takes lots of data, and that data is harder to come by when you’re getting it from real-world actions that take real-time seconds.

But there’s a deeper problem, which I found fascinating. Most modern AIs — whether robots or face-recognition software or whatever — work on neural networks. To oversimplify: a neural net consists of a load of interconnected nodes, a bit like our brain’s neurons. Each node will “fire”, sending a signal to the next set of nodes, if it receives a strong enough signal from the nodes below it.
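
If you like concrete examples, here is a toy sketch in Python of a single node; the inputs, weights and threshold are made up purely for illustration:

```python
# One artificial "node": it fires (sends a 1) only if the weighted sum of the
# signals coming in from the nodes below it clears a threshold. All numbers
# here are invented for illustration.
def node_fires(inputs, weights, threshold=1.0):
    signal = sum(x * w for x, w in zip(inputs, weights))
    return 1 if signal >= threshold else 0

print(node_fires([0.9, 0.2, 0.5], [1.0, 0.3, 0.8]))  # 1: strong input, the node fires
print(node_fires([0.1, 0.2, 0.1], [1.0, 0.3, 0.8]))  # 0: weak input, it stays quiet
```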

Say you want to train an AI to recognise images of cats and dogs. You show it millions of pictures, it tries to classify each one as “cat” or “dog”, and you tell it if it gets each one right or wrong. Each right or wrong answer trains the AI: it will change the strength (“weights”) of the connections between the nodes. Eventually it will get the weights near-perfectly set up, and will be brilliant at telling cats from dogs.
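
Very roughly, the training loop looks something like this. It’s a minimal, perceptron-style sketch rather than how DeepMind actually does it: the “pictures” are just invented lists of numbers standing in for real images, and real systems use gradient descent on vastly bigger networks.

```python
# Minimal, made-up stand-in for the training loop described above.
# Each "picture" is just a short list of numbers, labelled 0 (cat) or 1 (dog).
def predict(weights, features):
    signal = sum(w * x for w, x in zip(weights, features))
    return 1 if signal >= 0 else 0

def train(pictures, labels, lr=0.1, epochs=20):
    weights = [0.0] * len(pictures[0])
    for _ in range(epochs):
        for features, label in zip(pictures, labels):
            guess = predict(weights, features)
            error = label - guess  # the "right or wrong" feedback
            # Nudge each connection strength in the direction that reduces the error.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
    return weights

# Tiny invented dataset: the second number happens to separate "cats" from "dogs".
pictures = [[0.2, 0.9], [0.1, 0.8], [0.7, 0.1], [0.9, 0.2]]
labels = [1, 1, 0, 0]
weights = train(pictures, labels)
print([predict(weights, p) for p in pictures])  # should reproduce the labels
```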

But then you try to train it to recognise buses and cars. You give it millions of pics of buses and cars, it reweights its connections, and becomes brilliant at recognising buses. But now you show it a picture of a cat, and all its connections have changed, and it’s useless. Hadsell would train an AI to play Pong, winning every game 20-love, but if she then trained it to play Breakout, it would forget Pong, and lose 20-love every time.
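
You can watch this happen in miniature: train one tiny linear “network” on a task, then on a second task, and check how it does on the first one afterwards. The two tasks below are invented for illustration (in one, the class depends on the first feature; in the other, on the second):

```python
import numpy as np

# Toy illustration of catastrophic forgetting with one tiny linear "network".
# Task A: the class depends on the first feature; task B: on the second.
rng = np.random.default_rng(0)

def make_task(which_feature, n=500):
    x = rng.normal(size=(n, 2))
    y = (x[:, which_feature] > 0).astype(float)
    return x, y

def train(w, x, y, lr=0.1, epochs=50):
    for _ in range(epochs):
        pred = (x @ w > 0).astype(float)
        w = w + lr * (y - pred) @ x / len(x)  # nudge weights towards the right answers
    return w

def accuracy(w, x, y):
    return ((x @ w > 0).astype(float) == y).mean()

xa, ya = make_task(0)   # "Pong"
xb, yb = make_task(1)   # "Breakout"

w = np.zeros(2)
w = train(w, xa, ya)
print("task A after learning A:", accuracy(w, xa, ya))   # high
w = train(w, xb, yb)
print("task B after learning B:", accuracy(w, xb, yb))   # high
print("task A after learning B:", accuracy(w, xa, ya))   # collapses: task A is forgotten
```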

This is called catastrophic forgetting. And a lot of work in modern AI is about finding ways around it, especially in robotics, because most robots need to be able to do more than one thing to survive in a complex environment. The methods are complex — I go into them in some detail in the piece itself if you want to learn more — but often they involve partially freezing some of the connections that are most important for a given task.
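
Here’s a deliberately crude sketch of the freezing idea, just to show the shape of it. I’m assuming we simply lock the connections that ended up largest after each task, and I’m using random numbers as a stand-in for real learning signals; the actual techniques are far subtler, but the shrinking pool of free connections is the point:

```python
import numpy as np

# Deliberately crude sketch of "partially freezing" connections: after each
# task, the connections that ended up largest for that task are locked, and
# later updates only touch the ones that are still free. The gradients here
# are just random numbers standing in for real learning signals.
rng = np.random.default_rng(1)
weights = rng.normal(size=100)               # one small network's connections
frozen = np.zeros_like(weights, dtype=bool)  # nothing is protected yet

def learn_task(weights, frozen, steps=100, lr=0.01):
    for _ in range(steps):
        gradient = rng.normal(size=weights.shape)    # stand-in for a real gradient
        weights = weights - lr * gradient * ~frozen  # frozen connections don't move
    return weights

for task in range(1, 6):
    weights = learn_task(weights, frozen)
    # Freeze the 20 still-free connections that matter most (here: are largest) for this task.
    free = np.flatnonzero(~frozen)
    to_freeze = free[np.argsort(np.abs(weights[free]))[-20:]]
    frozen[to_freeze] = True
    print(f"after task {task}: {(~frozen).sum()} of {weights.size} connections still free")
```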

What that means, though, is that as the AI learns more tasks, it has fewer and fewer unfrozen connections. It becomes more competent and skilful, but less able to learn. As a “young” AI, it is incredibly flexible but kind of useless; as it gets “older”, it gains more skills but finds it harder to learn new ones.

Which is very familiar. Our own brains have lots and lots of dense, weak connections when we’re young, and those connections are pruned over time as we learn, becoming stronger but sparser. It means we are fantastic at learning as children — see how we pick up languages. But as we get the skills, we find it harder to pick up new ones. Hadsell even speculated that this was why we don’t remember things from our early years: our brains’ connections are so weak that they can’t form episodic memories: “Everything is being catastrophically forgotten all the time, because everything is connected and nothing is protected.”

Neural networks are explicitly modelled on our own brains. From the start, AI (and DeepMind in particular) has learnt from neuroscience. But there’s also a lot of cross-pollination: discoveries in AI tell us things about how thought and rationality work, and may tell us a lot about our brains in particular. The development of AI over the next 20 years will be the most fascinating area of science.


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.


16 Comments

Prashant Kotak
2 years ago

Very nice and more of this please UnHerd, thank you. Will post debate on this later.

Matt M
2 years ago
Reply to  Prashant Kotak

Agreed! Really good article.

Mangle Tangle
2 years ago

Great article, but the analogy between an AI finding it harder to learn things because, as it develops, its connections get increasingly ‘frozen’ and why humans find it harder to learn new things as we age seems a little artificial to me. For one thing, the analogy assumes that the human and AI model IS the same, when I suspect it’s fundamentally different. Also, there are many more degrees of freedom in human life; we aren’t constrained in the way that an AI is and, consequently, there are many more reasons for changes in our system as we age.

chris sullivan
2 years ago
Reply to  Mangle Tangle

I don’t have any problems learning new things if I am interested – but it’s the new things I learned a while ago that might be a bit fuzzy 🙂

Mark Vernon
2 years ago
Reply to  Mangle Tangle

Agreed. It also overlooks all sorts of factors key in human learning/memory that are absent in AIs, from emotion to bodies to the wider culture, to say nothing of comprehension, imagination and intuition…

Niobe Hunter
2 years ago

Catastrophic forgetting…..that explains a lot about…..damn, what was I going to say? Oh well…..

Prashant Kotak
2 years ago

The machines are also teaching us what to do. I qualify that: (i) for the moment, and linked to this is (ii) only at the top end of expertise. Juxtaposing human players with the chess engines illustrates.

The world champion, Carlsen, is rated around 2850. The top chess engines, Stockfish (which is not a neural net), AlphaZero and Leela (which are), are rated around 3600-3700. Informally Carlsen has played many games against the top engines – he has never won (no human player does now) unless he is given heavy odds (and more often than not top human players will *still* lose). Anecdotally Carlsen though, does better against the engines than any other player by a long way.

The machines playing each other has revealed many strategies which Carlsen has taken on board and he uses them frequently against human opponents – for example grabbing space on the wings by advancing pawns. The engines also prioritise mobility over material and prioritise moves which constrict the movement of opposing pieces, even at the cost of material. Neural net moves certainly feel less ‘machine’ but it cannot be said any of the engines play in a ‘human’ way – the gulf is already too big. The very top players often don’t know why the engines make the moves they do – until deep analysis later on.

There are many other ways the engines have altered how the human chess ecosystem works. If anyone watched ‘The Queen’s Gambit’ they will know about ‘adjournments’, but this is no longer possible, a game would be completely analysed out by engines on both sides, neither side need bother turning up to resume the game.

One more thing: the engines are teaching humans about chess, but only the very top players can benefit. It’s like taking a driving lesson with Lewis Hamilton – you would only get something out of it if you are nearly at the same level, otherwise it’s like feeding strawberries to pigs.

Last edited 2 years ago by Prashant Kotak
William MacDougall
2 years ago

The discussion about consciousness in your original article is one of many things that make me wonder if this is really how people think, rather than just an effective way of teaching computers chess and go…

Diana Durham
2 years ago

This is simplistic. It overlooks the extraordinarily vast and complex model of the world that the human brain, well, actually it’s a person, creates over his or her lifetime. Something occasionally associated with wisdom.

Chris Eaton
2 years ago

“Neural networks are explicitly modelled on our own brains.” This statement reveals the fundamental flaws of this article…for it ASSUMES that we know how the human brain actually works. Here’s an update: WE DON’T. And, besides, can a human being create an AI that tells us something about ourselves that we already, in our brains, understand? AI is not God.

Norman Powers
2 years ago
Reply to  Chris Eaton

That seems a bit over the top. We know a lot about the structure of the brain. There are indeed neurons, connected by edges with activation potentials of various kinds. The basic structures are there. We also know about the spatial areas of the brain and what they seem to do.
Do we know everything? Of course not. But we don’t have to for that knowledge to be useful.

Philip Stott
2 years ago

IMHO the most interesting man in AI (and again IMO the most likely to succeed in general AI) is Jeff Hawkins:

https://en.m.wikipedia.org/wiki/Jeff_Hawkins

To massively oversimplify, his insight is that general AI (AI that is able to generalise) must retain memory of previously learnt patterns whilst also being able to add new ones.
This would avoid the “freezing of connections” that Tom describes in the article.

Norman Powers
2 years ago

“The development of AI over the next 20 years will be the most fascinating area of science.”

Maybe. Hopefully? But there has been an AI winter before. There can be an AI winter again.
The big problem with DeepMind is that it has produced very little in the way of useful applications. Sad to say but virtually all the consumer-visible benefits of AI research have been coming not from Team Brit under DeepMind but rather, Team USA under the leadership of people like Jeff Dean and Yann LeCun. Google Brain has led to radical upgrades to most of Google’s products by this point, something they did whilst DeepMind were off playing video games. In other words they’re very good at doing long-range research with splashy PR outcomes. They’re very bad at generating business value.
That’s a problem. Fundamentally, does humanity need human-like AI? It’s not an obvious yes. Humanity needs machines that have machine-like properties, like being much better than humans at a specific repetitive task without developing unrelated opinions or randomly forgetting things. Their predictable and controllable nature is a big part of why machines are useful. Yet this type of AI research is not on track to produce such machines.
DeepMind is an exceptionally expensive hobby for Google to maintain. For now they can, because the money is there, and there are no clouds on the horizon (well, except for their loss in actual cloud computing, of course). But if the money dries up, or – more likely – DeepMind loses out in some kind of opaque corporate Game of Thrones, then this particular line of AI research may simply evaporate and if it does, it might not return for decades. Most other AI labs don’t seem to care much about reinforcement learning, or trying to train a single über-network without forgetting. They’re far more interested in adjacent topics like better GANs and transformers where there’s some more obvious direct application to people’s lives.

Last edited 2 years ago by Norman Powers
robert stowells
2 years ago

From what Tom is describing, the state of AI still sounds quite primitive.
Is DeepMind really representative of the state of the art of AI?
I thought we were more or less ready for autonomous driving, which pretty much requires 360-degree awareness on the part of the driver. If AI can only cope with one task at a time, then autonomous driving must surely be some way off.
Also, talking about AI saying “quite profound things…about how human learning works” seems spurious.

Norman Powers
2 years ago

Not quite that simple.
The most advanced self-driving cars (Google’s) aren’t actually heavily relying on neural nets, or at least they didn’t use to, let alone a single neural net that is supposed to do everything. Even where they are being used (Tesla, for example, has bet big on the DNN approach), there are many different nets trained for single tasks, like identifying traffic lights or pedestrians.
The problem of catastrophic forgetting described here is a problem for making a single polymath DNN that does everything. The assumption is that if you can train such a network then it will integrate all its knowledge in a way that makes the whole more than the sum of its parts. But having lots of independent networks doing tasks that are then integrated by conventional software also works.

Dennis Boylon
2 years ago

More garbage from Chivers. Robots aren’t human. They don’t have human brains. They don’t have emotions. They don’t have passions. They don’t have empathy. You are just looking at problems associated with data collection, analyzing data, drawing conclusions from that data, and creating decision trees based on saving the results from previous conclusions. This has nothing to do with how a human brain works.