September 30, 2021 - 2:37pm

About 18 months ago, I went to DeepMind’s offices in King’s Cross to meet the head of their robotics lab, Raia Hadsell. I have been obsessed with DeepMind for years, so I was pretty excited. The piece I wrote about it has just been published; it was delayed because of this nasty bug that’s been going around. I wanted to talk about it because DeepMind’s work says quite profound things, I think, about how human learning works.

There’s been incredible progress in AI over recent years, but robotics has been slower. Partly that’s because you can train an AI on billions of pictures scraped off the internet, but if you’re training a robot to pick up a cup, you can’t make it do that billions of times, because that would take centuries. Training AIs takes lots of data, and that data is harder to come by when you’re getting it from real-world actions that take real-time seconds.

But there’s a deeper problem, which I found fascinating. Most modern AIs — whether robots or face-recognition software or whatever — work on neural networks. To oversimplify: a neural net consists of a load of interconnected nodes, a bit like our brain’s neurons. Each node will “fire”, sending a signal to the next set of nodes, if it receives a strong enough signal from the nodes below it.
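If you like code, here’s a toy sketch of a single node in Python (my illustration, nothing to do with DeepMind’s actual systems): it adds up the signals from the nodes below it, scaled by the connection weights, and fires only if the total crosses a threshold. Real networks use smoother rules than a hard threshold, but the principle is the same.

```python
def node_fires(inputs, weights, threshold=1.0):
    """A toy node: fire if the weighted signals from the nodes below add up past a threshold."""
    total = sum(signal * weight for signal, weight in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0

# Two nodes below send signals of 0.8 and 0.6; the connection weights decide how much they count.
print(node_fires([0.8, 0.6], [0.9, 0.5]))  # 0.8*0.9 + 0.6*0.5 = 1.02 -> fires (1.0)
print(node_fires([0.8, 0.6], [0.2, 0.1]))  # 0.22 -> stays quiet (0.0)
```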

Say you want to train an AI to recognise images of cats and dogs. You show it millions of pictures, it tries to classify each one as “cat” or “dog”, and you tell it if it gets each one right or wrong. Each right or wrong answer trains the AI: it adjusts the strengths (“weights”) of the connections between the nodes. Eventually it will get the weights near-perfectly set up, and will be brilliant at telling cats from dogs.
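Concretely, the training loop looks something like this sketch in PyTorch. The random tensors are placeholders for real pictures and the network is deliberately tiny; the point is the loop at the end, where each batch of right and wrong answers is used to nudge the connection weights.

```python
import torch
import torch.nn as nn

# A tiny stand-in for the cat/dog classifier; labels are 0 = cat, 1 = dog.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 2))
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 1, 28, 28)   # a batch of fake "pictures"
labels = torch.randint(0, 2, (32,))   # the right answers

for step in range(100):
    logits = model(images)            # the network's guesses
    loss = loss_fn(logits, labels)    # how wrong the guesses were
    optimiser.zero_grad()
    loss.backward()                   # work out how each weight contributed to the error
    optimiser.step()                  # adjust the connection strengths accordingly
```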

But then you try to train it to recognise buses and cars. You give it millions of pics of buses and cars, it reweights its connections, and becomes brilliant at telling buses from cars. But now you show it a picture of a cat, and all its connections have been changed, and it’s useless. Hadsell would train an AI to play Pong, winning every game 20-love, but if she then trained it to play Breakout, it would forget Pong, and lose 20-love every time.
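You can watch this happen in miniature. The sketch below is my own toy example, with random data standing in for the two tasks: train a small network on task A, then on task B, then check its score on task A again. It will usually have collapsed back towards chance.

```python
import torch
import torch.nn as nn

def train(model, x, y, steps=500):
    optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(x), y)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

# Two unrelated tasks: random inputs with random labels stand in for
# cats-vs-dogs and buses-vs-cars (or Pong and Breakout).
x_a, y_a = torch.randn(200, 20), torch.randint(0, 2, (200,))
x_b, y_b = torch.randn(200, 20), torch.randint(0, 2, (200,))

train(model, x_a, y_a)
print("task A after learning A:", accuracy(model, x_a, y_a))  # should be close to 1.0

train(model, x_b, y_b)
print("task A after learning B:", accuracy(model, x_a, y_a))  # typically falls sharply
print("task B after learning B:", accuracy(model, x_b, y_b))  # close to 1.0
```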

This is called catastrophic forgetting. And a lot of work in modern AI is about finding ways around it, especially in robotics, because most robots need to be able to do more than one thing to survive in a complex environment. The methods are complex — I go into them in some detail in the piece itself if you want to learn more — but often they involve partially freezing some of the connections that are most important for a given task.
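Here’s one crude, all-or-nothing version of the freezing idea, just to show the mechanics (again my own illustration, not DeepMind’s method; the real approaches are subtler, and tend to slow important connections down rather than lock them entirely). After the first task, mark the connections that look most important and block any further changes to them while learning the second.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

# ... imagine the model has already been trained on task A at this point ...

# Crude importance measure: treat the largest-magnitude weights as the ones
# task A relies on, and freeze the top 20% (mask value 1 = free, 0 = frozen).
masks = {}
with torch.no_grad():
    for name, param in model.named_parameters():
        cutoff = param.abs().quantile(0.8)
        masks[name] = (param.abs() < cutoff).float()

def train_on_task_b(model, x, y, steps=500):
    optimiser = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(x), y)
        optimiser.zero_grad()
        loss.backward()
        for name, param in model.named_parameters():
            param.grad *= masks[name]   # frozen connections get no update at all
        optimiser.step()

# Random data standing in for task B.
train_on_task_b(model, torch.randn(200, 20), torch.randint(0, 2, (200,)))
```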

What that means, though, is that as the AI learns more tasks, it has fewer and fewer unfrozen connections. It becomes more competent and skilful, but less able to learn. As a “young” AI, it is incredibly flexible but kind of useless; as it gets “older”, it gains more skills but finds it harder to learn new ones.

Which is very familiar. Our own brains have lots and lots of dense, weak connections when we’re young, and those connections are pruned over time as we learn, becoming stronger but sparser. It means we are fantastic at learning as children — see how we pick up languages. But as we get the skills, we find it harder to pick up new ones. Hadsell even speculated that this was why we don’t remember things from our early years: our brains’ connections are so weak that they can’t form episodic memories: “Everything is being catastrophically forgotten all the time, because everything is connected and nothing is protected.”

Neural networks are explicitly modelled on our own brains. From the start, AI (and DeepMind in particular) has learnt from neuroscience. But there’s also a lot of cross-pollination: discoveries in AI tell us things about how thought and rationality work, and may tell us a lot about our brains in particular. The development of AI over the next 20 years will be the most fascinating area of science.


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.
