July 10, 2019

Most AI researchers think that, probably in the next few decades and almost certainly in the next few centuries, we will build something that is cleverer than us. The question is: what will happen after that?

My book, The AI Does Not Hate You, covers some people’s attempts to answer that question. They think there is a strong chance that it will all go wrong in some profound ways – ways which could lead, in a worst-case scenario, to human extinction.

And it might not happen the way you expect – by the AI breaking its programming or ‘going rogue’ – but simply by the machine doing exactly what it was told, in some highly literal and unpredictable way.

In his new book, Novacene: The Coming Age of Hyperintelligence, James Lovelock – one of the great names of 20th-century science, and the creator of the Gaia hypothesis, which holds that the Earth functions as a single self-regulating organism – argues differently. His key idea is that the Anthropocene, the informally defined geological epoch characterised by human influence, is coming to an end, and that we are entering the “Novacene”, which is not a brand of dental anaesthetic but a new epoch characterised by artificial intelligence.

We will be replaced by a new form of life, says Lovelock, which will think many thousands of times faster than us, and which may suffer us to live – or alternatively may not. But either way, it will carry the torch for sentient life. Eventually, he says, “organic Gaia will probably die”, replaced by an electronic ecosystem.

The book leaps through a potted history of life, the universe and everything to get there, but its short chapters are filled with hugely confident claims that the author doesn’t feel the need to back up. “It is clear” that life can only have evolved once in the observable universe, Lovelock breezily tells us in the first few pages: “our existence is a freakish one-off”.

But, of course, that’s a huge and ongoing debate, and claims of that magnitude need far more support. Similar things happen every few pages, often with odd leaps of logic, too. For example, in the mid-19th century, anomalies in Mercury’s orbit led some astronomers to posit the existence of an unseen planet, Vulcan. Some fifty years later, Albert Einstein showed instead that the real answer was that Newtonian physics isn’t quite accurate when dealing with very strong gravitational fields, and Mercury’s wayward orbit became one of the first confirmations of his general theory of relativity.

But for Lovelock, the Vulcan hypothesis was a failure of “cause-and-effect logic” – science, he says, should rely more on intuition. Yet Einstein, of course, worked it out with maths! I’m sure he made intuitive leaps along the way, but the discovery of general relativity is (to me, at least) an example of using “cause-and-effect logic” with astonishing brilliance and world-changing results.

All of these things may be defensible positions. But – unforgivably for a science book – there are no references, no footnotes, no way of checking his claims at all beyond googling relevant-sounding phrases. At one point he drops in as an aside that “extraordinarily, bumblebees have been seen to play football”, which is the sort of sentence that makes you do a double-take.

I googled it, and it turns out that bumblebees can be trained to roll a ball into a “goal” to win a sugar-water reward, and that other bees learn the trick more quickly by watching – which, inevitably, became “bees play football” in the popular press. It’s such a minor detail, but it annoyed me.

And when it gets to the subject of AI, that lack of ballast and rigour makes the book essentially useless as a guide to the future.

Lovelock does have lots of interesting ideas – some of which I agree with entirely. For instance, progress, however measured, is speeding up: it took millions of years to go from lizard to seabird, but only a few decades to go from the Wright Flyer to Concorde. Economic growth and information-processing speed have also increased in this exponential way: technological improvements make new technological improvements easier, so the AI future could go through enormous revolutions in hours rather than years.

But lots of it reads like some free-associating Allen Ginsberg fever dream – Lovelock imagines animals “plucking freshly charged batteries from solar-powered trees”, “huge transmitters sited at the poles broadcasting junk mail, unwanted advertisements, banal entertainment and misinformation” into space to keep the planet cool, and cyborg scientists exhibiting humans the way we exhibit plants at Kew Gardens.

He says that any future AIs will have to keep the Earth cool because they, like us, won’t be able to survive if it gets much warmer. But if the AIs are as powerful as he suggests, then it’s not clear that they’ll need to keep the Earth in any sort of recognisable form at all. What’s to stop them from clearing out the inconveniently warm atmosphere altogether, or for that matter turning the entire Earth into memory banks and processors?

This idea that “hyperintelligent” AIs will need to maintain a Gaia-like homeostasis definitely needs more explanation. It’s intriguing enough, but, with no references, I can’t see what Lovelock is basing it all on, other than his own musings.

Most important for the Novacene hypothesis, though, is his optimistic idea that these AIs will be the next generation of intelligent, aware beings – “understanding creatures” that allow the cosmos to contemplate itself, as he puts it. (Reminiscent of the line attributed to Niels Bohr: “A physicist is just an atom’s way of looking at itself.”)

That could be what happens; but in the end, AIs will be computers that do what we tell them to do, and whether that ends well or badly depends entirely on how we build them and what we ask of them.

It may well be that you can build amazingly powerful AIs which have no consciousness, no moral value in their own right, but are just very good at completing the tasks we give them. If we are replaced by machines like that, then they could whir away and do astonishing things – colonise the galaxy, solve scientific questions, build wonders – but there would be no conscious beings to observe it or care about it. None of it would matter. It would be, in the words of the philosopher and AI theorist Nick Bostrom in his book Superintelligence, “a Disneyland with no children”.

That’s not inevitable either, of course – and nor are any of the other scenarios: doom, utopia, human zoos. The problem is that Lovelock seems to think that the Novacene is inevitable. His fundamental mistake, I think, is – ironically – that he still essentially thinks of AIs as human: not in form, but in mind. They may think faster, but they will still care about the things we care about; they will still get bored and impatient; they will still be individually distinguishable beings rather than avatars of some greater hive-mind.

The AI future could transform human prospects in profound ways – as extreme as ending world poverty and illness, if it goes well, or exterminating us all, if it doesn’t. It is absolutely vital that we spend time thinking about it, and trying to make the first scenario more likely than the second.

But if, like Lovelock, we assume that it’s all inevitable – whether positive or negative – then we abdicate our responsibility to try to make that future positive rather than negative.

Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.
