Inoculations during the A(H1N1) pandemic in 2009. Credit: AFP/AFP/Getty


March 10, 2020

Here are some numbers that are worth knowing. There are about 7.5 billion humans who are currently alive. That’s quite a lot: certainly more than have ever been alive at any point in the past. But it’s probably only about 7% of all the modern humans, Homo sapiens, who have ever lived; there have been about 108 billion of those in the last 50,000 years.

(Depending where you draw the arbitrary line around “modern human”, of course.)

That’s just the beginning, though. What we really want to know is: how many will come after us?

The philosopher Nick Bostrom, building on work by the great Derek Parfit, had a go at answering this question; you can also read about it in my own book. The planet Earth will be able to support complex life for about a billion years. Say that the human population stabilises at a roomy one billion in the next thousand years or so; Bostrom works out that there will be about 10,000,000,000,000,000 people who follow us. That’s 10 quadrillion; about 1.3 million times as many people as are currently alive.
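For the curious, here is a rough sketch of the arithmetic behind that figure. The hundred-year average lifespan is my illustrative assumption rather than a number Bostrom states here:

\[
10^{9}\ \text{people at a time} \times \frac{10^{9}\ \text{years}}{100\ \text{years per lifetime}} = 10^{16}\ \text{lives}, \qquad \frac{10^{16}}{7.5\times 10^{9}} \approx 1.3\ \text{million}.
\]

Change the assumed lifespan or the stable population and the total shifts, but the order of magnitude is the point.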

Of course, that’s assuming we stay on Earth. If we leave the planet – if, at some point in the next million years or more, we develop genuine space capability – it gets much more dramatic. If we can build craft that travel at 50% of lightspeed, we could reach about six quintillion stellar systems before the expansion of the universe puts the rest out of our reach; if we can reach 99% of lightspeed, that figure is more like 100 quintillion.

If, says Bostrom, 10% of those stars have habitable planets, each capable of supporting one billion humans for one billion years, then the number of human lives that could be supported is 1 followed by 35 zeroes. I could tell you how many times the current human population that is, but it would involve writing out some very long strings of noughts.
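On the same hedged assumptions – the 100 quintillion systems reachable at 99% of lightspeed, 10% of them habitable, a hundred-year lifespan – the sum runs roughly:

\[
10^{20}\ \text{systems} \times 0.1 \times 10^{9}\ \text{people} \times \frac{10^{9}\ \text{years}}{100\ \text{years per lifetime}} = 10^{35}\ \text{lives},
\]

which is around 10^25 times the current human population: those are the strings of noughts in question.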

(Where it gets really weird, of course, is if we eventually learn how to build our own habitats from space rocks, so we’re not limited to existing planets; or upload human brains into computers, so we each take up a few picometres of circuitry rather than some thousands of square metres of planetary surface. Then you get some seriously huge numbers. But I don’t want to stretch your credulity here.)

So, assuming we avoid killing ourselves or otherwise getting wiped out, there could be a lot of humans, or human-like beings, that follow us. If we think that those humans have any sort of moral value; if we think it is a good thing that they exist; that their joy or suffering matters, even a fraction as much as the joy or suffering of people alive now; then the lives of people who follow us matter an awful lot. 

And what that means is that it really, really matters that we don’t let humanity go extinct. Some disaster – a nuclear war or pandemic, say – that killed 100% of humanity is not 1.0101…% worse than a disaster that killed 99%: it’s incomparably worse; millions of times worse.
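To make Parfit’s point concrete with the numbers above, and the hedged 10 quadrillion figure for potential future lives: the immediate death tolls differ by about one percent, but the full losses do not.

\[
\frac{7.5\times 10^{9}}{0.99\times 7.5\times 10^{9}} \approx 1.0101, \qquad \frac{7.5\times 10^{9} + 10^{16}}{0.99\times 7.5\times 10^{9}} \approx 1.3\times 10^{6}.
\]

Counting the future lives it forecloses, extinction comes out more than a million times worse even on that conservative figure; the larger estimates above push it far higher.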

The coronavirus outbreak is a disaster in its own right; the death toll could very easily reach the millions. The forecasting site Metaculus, which crowdsources predictions for wisdom-of-the-crowds purposes, has a median estimate of 2.1 million deaths by 2021; its 95% range runs from 210,000 to 27 million. This really could be the most dangerous pandemic in at least 100 years.

But it’s also a reminder that we could one day face something much worse. 

Researchers at Oxford’s Future of Humanity Institute (FHI), and their Cambridge counterparts at the Centre for the Study of Existential Risk (CSER), both look at ways that humanity could be destroyed. Climate change and, should it happen, nuclear war are highly likely to cause enormous damage, but best estimates suggest that they are unlikely to kill us all. We know that civilisation-destroying asteroid strikes can’t happen all that often, because if there were even a 1% risk of one per century, it would be very unlikely that our species would have survived the 500 centuries it has so far.
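The asteroid argument is simple compound probability: if civilisation-destroying impacts arrived independently with a 1% chance per century, the probability of getting through 500 centuries unscathed would be

\[
0.99^{500} \approx 0.007,
\]

less than a one-in-a-hundred chance. Since we evidently did survive, the true per-century risk is almost certainly far below 1%.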

Both CSER and FHI consider pandemics — specifically, genetically engineered pandemics, released either deliberately or accidentally — to be one of the two most likely ways for humanity to annihilate itself. The other is superintelligent AI.

COVID-19 is not a genetically engineered pandemic. But it can, among other things, tell us a bit about how prepared the world is for one. The world has a much stronger response now than it did even five years ago; it is more coordinated, and it shares data better. But there are still various things that are profoundly lacking.

The effective altruism movement focuses on the most effective ways to do good in the world, whether through your career or through charitable donations. The charity evaluator GiveWell, for instance, recommends donating to charities that provide bed nets in countries at risk of malaria, because conservative estimates suggest that you can save a life for a couple of thousand pounds by doing so. But the “long-termist” parts of the movement focus on what we can do to reduce the risk of catastrophic disasters, because even a small reduction in the risk has a huge potential payoff when you’re dealing with numbers as big as those mentioned above.

Talking on the 80,000 Hours podcast, run by the Centre for Effective Altruism, FHI’s Cassidy Nelson, who studies catastrophic biological risks, lists 12 things that would help stop the next big outbreak. 

At the moment, for example, it takes at least a year to get from new disease to patient-ready vaccine; that is a huge improvement over even a few years ago, but the timetable still needs to come down enormously, probably via research into generic vaccines that can have the gene sequences of new diseases “plugged in”. Similarly, while we have plenty of broad-spectrum antibiotics, there are very few broad-spectrum antivirals. And screening for pathogens is currently very specific to known diseases; improving ways to screen people and areas for any unknown new disease is vital.

Biosecurity, too, at even the most “secure” labs, is far short of what we’d want – Britain’s second foot-and-mouth outbreak, in 2007, was caused when a leaky pipe at a lab dripped active virus into the water supply of a farm. That lab was supposedly biosafety level 4 – the highest level. If it had been doing work on genetically engineered super-smallpox, it would not have been rated higher. And bafflingly, labs are not required to disclose accidents – such as that one – and are incentivised to keep them quiet so they don’t lose their BSL4 ratings. There’s a huge lack of regulation over “dual-use” research – research into things like pathogens that could be used to help people but could also be used to cause harm.

There’s also the problem that synthetic biology – engineering DNA in labs – lacks oversight. In 2006 a Guardian reporter ordered small sections of smallpox DNA from a commercial DNA-synthesis company. It wasn’t enough to create the virus, but it should have raised alarm bells. DNA synthesis is much more powerful now; regulation is even more important.

None of these changes will be easy to make. Vaccines and antiviral drugs are not good money-spinners for pharmaceutical companies, so encouraging research into them will require government or World Bank support; research prizes might be one way of incentivising it. Stronger world standards on biosecurity will need global coordination and leadership: labs and countries are often loath to reveal their own mistakes. But every small change we make will slightly increase the chance that humanity gets to enjoy the far future.

It’s worth talking about the huge potential future of humanity, as I did above, because we can get lost, sometimes, thinking about what are, in the end, short-term, local problems – Brexit, Trump, free-speech freakouts at university, trans rights. These things are important, but they occupy an enormous amount of our attention, probably a disproportionate amount, while things that are much less fun to argue about get much less. Those things are sometimes prosaic, like antimalarial bed nets, and we don’t concentrate on them because they don’t involve a big, satisfying fight. But sometimes they’re big and weird, like existential risk, and we get nervous about taking a position on them because we don’t want to look mad.

I don’t want to suggest that governments and research funding bodies drop everything and start focusing solely on preparing for the next pandemic; that would be as stupid as not doing anything at all. But I think it’s fair to say that civilisation is not as well prepared for big outbreaks as it could be, and that now is a good time – while everyone’s minds are focused – to improve things. Humans could be around for a billion years, or more, if we don’t screw it up. Coronavirus won’t be the thing that kills us all, but it’s a bloody good illustration of how something could.


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.
