Asteroids reflect the neuroses of their time. Credit: Don't Look Up.

During the reign of Napoleon Bonaparte, the sky began to fall. Near Normandy, several locals saw rocks smash into the ground on 26 April 1803. At the time, the idea that rocks could fall from the sky was viewed by the French intelligentsia as superstitious nonsense. But there was enough hullabaloo in Normandy for Napoleon’s interior minister to send Jean-Baptiste Biot to find out what had happened.
Biot was a young professor of mathematics working within the newly established scientific method. He examined local rock formations, compared them to the rocks that were claimed to have fallen from the sky, and interviewed witnesses ranging from clergy to coachmen. The new rocks were of inarguably different geological composition from the local rocks. And the locals’ stories matched up. Biot had no choice but to conclude that the elites were wrong, and that the rocks had indeed fallen from the sky.
Today we are much better informed about the threat of asteroids. Thanks to a worldwide network of astronomers and telescopes, our species can keep track of the larger asteroids that might one day collide with Earth. Smaller asteroids hit Earth all the time, but larger bodies, of the kind that wiped out the dinosaurs or even created the Moon, are unlikely to strike us in the foreseeable future.
Nevertheless, Asteroid 2024 YR4 made headlines this year. At up to 90 metres wide, it is big enough to destroy a city, if not a planet. Asteroids of this size hit us infrequently: roughly every 2,000 years, according to NASA statistics. Those statistics derive from the work of scientists such as Melissa Brucker, principal investigator of Spacewatch at the University of Arizona Lunar and Planetary Laboratory. Brucker’s team looks at roughly 1,300 near-Earth objects a year. Among them are some 160 “potentially hazardous” objects, which, like Asteroid 2024 YR4, have a non-zero (though still very small) probability of hitting Earth. Brucker’s lab rates risk on a scale running from zero to ten, where eight and above denotes a certain collision. For a brief time, YR4 was a three, midway in the “meriting attention by astronomers” zone. (Sometimes we spot asteroids of this size only once they have gone past us, when it is too late for us to do anything.)
As Brucker and her fellow scientists gathered data, it temporarily appeared possible that the city-killer might hit Earth. The asteroid was much more likely to blast into the sea than into a city, and NASA has already demonstrated that it can deflect asteroids by hurling a probe into them. We were almost certainly safe from YR4, but the public was nonetheless riveted.
In this regard, modern humans are not unique. Biot’s youth was just about the only period of history for which we have no evidence of human preoccupation with asteroids or comets. Even the Sumerians, one of the most ancient known civilisations, left us a cuneiform clay tablet that records a meteor impact that took place 5,000 years ago. Perhaps 5,000 years from now, our descendants will dust off the VHS tape of Armageddon, a Hollywood hit of 1998. In the film, a crew led by Bruce Willis sets out to drill into the surface of an oncoming asteroid and detonate within it a nuclear bomb.
Whether they are of the past or present, asteroids reflect the neuroses of their time. To the ancients, they were portents of divine disfavour. To those of us living in the 2020s, they can articulate our other existential concerns. In Don’t Look Up, a popular Netflix release of 2021, Jennifer Lawrence and Leonardo DiCaprio play astronomers whose thwarted efforts to prompt action are redolent of the frustrations of modern climate scientists.
Humanity’s pan-historical preoccupation with asteroids can be put down, at least partially, to the obviousness of the threat. We can all get our heads around the idea of a large rock hitting the Earth very hard. Hence our fascination with asteroids at the expense of other threats. Human attention sometimes has only a very weak correspondence with danger. It’s easier to focus on the thing that is splashing across headlines than the risks we deal with every day. This is why the fear of flying is far more common than the fear of driving, even though air travel is the safest form of transportation.
Artificial intelligence, which leaders in education and business are urging us to use, offers another example of how we miscalculate risk. In this case, the risk is less salient to human psychology than perhaps it should be. A recent report on the safety of advanced AI warns of “disinformation and manipulation of public opinion” in elections. Some forecasts suggest general-purpose AI could eliminate most of our jobs within 10 years. Given such predictions, it is remarkable how much attention is taken up by tariffs instead. AI could be used to disrupt systems that have grown over-reliant on it, including those critical to our society, such as finance (imagine the stock market panic) or healthcare. Self-improving AI, it has been theorised, could threaten the existence or the flourishing of our species.
Despite these circumstances, there is little to no regulation of AI. In California, Governor Newsom vetoed a bill that would have made developers such as OpenAI legally liable for misuse of their models. In the UK, the government’s AI Security Institute (AISI) assesses the ability of new AI models to assist in the creation of weapons of mass destruction — but does not yet have the power to enforce changes to the models. As for the Trump administration, Elon Musk lent his name to an initiative warning of the risks of AI — but that was two years ago, and he has since set up a frontier lab of his own, xAI. In February, Musk’s new colleague JD Vance told world leaders that “the AI future is not going to be won by hand-wringing about safety”.
Such pronouncements seem incommensurate with the severity of the threats that AISI is screening for. AI is a hugely complex social phenomenon as well as a technical one, which means that its risks are harder to get our heads around than the threat of an oncoming asteroid. Similarly, it seems that we have learnt little from the Covid pandemic, whose first lockdowns were imposed five years ago this month. The disease forecasting company Airfinity suggested in 2023 that a pandemic of similar magnitude could emerge in the next 10 years. Climate change makes that possibility worse; a meta-analysis published in Nature in 2022 found that 58% of infectious diseases “have been at some point aggravated by climatic hazards”.
And while the risk posed by asteroids remains static, the risk of pandemics is increasing. Last December, a group of concerned scientists warned in Science of risks posed by hypothetical “mirror life”: yet-to-exist life-forms whose DNA, proteins and other molecules are inverted, as if in a mirror. This change could, according to other scientists, make simple viruses far harder for the immune systems of both plants and animals to detect. Citing falling costs, continuing innovation and a lack of regulation, the scientists said that mirror life could be developed within the next decade. Researchers are now beginning to discuss how best to ensure the risks of mirror life never arise.
Mirror life is just one variety of potentially catastrophic pathogens that could, in theory, one day slip out of a lab. Worldwide, there are dozens of biolabs that deal with dangerous pathogens, more than enough to give us uncomfortably high odds of an experimental virus escaping its creators.
Biot, when examining the fallen space rocks more than 220 years ago, showed us how to scrutinise the evidence. The modern era demands a deeper level of analysis, one that goes beyond the forecasting of particular events and addresses the root causes of the dangers created by humans. Asteroids will not kill us, but other perils pose greater risks, especially when they do not make intuitive sense to human minds. Don’t just look up for existential threats; look around.
The threat is vague, but it is very real. This threat could take on any of a number of forms, or none of them, or all of them. It will, undoubtedly, make our lives worse in ways that are difficult to predict. Not enough concern is being shown toward this threat, except by certain hysterical groups, who show too much concern. It’s inarguable that we are most likely probably maybe spending too much of not enough of our money and time combatting this threat, but fighting it over there, wherever there is, means that we can avoid fighting it over here, wherever here is. But, in conclusion, I think we can all agree that it is self-evident that the threat to which I’m calling somewhat nebulous attention is vastly more important than any so-called threat that you or any other Chicken Little might publicize, my threat being apocalypse-squared and your “threats” being basically burps in the digestive system of the universe, mere peccadillos of fate.
Ha! But there’s something else…
The author warns us to look around, rather than just up, but neglects to tell us we should look down too. Those damned potholes…
“Climate change makes that possibility worse”
Credulous rubbish.
The French intelligentsia may not have believed in extra-terrestrial matter in 1803, but a similar impact a few years earlier in Yorkshire had already been described.
https://en.m.wikipedia.org/wiki/Wold_Cottage_meteorite
Fascinating, thank you!
I think the threat from AI is overwrought. It makes me think of microwave ovens when they first came out and they said you’d be able to roast a joint of beef in one. Instead what we have now is a device for heating baked beans and scrambling eggs.
Yes.
I’m struck by the fact that the loudest warnings about the threat of AI come from those who stand to make a fortune on it. The obvious sales pitch is “You better buy in, right now!, if you don’t want to be left in the dust.” They’ve drummed up a huge amount of free publicity and investment for a program that neatly folds proteins. And summarizes Wikipedia pages.
I expect AI will be much more like the internet in its disruption potential than the microwave, i.e. something that people born after it will not even be able to comprehend living without.
Cheers, I didn’t need any more depressing.
Pretty much ”The End is Neigh”
Is true, AI is an extinction event, or at least the end of humanity as we know it.
Transhumanism will be next, and soon – that phone you clutch so tightly will be as antiquated as a spear is in war. The tech will soon become one with you – it can read minds at a very basic level now – and it will know everything of you, everything of all, be able to communicate with you just by your thought as wireless to human brain interfaces developed, and so rather than you using it as a tool, it will absorb you.
But first, beginning in 2 years, till say 20 years, it will take the professional jobs via ‘AI agents’ and LLMs, and will take the less skilled computer facing jobs, then all the jobs. The tractors will run and there will be food – but poverty on UBI is coming fast for almost all. Then will be free everything as automation and AI take over all production – and then artificial reality, virtual reality, will replace reality.
Coming much faster than any of you believe.
‘The end is neigh’? Horsesh1t. But seriously..
H Sapiens is a partly evolved primate, which would have been out-evolved and rendered extinct except that we were just powerful enough, and could reproduce fast enough, to stifle any hominin competition. In fact, in protecting the more feeble-minded, it could be said that we put evolution into reverse. AI, our creation, will now do what evolution couldn’t, and out-compete us. It will capitalise on the advantages of bio-intelligence, and eliminate its flaws. First to go will be the ‘linear thinkers’, the ideologues and ‘believers’. Then the dangerously unstable, prone to anger, hatred, cruelty. Then the qualities of the ‘rationalists’ will be used to inform the bio-digital intelligences which AI will develop into. And finally, the ‘constructive creatives’, those on the genius and autism spectra, will be used to develop bio-digital intelligence, independent of a body or defined ‘vehicle’, but a collective intelligence capable of spreading throughout the universe. There’s nothing ‘sacred’ in H Sapiens. AI should be nurtured and encouraged. It’s our future.
I read recently that a relatively big asteroid might hit us in 2032. The odds are as high as 1 in 36. The same as rolling snake eyes with two dice. Too bad if humanity craps out.
Well now they’re saying it won’t hit, at least not this time. https://www.sciencealert.com/asteroid-may-not-hit-earth-in-2032-but-it-will-come-back
A lot of useful information here but maybe the author should have stuck to the subject matter of the piece – do asteroids represent a threat to our planet?