March 29, 2023 - 6:10pm

Just for once, an open letter that actually matters. This one is signed by some of the world’s top technologists, including Elon Musk (Tesla, SpaceX, Twitter, etc.) and Steve Wozniak (co-founder of Apple).

These are people who know what they’re talking about — so when they say they’re worried about artificial intelligence (AI), we should listen. Clearly spooked by the rapid progress made by AI language models like GPT-3 and now GPT-4, they’re calling on “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4”.

Like the conservative of William F. Buckley’s definition, the signatories find themselves “standing athwart history, yelling ‘Stop!’”. However, that’s just the problem. Even if we do stop in the West, there’s no guarantee that anyone else will — least of all the Chinese. To call for a pause in AI development now would be like President Roosevelt halting the development of the atom bomb in 1944 and expecting Adolf Hitler and Joseph Stalin to do the same.

The signatories of the letter don’t pretend that AI can be uninvented. Nor are they saying that we should just sit on our hands and hope for the best. Rather they want the pause to be used by “AI labs and independent experts” to “implement a set of shared safety protocols” to make “today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal”.

And yet it’s not just today’s technology we’ve got to worry about but tomorrow’s, too. If the equivalents of GPT-5 through to GPT-infinity are developed in China instead of the West, we won’t have a hope of understanding future iterations of AI, let alone of managing the risks. We therefore have no choice but to carry on riding the tiger; and though that’s not a comfortable position to be in, it’s better than getting off the beast and asking it not to eat you.

Western governments must develop the in-house expertise capable of regulating this rapidly evolving technology. For instance, they need to do a whole lot better than Michelle Donelan, the UK’s Secretary of State for Science, Innovation and Technology, who this week told readers of The Sun that “AI is not something we should fear”.

How wrong can you be? We absolutely should fear this technology — even if there’s no avoiding it. Believe it or not, Britain is the world’s third most advanced nation when it comes to AI. We’re a long way behind the US and China, of course — but without the disadvantages of American gridlock, Chinese dictatorship or EU incoherence we could lead the policy response to something that will change the world. 

As a matter of urgency, Downing Street needs to put the most capable ministers and officials in charge of this challenge — and to provide them with the authority to recruit far beyond Westminster and Whitehall. 

Ultimately, it’s not a choice between developing and controlling AI. For the good of humanity, the two must go hand-in-hand. Let them do so in this country.


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.
