
Was China behind Sam Altman’s ousting? AI is a new front in our civilisational war

A threat to the nation? (Win McNamee/Getty Images)



November 25, 2023   6 mins

The blink-and-you-missed-it four-day drama at the tech firm OpenAI requires deep attention. On the surface it looks like power shenanigans; underneath lies a tale of humanity’s future and geopolitics.

The strange saga began a week ago, when the board of the nonprofit decided to fire its AI guru and cultish leader, Sam Altman. But when the 700+ staffers of OpenAI wrote an open letter saying that they too would go with the ousted CEO, he was swiftly reinstated.

The hairpin plot twists of this power struggle have been breathtakingly hard to follow. Reports are now surfacing that, only the day before he was fired, Altman had announced he was on the brink of a significant AI breakthrough. A letter was sent to the board advising it that this discovery — an algorithm known as Q* (pronounced Q-Star) — could “threaten humanity”. The algorithm was deemed a breakthrough in the startup’s search for superintelligence, also known as artificial general intelligence (AGI): a system smarter than humans.

Altman’s dream was then to marry AGI with an integrated supply chain of AI chips, AI phones, AI robotics, and the world’s largest collections of data and LLMs (large language models). The venture’s working name is Tigris.

To achieve this, Altman would need vast computing resources and funding. Perhaps that is why reports suggest he has been talking to Jony Ive, the designer behind the iPhone, as well as to SoftBank and Cerebras — which now makes the fastest AI chips in the world. Cerebras chips are big: the size of a dinner plate and more powerful than any traditional chip. They also come with SwarmX software, which knits them together into clusters, creating a computational fabric that can handle the massive volume of data needed to build better AI.

Cerebras represents a great threat to Nvidia, whose chips power the world’s fastest supercomputers and AI systems. The most powerful of these supercomputers, Summit at the Oak Ridge National Laboratory in Tennessee and Sierra at Lawrence Livermore in California, are central to the defence of the American nation and are kept at highly protected nuclear facilities. But almost every big organisation depends on Nvidia chips or computers. A year ago, Nvidia was worth $300 billion; now it is worth $1.35 trillion — the most dramatic increase in the value of a Nasdaq firm since 1971. Yet Cerebras has designed a chip 20 times faster than Nvidia’s. This is why some say the Cerebras IPO will kill Nvidia. Now we begin to see a national-security component to this story.

While the West has been focused on generative AI, which has no cognitive ability, China has taken a different path. It has built what is reported to be the world’s only quantum optical computer, which can solve in 47 seconds a problem that would take a traditional supercomputer 240 years.
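For a sense of scale, here is a minimal sketch, using only the article’s own reported figures (47 seconds versus 240 years, neither independently verified here), of the speed-up that claim implies:

# Implied speed-up from the figures quoted above (47 seconds vs 240 years).
# These are the article's reported numbers, not independently verified.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60       # about 31.6 million seconds

classical_seconds = 240 * SECONDS_PER_YEAR     # about 7.6 billion seconds
quantum_seconds = 47

speedup = classical_seconds / quantum_seconds  # about 160 million times faster
print(f"Implied speed-up: {speedup:,.0f}x")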

Similarly, Altman wants to build a new generation of computers for the AI era. In July, he partnered with Cerebras and the Emirati incubator G42 to unveil the Condor Galaxy, the “World’s Largest Supercomputer for AI Training”. G42 is behind the world’s largest Arabic LLM, which, like ChatGPT, generates new linguistic content, and it is also working with Amazon to gather and process DNA information to develop massive new global genomics, proteomics and biobanking services.

To an AI data scientist, the data collected for these innovations — the languages, nationalities and DNA involved — are honeypots brimming with opportunity. The West doesn’t have the mechanisms or the mores to gather such a quantity of meaningful data or the money to finance what Altman wants. But international investors do, including the Emiratis, the Chinese — who are already very interested in Altman — and all the others backing G42.

Moreover, if the world wants AI designed for true diversity, if it wants medical and financial products created to suit the broadest range of humans, then this will only happen where that diverse data can be found. It’s not going to be created in the US, where AI is designed by young, white male tech bros and where FDA-approved testing of medicines pretty much excludes anyone but white males.

When AI people say they worry about Altman’s lack of “guardrails”, they mean he is willing to take the risk of building something that, like a djinn, cannot be put back in its bottle. He is prepared to build something that might not be controllable. He is like Oppenheimer, who was willing to smash atoms to win the Second World War even though, in doing so, he might have ignited the atmosphere and incinerated the Earth.

By this measure, Altman is a mad scientist who will put us all at risk to achieve a historic breakthrough. Hence the letter to the board of OpenAI by some staff, along with others keen for the world to take note that the huge staff attrition rate at OpenAI was not due to “bad culture fits” — rather, it was due to “a disturbing pattern of deceit and manipulation”, “self deception” and the “insatiable pursuit of achieving artificial general intelligence (AGI)”. For Altman, this “pursuit” entailed incorporating superintelligence inside a robot’s body. Which, presumably, is why OpenAI started investing in humanoid robotics made by 1X in Norway back in March.

You can understand why the OpenAI board might have become uneasy when they realised that Altman was racing around the Middle East and Asia trying to raise billions for this vision. But, as Bloomberg wrote, these are not “side ventures”; they are “core ventures”. This is about redefining the cutting edge of chip design, data collection and storage, computational power, and the interface between AI and physical robotics.

This new supply chain would not only challenge America’s IT infrastructure; it could also facilitate the diminishment of US power. It implies that innovation is shifting outside the US and acquiring data beyond the reach of regulators. No doubt the US authorities looked at all this and saw Altman’s collaboration with G42 as tantamount to fraternising with the enemy, because G42 is seen to be backed by the people who own ByteDance, the parent company of TikTok, in which G42 itself holds a significant stake. They saw that G42 owns Pax AI, which some say is Pegasus (the notorious spyware) reconfigured. Was Altman under surveillance as he pursued this grand vision? How could he not be?

The truth is that Altman does not believe in borders. He has one goal: to build the best AI possible. He has a vision that probably worried his board and unnerved Washington. Given that the US is trying to slow down technological innovation in other parts of the world by restricting the sale of the best chips and computers, it is hugely challenging when he says: “We’ll build our own stuff — in fact we’ll build our own supply chain and ecosystem.” So much for ITAR, the US regime that restricts the export of critical defence technology.

There was a time when an American would have been arrested for selling such protected high-tech innovations abroad. Today, can you stop a smart American from innovating outside the US? Can you tell entrepreneurs not to take foreign money and not to partner with foreign firms? Can you demand that they stop challenging existing firms like Nvidia? No. Not when others are offering so much money.

What took place over the past week at OpenAI confirms that none of the young leaders of this new AI space sees borders. As Andrew Feldman, co-founder and CEO of Cerebras Systems, said: AI “isn’t a Silicon Valley thing, it isn’t even a US thing, it’s now all over the world — it’s a global phenomenon”. Yet the US and its allies will have known the significant challenge that Altman’s vision presents to Western notions of security and control.

Two days after Xi and Biden met on November 15 and agreed to play nice, Altman was fired. The Daily Dot suggests that Altman was terminated because Xi hinted to Biden that Altman’s OpenAI was surreptitiously involved in a data-gathering deal with a firm called D2 (Double Dragon), which some thought to be a Chinese cyber-army group. David Covucci reported: “This D2 group has the largest and biggest crawling/indexing/scanning capacity in the world, 10 times more than Alphabet Inc (Google), hence the deal so OpenAI could get their hands on vast quantities of data for training after exhausting their other options.”

There is something deeper happening here. China has already forged an AGI path of its own. It is avoiding generative AI in favour of cognitive AI: it wants systems that think independently of prompts. China is already giving AI control over satellites and weaponised drones. No doubt Altman would love to work with that capability, and the Chinese would love to work with him.

Could that be why Larry Summers, the former Treasury Secretary with connections across US politics and business, also ended up on the new OpenAI board? Perhaps the US government realised it could not stop Altman but could at least claim a seat at his table.

What does all this mean for us? Experts like Altman say they are designing AI to do good. But AI designers disagree about what constitutes “good”. It is clear this arms race could have civilisational consequences.


Dr. Pippa Malmgren was an economic advisor to President George W. Bush and has been a manufacturer of award-winning drones and autonomous robotics.


40 Comments
Benjamin Greco
5 months ago

There is a lot of speculation and many unverified claims (bullshit) in this article. If the government confirmed that Altman was conspiring illegally with the Chinese to gather more information for his models, why wasn’t he arrested instead of just being fired? And why was he rehired? If the Chinese were investing in his efforts to create an AGI, why would they tip off the Biden Administration that he was conspiring with their military? Why would they tell Biden anything? If D2 was working with Altman, it would have been approved at the highest levels of the Chinese Communist Party; that is, if D2 even is part of China’s intelligence operations. Ms. Malmgren’s story doesn’t pass the smell test.
The story that has been reported is probably the truth: last week’s drama was a dust-up between factions on the OpenAI board over commercialization versus safety, with Altman’s faction winning. But that doesn’t satisfy people who would rather spin wild tales of international intrigue and AI causing our imminent destruction. Ms. Malmgren should be writing for the movies. Maybe she already is, but that’s just speculation on my part.

Steve Everist
5 months ago
Reply to  Benjamin Greco

Agreed Benjamin, this article is clickbait and not worth the read. Sad to see UnHerd drop its quality metrics so low here. Honest coverage of the OpenAI board and their unhealthy obsession with Safetyism is the story. The bigger danger is overzealous regulators using narratives from hysterics like Dr. Malmgren as an excuse for limiting AGI to a few major players who will work with the Administrative State to control the market and public narratives to serve their limited interests. We saw this with Big Pharma and their undue influence on the American people through their capture of the Administrative State and the public narrative throughout COVID (see Missouri v Biden). Free markets unfettered by self-interested bureaucrats, allowing innovation from the competition of a diversity of players, are the safest path forward, not centralized authority over this emerging technology.

John Riordan
5 months ago
Reply to  Benjamin Greco

“If the government confirmed that Altman was conspiring illegally with the Chinese to gather more information for his models, why wasn’t he arrested instead of just being fired?”

The point of the article is that we have run headlong into a situation where, even if we think there isn’t a national security interest at stake, we cannot be sure there isn’t an international security issue at stake.

Matt Sylvestre
5 months ago

The West must realize that whatever “guardrails” we try to impose on AI, the East (China) will not necessarily respect them. China is driven by its own lights, its own self-interest… This stuff is happening whether we try to slow it down or not, for better or for worse…

Maximilian R.
5 months ago
Reply to  Matt Sylvestre

Yes. Seems like this is a passage humanity has to go through; an event that will demand a global discussion at some point in time.

Tom Condray
5 months ago
Reply to  Maximilian R.

While I admire your optimism regarding “global discussion”, the stakes are so high that talking will have no effect on the development of these technologies.
Power, both economic and military, as well as the sheer, horrific potential to control all aspects of the individual lives of people everywhere: these are the potential goals.

Desmond Wolf
5 months ago
Reply to  Matt Sylvestre

The counter to that (the argument Max Tegmark and Geoffrey Hinton make) is that we do have observed worldwide bans on technologies generally seen as threatening to humanity: human cloning, biological weapons, etc.

D Glover
5 months ago
Reply to  Desmond Wolf

Do observed bans mean that it doesn’t happen, or that the research isn’t published?
Do we really know that biological weapons aren’t being developed?

Andrew F
4 months ago
Reply to  Desmond Wolf

Very naive take on the world.

John Riordan
5 months ago

I remain convinced that although AI may well be a civilisational threat, the form this threat takes is the use of AI by hostile states in modern warfare, both of the cold and hot variety. I do not think that humans as a whole will find themselves in a war with AI.

AI is a weapon, not a competing civilisation where the winner will be decided by technological supremacy.

Christopher Barclay
5 months ago
Reply to  John Riordan

The good news is that we have David Cameron with his science background back as Foreign Secretary.

John Riordan
5 months ago

I agree. He’s going to save us all for sure.

Robert Routledge
5 months ago

Ha! Ha!

Steve Jolly
5 months ago
Reply to  John Riordan

I agree. So much has been made of the existential danger of the very existence of AI that very little heed has been paid to how AI might, and likely will, be weaponized. In that respect, AI is yet another in an ever-expanding array of weapons humans can use to fight, harm, exploit, and kill one another for power or profit.

Christopher Barclay
5 months ago

If Altman thinks that the Chinese don’t believe in borders or nationality, he is very naive.

Robbie K
5 months ago

The Chinese believe in their own borders; not sure about everyone else’s, though.

Jacqui Denomme
5 months ago

That was interesting! Thanks!

Ralph Hanke
5 months ago

This law reflects an intuition I felt a lot over the last few years of reading question headlines.

Thank you for the link.

Neil Chapman
5 months ago

Not sure most folks really appreciate the speed of progress and development in the AI sector. Mustafa Suleyman’s book “The Coming Wave” is a good place to start.

Mangle Tangle
5 months ago
Reply to  Neil Chapman

There’s a hell of a bandwagon, for sure. But the US government wouldn’t need to be so circumspect if it genuinely felt Sam was imperilling national security. And why would Xi tell Biden about Altman’s Chinese connections? This whole piece is a bit of a stretch.

Thomas Wagner
5 months ago
Reply to  Mangle Tangle

And would Biden understand what he had been told? Evidence suggests no.

Alex Colchester
5 months ago

Humans are very poor at predicting things. In the 1950s it was assumed that a computer would never play chess better than a grandmaster, but that we would all be served in our houses by robotic butlers. Solving chess was easy, but it’s still impossible to get a robot to cook you dinner. Whatever everyone thinks AI is going to do, it will probably fail at. What we need to focus on is what we think it won’t be able to do.

William Edward Henry Appleby
5 months ago

I’m mildly optimistic that a robot will cook my great-great-great-great-granddaughter’s dinner.

Clare Knight
5 months ago

I’m sure a robot could be built to cook a dinner but that’s not where the makers are focused.

Alex Colchester
5 months ago
Reply to  Clare Knight

If you think the vast and massively funded military-industrial complex has not for decades been trying (and failing) to get a robot to look in your fridge and prepare dinner (because if it can solve that randomly presented spatial riddle, then it can also easily break into your house and kill you), then you are confidently deluded.

Mike Downing
5 months ago

Would that be a known-unknown, or one of those unknown-unknowns?

Alex Colchester
5 months ago
Reply to  Mike Downing

I like to think of it as a ‘deluded confident impossible’, similar in vein to the old belief that a human could never travel faster than a galloping horse.

Douglas Redmayne
5 months ago

You need to keep up. Robot servants are very likely once AGI arrives. This is expected within 3 years.

Alex Colchester
5 months ago

You’ve completely missed my point. It was in the 1950s that everyone was convinced they were imminent. Your implication that they are 3 years away means they will have arrived 76 years after they were ‘just around the corner’.

iambic mouth
5 months ago

There are many claims in this article that aren’t true. A quantum simulator, not even a quantum computer, can perform calculations at this speed, and these simulators are all over the world. To get diverse data samples we need to use Chinese or Far East data? Does the author know how ChatGPT was even trained? And the white male hypothesis in clinical trial data has been disproven time and time again.

UnHerd Reader
5 months ago
Reply to  iambic mouth

Agreed 100%. The “white male tech bros” straw man is a useless and fictional identity politics construct. But even if it were true, does she suppose that these guys would be too stupid to understand this and develop a workaround?

We should also question her premise about AI and “true diversity,” lest we once again drift into fuzzy-headed woke slop. DOES and SHOULD the world want “AI designed for true diversity”? DOES and SHOULD it want “medical and financial products created to suit the broadest range of humans”? Is that really the best path forward? Maybe so, but it’s worth asking.

Peter Principle
5 months ago

The “threat” of AI is not that it can outsmart us, but rather that we have been bamboozled into believing that it is smarter than it really is, so we become over-reliant on it.
Most UnHerd readers will have experience of colleagues who sound highly knowledgeable and convincing (especially to the boss), yet who are unreliable. Well, that is my impression of the technology. When it makes a mistake and you correct it, it memorises your correction, but it is not so good at making appropriate changes to the reasoning that led to the mistake.

Simon Boudewijn
5 months ago

No, it is an existential threat; at the least it will destroy family and values.

Saul D
5 months ago

The Q* (Q-Star) work is coming in hot from what I read, as it seems to accelerate problem solving and enhance what looks like conceptual thinking. So, for instance, ChatGPT sounds fabulous but actually isn’t that great at consistent logic or arithmetic without help (think of an English-language student opining on an engineering problem).
However, as I understand it, Q* shortcuts some type of model-generation process, and this seems to boost conceptual understanding, so it ‘gets’ logic and maths better and faster, and this is likely to improve over time.
Consequently, AI has the real potential of getting smarter than us and what we know now, with the potential of pushing beyond our current knowledge boundaries.
The challenge is that AI may become the equivalent of an autistic savant: huge intellectual skills, but struggling with empathy and human sensitivities. If AI has power or influence (even through recommendation systems), that could put people at risk. What seems logical is not always right.
However, for us, like Pandora, this is not a box we can close. Our skills at spotting, judging and interacting with AI-generated content are only just beginning, and our skepticism about received opinion needs to go up another couple of orders of magnitude.

Andrew Thompson
5 months ago

Yes, let’s ban all research into this dangerous thing until we understand it a bit better. Okay, great idea: we won’t research if you promise not to research. Okay then, deal… (Research all round it is then, guys.)

Felix Hornoiu
4 months ago

We will look back on this and laugh hard. I find this a tech version of Joe Rogan’s MMA episodes: lots of gossip and speculation over facts that can’t be verified.
Whenever there’s a rumble over some corporate issue, it’s easiest just to follow the money. That’s it!

Kolya Wolf
4 months ago

More intelligence is a Good Thing.

Douglas Redmayne
5 months ago

Hopefully the American middle-class tech bros will succeed in generating AGI first and creating its base code; otherwise a Woke or a Chinese AGI will be bad news.

Simon Boudewijn
5 months ago

Just for fun, some poetry and a Biblical reference to this approaching horror:

The Second Coming, by W. B. Yeats
Turning and turning in the widening gyre   
The falcon cannot hear the falconer;
Things fall apart; the centre cannot hold;
Mere anarchy is loosed upon the world,
The blood-dimmed tide is loosed, and everywhere   
The ceremony of innocence is drowned;
The best lack all conviction, while the worst   
Are full of passionate intensity.

Surely some revelation is at hand;
Surely the Second Coming is at hand.   
The Second Coming! Hardly are those words out   
When a vast image out of Spiritus Mundi
Troubles my sight: somewhere in sands of the desert   
A shape with lion body and the head of a man,   
A gaze blank and pitiless as the sun,   
Is moving its slow thighs, while all about it   
Reel shadows of the indignant desert birds.   
The darkness drops again; but now I know   
That twenty centuries of stony sleep
Were vexed to nightmare by a rocking cradle,   
And what rough beast, its hour come round at last,   
Slouches towards Bethlehem to be born?
Source: The Collected Poems of W. B. Yeats (1989)

KJV, Revelation 13

The Beast Out of the Earth
11And I beheld another beast coming up out of the earth; and he had two horns like a lamb, and he spake as a dragon. 12And he exerciseth all the power of the first beast before him, and causeth the earth and them which dwell therein to worship the first beast, whose deadly wound was healed. 13And he doeth great wonders, so that he maketh fire come down from heaven on the earth in the sight of men, 14And deceiveth them that dwell on the earth by the means of those miracles which he had power to do in the sight of the beast; saying to them that dwell on the earth, that they should make an image to the beast, which had the wound by a sword, and did live. 15And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed.
The Mark of the Beast
16And he causeth all, both small and great, rich and poor, free and bond, to receive a mark in their right hand, or in their foreheads: 17And that no man might buy or sell, save he that had the mark, or the name of the beast, or the number of his name. 18Here is wisdom. Let him that hath understanding count the number of the beast: for it is the number of a man; and his number is Six hundred threescore and six.

Altman….