‘You are either an instrument to the state, or you are a pirate ship to be scuttled.’ (David Howells/Corbis/ Getty)
Last month, American forces moved on Caracas. Bombs fell across the Venezuelan capital. A sonic weapon left Maduro’s guards bleeding from their noses. Nicolás Maduro, the man Washington had been trying to topple for years, was taken. It was the kind of operation that gets written into history books. It was also, it turns out, the kind of operation that ran on the same technology you may have open in another tab. Claude, Anthropic’s AI model, the one that helps millions of people draft emails, summarise documents, and write code, was reportedly embedded in the operational planning of the raid, running through the tools of Alex Karp’s Palantir, the company that has spent two decades wiring Silicon Valley into the American defence apparatus.
It has been a busy few weeks for Claude: its ever-improving coding capabilities were also credited with wiping trillions off software stocks (dubbed the SaaSpocalypse). Now, one of these developments Anthropic’s leadership was, presumably, very happy about. The other, rather less so. For months, Anthropic has been trying to maintain some say over how its models are used. While the company is reportedly willing to loosen its usage restrictions, it still wants assurances that Claude would not be deployed for mass domestic surveillance or fully autonomous weapons systems. These demands are not out of character. They are the founding commitments of a company whose CEO, Dario Amodei, has staked its identity on being the responsible actor in an industry not known for responsibility. Indeed, Anthropic has pushed for greater regulation at a moment when the White House has made deregulation the centrepiece of its AI agenda.
That tension has been building since last summer, when Anthropic was awarded a $200 million Pentagon contract, making it the first AI model developer deployed in classified military operations. But it was the Maduro raid that brought it to a head. As Axios first reported last week, the Pentagon’s response to Anthropic’s concerns has not been an apology, but an ultimatum: grant us unrestricted rights to your models, or be designated a “supply chain risk” — a label normally reserved for firms from hostile foreign powers, such as Huawei. This is the American state telling a private company that its ethics are a threat to national security. That its values are, in the language of trade policy, a hostile act. Not only would this see Anthropic’s contracts with the Pentagon voided; it would also force other companies that work with the Pentagon to certify that they aren’t using Claude in those workflows.
The writing had been on the wall for weeks. Secretary of War Hegseth had already made this explicit at a January event announcing the Pentagon’s new partnership with Elon Musk’s xAI. Back then, he announced that the agency would not “employ AI models that won’t allow you to fight wars”. He was talking about Anthropic. This week, the rhetoric led to a face-to-face confrontation. Hegseth summoned Dario Amodei to the E-Ring for a Tuesday morning meeting that senior Defense officials described as a “sh*t-or-get-off-the-pot” moment. According to reports, Hegseth said: “The problem with Dario is, with him, it’s ideological. We know who we’re dealing with.”
In the meeting, Hegseth reportedly told Amodei that when the Pentagon buys a Boeing plane, Boeing doesn’t get to tell them where to fly it. He set a Friday 5pm deadline to sign away their “safety filters”. Officials have even discussed invoking the Defense Production Act, which enables the government to exert control over domestic industries during national security crises. This would move beyond a contract dispute into a form of digital nationalisation, giving the government the authority to essentially seize control of the technology and strip away the guardrails themselves.
Regardless of Anthropic’s decision on Friday, the state is making it clear: AI is no longer a product; it is a requisitionable resource. Now, ideological clashes in AI are nothing new (rumour has it that Musk built xAI to stop “woke AI” from ruining his mission to Mars). But the transition we’re seeing is from a boardroom culture war to an existential test of state sovereignty. In the 2010s, tech CEOs were treated like heads of state. Today, they are being reminded that their “safety filters” and private ethics cannot dictate American foreign policy. The state has reclaimed the veto.
This is a far cry from the world Alex Karp himself pitched to the tech world last year in his bestselling book, The Technological Republic. In it, Karp argued that Silicon Valley had a civic obligation to re-engage with the American state. Not because it was forced to, but because liberal democracy required it. It was a rallying cry for voluntary partnership: from software to hard power.
Silicon Valley’s old California ideology — apolitical, libertarian and vaguely utopian — was dying. Karp wanted to replace it with something more serious. In his view, the world’s best minds were busy building phone apps and marketing algorithms. Brilliant people, Karp argued, were wasting their talents on shiny new toys while China built weapons.
His diagnosis was correct, but his prescription was not without its flaws. If you accept his premise that Silicon Valley must serve the state because the stakes are existential, then voluntary cooperation was always going to struggle. The higher the stakes, the less the state was ever going to ask nicely. When Oppenheimer told Truman he had blood on his hands after Hiroshima, Truman’s response was withering: “He hasn’t half as much blood on his hands as I have. You just don’t go around bellyaching about it.” Dario Amodei is not Oppenheimer. But both the logic and the tension are the same. Like nuclear weapons, the ability to deploy AI at scale is likely to be the defining strategic capability of the 21st century. Karp wanted a new Manhattan Project. In many ways, he got one.
This is what the White House’s AI Action Plan makes plain. It mandates that the government will only contract with frontier AI developers whose models are deemed free from “ideological bias”. Anthropic’s safety filters are precisely what its authors had in mind when they wrote it. The Pentagon’s ultimatum is not a departure from policy. It is the policy. The truth is that Karp’s book never specified what voluntary cooperation would actually require. Now the Pentagon has filled in the blank. The answer is that cooperation need not be voluntary. Karp called it a republic because that’s what he hoped it would be. After all, his own company was already aligned with the state. But the republic is already dead. The strategic importance of AI means that it was always going to be an empire in waiting.
Like the merchant-captains of the 18th century, frontier companies are granted the right to profit only so long as their prizes, compute and intelligence, serve the Crown. They can chase commercial success, attract investment, hire the best engineers, build the most capable models. But the moment their values conflict with the state’s requirements, they learn what they actually are: subjects. The Pentagon’s ultimatum to Anthropic is not an aberration. It is a message to the entire industry: you are either an instrument of the state, or you are a pirate ship to be scuttled.
Some have already read that signal. Meta bent the knee and rewrote its policies in 2024 to permit military and defence applications of its open-source models, after reports emerged that Chinese researchers had adapted them for PLA use. On Monday, Musk’s xAI signed an agreement to allow the military to use its model, Grok, in classified systems. Google and OpenAI have also been in talks to move into the classified space. This is the trajectory.
However, the Trump administration’s recent launch of the US Tech Force, billed as America’s elite corps for the AI revolution, reminds us that the privateer era is transitional. States have always, eventually, built their own navies. But while that navy is still being built, the Crown still needs its privateers. And it intends to extract its due. When Trump approved H200 chip sales to China in January, he imposed a 25% tariff and a 50% volume cap. Just as the Spanish Crown demanded the Quinto Real — a royal fifth of every privateer’s plunder — Washington now claims a sovereign royalty on the most strategic resource of the century: compute. But the Quinto Real was never just about the gold. The Crown controlled the ports, the licences, the routes. Who could sail, where, and on what terms. Washington is doing the same. The tariffs and caps are not trade policy. They are the architecture of a technological empire: the terms on which the rest of the world gets to participate in the intelligence age.
The strategic logic here is not new. Alfred Thayer Mahan argued that whoever controlled the sea lanes controlled the century — not just by sailing them, but by setting the terms on which rivals could access them at all. Britain didn’t just use the oceans. It ruled them by decree. Today’s sea lanes are chips, clusters, and the models trained on them. Washington is not simply trying to win the AI race. It is trying to become the power that decides who else gets to run it, and on what terms. As Trump’s AI Czar David Sacks has repeatedly stated, “I would define winning as the whole world consolidates around the American tech stack”.
But America is not the only technological empire. On the other side of the Pacific, Beijing is playing a similar game. Already its internet regulator has banned its biggest technology companies from buying Nvidia chips entirely, ordering ByteDance and Alibaba to switch to domestic alternatives. The message, as one executive put it, is “loud and clear”: AI is a sovereignty project, not a tech one. Data centres have been instructed to purge foreign components entirely. Beijing is calling it “algorithmic sovereignty” — complete control over its computing infrastructure by 2027. That logic is already visible in the behaviour of Beijing’s so-called “Six Tigers” (Zhipu, Moonshot, DeepSeek, MiniMax and their peers), which in recent weeks have launched a flurry of highly specialised frontier models focused on coding, reasoning, and cost efficiency. The emphasis is not simply on capability, but on independence: training on domestic chips where possible, optimising for lower computational footprints, and aggressively undercutting Western pricing in emerging markets. For China, the main aim is not to reach AGI, but to diffuse its sovereign technology stack around the world.
This is looking less far-fetched than it did even a year ago. In December, China built a working prototype of an EUV lithography machine — the technology Washington believed it had successfully ring-fenced. Beijing is building its own empire, on its own terms — and its frontier firms are integrated into that project from the outset. Both Beijing and Washington understand that whoever controls the compute, the models, and the political authority to deploy them will likely own the next century. Unfortunately for Anthropic, that framing leaves no room for negotiation. You are either an instrument of the empire, or you are an enemy to be managed. Karp wrote the right book. He just gave it the wrong title. The technological republic is dead — if it ever existed at all. The Anthropic saga shows that the technological empire, however, is well underway.


