2 March 2026 - 7:00am

A key lesson of this weekend is to be careful who you go to war with. Just past 5pm on Friday, the deadline for an agreement between AI giant Anthropic and the US Department of War elapsed. Donald Trump posted on Truth Social: “We don’t need it, we don’t want it, and will not do business with them again!” Accusing the company of being “far-Left”, Trump said that Anthropic would be denied access to all government contracts as a “security risk”. It was announced later that day that Defense Secretary Pete Hegseth and his department had reached an agreement with OpenAI for the government to use its model instead.

Anthropic had expressed displeasure over its systems being used for mass surveillance of US citizens, and in “autonomous killing machines”. The company was particularly concerned that the decision to pull the trigger was being made without reference to a “human in the loop”.

The Department of War didn’t appear to understand that it is only a consumer here. There are only three companies selling frontier models from which it can purchase: Anthropic, Google and OpenAI. It could no more magic up its own version of pin-perfect AI than it could make its own iPhone. Stepping into the breach are the noble servants to power at OpenAI, for whom the deal was always going to be more favourable anyway.

For a start, Anthropic already has consistent positive revenue streams. This is down to the tens of thousands of Claude Code users who are happily paying $200 a month for access, plus much more in API calls. OpenAI has not yet cracked that part of the market. For CEO Sam Altman, any way to gain more revenue is therefore vital. Not so with Anthropic CEO Dario Amodei, an unusually philosophical figure for a tech titan.

Clearly, Anthropic is in this for the very long haul. It sees AI less as a business proposition and more as a new force acting in the world with substantial levels of power. The sci-fi world of autonomous killing machines and total surveillance is already hardwired into its outlook. Meanwhile, the Department of War is looking to shape the industry in its own direction, using the carrots and sticks it possesses.

Hegseth, though, hasn’t fully comprehended that the US government may actually have less leverage here than it thinks. A more human-moulded, more finely graded model, one that can grasp the inordinate complexity of war, is precisely what the Pentagon should be seeking. Instead, it has chosen a less zippy model in ChatGPT. But in this coming world, a distance of milliseconds matters.

We are now in a moment where the often tedious questions of “AI safety” are starting to hit real-world applications. The laws that bound the DoW to certain limits on surveillance were designed for an era where the constraints were physical. It was simply not possible for the state to aggregate information on every individual simultaneously; now it is. We will need new laws, ones which incorporate many of the ideals that AI safetyists have been championing.

But these are not the current concerns of the Department of War, or of the wider Trump administration. The President and his allies are looking for companies they can get on side to do their bidding and develop military efficiency, without moral qualms. This is where OpenAI comes in. Desperate to obtain influence in government, as well as drive revenue, Altman’s company is now firmly in the fold.

That’s why when Altman announced the deal, he needed it to look like a victory. In fact, the terms he secured were essentially identical to the ones Anthropic had been demanding all along. “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons,” Altman wrote. “These are our main red lines… we share them with Anthropic.”

In the speed and decisiveness with which it has settled this spat, the Trump administration has followed a classic Silicon Valley principle: move fast and break things. What it doesn’t yet understand is that other famous Silicon Valley principle: have a 50-year vision.


Gavin Haynes is a journalist and former editor-at-large at Vice.

@gavhaynes