A decade ago, Amazon seemed to be ushering in a beautiful future. Its golden child, Alexa, delivered one of those moments Arthur C. Clarke famously described, when a sufficiently advanced technology becomes “indistinguishable from magic”. It appeared to herald a new age of artificial intelligence. But that was a generation of technology ago. Today, Alexa feels like a tired novelty: the technology never evolved as Amazon had envisioned, and the company has been forced into a game of catch-up. Now, with news that it is buying its way into the OpenAI ecosystem with a $10 billion stake in the emerging powerhouse, the strategy has taken a new turn.
That $10 billion buys some shares, of course. But more importantly from Amazon’s perspective, OpenAI will be running some of its workloads on Amazon’s in-house chips.
Amazon designs two chips through an in-house unit, Annapurna Labs. The first, Trainium, is built specifically for the heavy lifting of training AI models; training chips are the cutting edge of the market, and they remain the area where Nvidia holds a clear technical advantage. The second, Inferentia, is built for inference: serving user queries.
Inferentia specializes in the part of the business where the big AI producers have become most price-sensitive. In very loose terms, it costs about $4 an hour to run a top-end Nvidia chip. That isn’t much, until one is serving hundreds of millions of queries a day. In the future, as these chips take ever more strain, and as the cost of electricity stays flat, it will be those marginal differences in price per query that drive the big chip purchases. On that score, Amazon has a stable, low-power chip that is hardly the best on the market, but which keeps on trucking.
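To see why those margins matter, here is a minimal back-of-envelope sketch of the arithmetic. The hourly rate comes from the figure above; the throughput and query-volume numbers are illustrative assumptions, not reported figures for any specific chip or company.

```python
# Back-of-envelope inference economics.
# Only the $4/hour figure comes from the article; throughput and
# volume below are illustrative assumptions.

CHIP_COST_PER_HOUR = 4.00        # rough hourly cost of a top-end chip, in dollars
QUERIES_PER_SECOND = 50          # assumed throughput of one chip serving a model
QUERIES_PER_DAY = 300_000_000    # assumed daily volume for a large AI service

# Cost of serving a single query on one chip.
cost_per_query = CHIP_COST_PER_HOUR / (QUERIES_PER_SECOND * 3600)

# Daily serving cost at that volume.
daily_cost = cost_per_query * QUERIES_PER_DAY

print(f"cost per query:     ${cost_per_query:.6f}")   # ~$0.000022
print(f"daily serving cost: ${daily_cost:,.0f}")      # ~$6,700/day

# A chip that is even 20% cheaper per query saves roughly $1,300 a day
# at this volume, and the saving scales linearly with query count.
```

Under these assumptions the per-query cost looks like rounding error, but at billions of queries a day even small percentage differences compound into serious money, which is the whole logic of the inference-chip race.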
The story that Amazon wants to tell from this deal is that “OpenAI uses our stuff.” That lets the company pitch to product managers across the globe and crank up the revenues. It also supports the kind of chip sales volume it will need to finance the R&D for the next generation of chips. And it means Amazon will be learning more than ever in the process, given the sheer number of users OpenAI has.
Put that together with what the company already does with AWS, selling computing time rather than computing chips, and you have a powerful back end for any business. A company with a huge global footprint, rock-solid infrastructure, and proprietary chips sounds seductive to most IT managers.
In a sense, this is everyone’s strategy now. AI is turning into a monster of an infrastructure problem, and no one wants to end up beholden to a single supplier. Nvidia’s total dominance of this phase, and frustration with its rollout, have accelerated that shift for many companies.
The biggest corporations, those that can shoulder the billion-dollar cost of training a model and the multi-billion-dollar cost of chip development, are integrating their own workflows right back to the factory and walling themselves off from the open supply chain. Meta is making its own chips; Google now chiefly runs Gemini on its own processors; Apple launched Apple Silicon five years ago.
Nvidia’s head start won’t be obliterated, even by a company as big as Amazon. In five years’ time, Nvidia will probably still be driving the market. The overall pie will continue to grow, bloating its balance sheet. But work that Google, Meta or Amazon might once have sent its way will have been quietly brought in-house. And Amazon will likely be selling cloud services at a price no one else can match.






