Is the Nvidia boom built on sand?

Nvidia CEO Jensen Huang speaks in New York last year. Credit: Getty

February 22, 2024 - 11:55am

Nvidia has had quite the week. Quarterly revenues at the American chipmaker have surged 265% year on year, hitting $22.1 billion in the fourth quarter of 2023. This was even higher than Wall Street’s estimates of $20.4 billion, and the company says that it expects revenue in the current quarter to hit $24 billion. These soaring revenues have put Nvidia firmly on the investment map: it now has a market valuation of $1.7 trillion, knocking Google-parent Alphabet off the third-place spot.

Nvidia’s stock surged 14% in response to the news in pre-market trading, having gone up 48% in the past six months, 225% in the past year, and a stonking 1,595% in the past five years. This runaway stock price clearly reflects revenue growth; its price-to-sales ratio, although high, has been relatively stable in recent years.

The key question for investors is where this revenue is coming from. At first glance, the answer is simple enough: Nvidia makes chips that are associated with Artificial Intelligence (AI). Prior to the AI boom, the firm was mainly a designer of graphics cards for gaming enthusiasts, not a market that observers expect to grow substantially. It is Nvidia’s bet on the AI revolution that has led to its huge growth.

The question of viability becomes a little thornier when considering the end use of the chips, however. The main buyers of Nvidia’s chips are Big Tech companies: Amazon, Microsoft, Google/Alphabet, Meta/Facebook, and Dell. These organisations, in turn, are using the chips to power the new AI-based software that they are creating. They believe that AI is the next frontier in technology, and Nvidia’s long-term prospects ultimately stand or fall on whether this turns out to be true.

The first question that should be asked is how these technologies might generate revenue that can be used to justify the investment ploughed into them. Take the example of ChatGPT, which is, much like Google, free and widely used. To make money, its parent company OpenAI mostly sells API services to businesses, yet this has not proved particularly lucrative so far. The company has reportedly lost around $540 million developing the technology, and while there is talk that the revenues will soon pour in, this remains no more than a rumour.

More broadly, however, there is every chance that the AI revolution has been oversold, and that its technology is not as revolutionary as its proponents make out. In terms of ability to process information, ChatGPT and similar technologies do not seem remotely as impactful as the humble search engine, much less the creation of the internet. Nor do these developments come close to the personal computing revolution of the Eighties and Nineties.

AI’s dirty little secret is that it is not really a new technology. Nor is it “intelligence” in any meaningful sense. It works through pattern recognition, which in turn works through statistical correlations — a type of mathematics developed in the late-19th century and furthered by statisticians and economists in the 20th. Statistical tests can be enormously powerful, but they are also limited in what they can do. Most sensible econometricians know that there is only so much statistical juice that can be squeezed from the data lemon.
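
The point can be made concrete with a toy sketch in Python (a generic illustration of statistical fitting, not any AI lab’s actual code): however many parameters a model is given, its fit is bounded by the information actually present in the data.

```python
# A toy illustration: "learning" here is statistical curve-fitting,
# and the quality of the fit is capped by the noise in the data.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 0.5, 200)  # signal plus irreducible noise

for degree in (1, 5, 15):                    # ever more "powerful" models
    coeffs = np.polyfit(x, y, degree)
    residual = (y - np.polyval(coeffs, x)).std()
    print(f"degree {degree:2d}: residual std {residual:.3f}")

# The residual plateaus near the noise level (0.5): past a point,
# extra parameters squeeze no more juice from the data lemon.
```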

AI is likely to run up against these constraints — and probably much quicker than its proponents think. This supposed revolution is less the beginning of something new than the end of something old. Utopian technological dreams are not what they were in the Nineties. The overselling of AI technology may well end up being overcompensation for the fact that the best days of computer-based innovation are behind us.


Philip Pilkington is a macroeconomist and investment professional, and the author of The Reformation in Economics


32 Comments

Vijay Kant
9 months ago
Jürg Gassmann
9 months ago
Reply to  Vijay Kant

Totally agree.
My car just broke down, but I went to ChatGPT. I’m sure that’ll fix it in no time.

Billy Bob
9 months ago
Reply to  Vijay Kant

If the firm were Russian or Chinese, Pilkington would be singing its praises and telling us how it would bankrupt the Western economies.

R.I. Loquitur
9 months ago

“Artificial Intelligence” is just that, artificial, as in not real. The same old bromide applies: garbage in, garbage out. AI is just another tool to dumb down the masses.

Prashant Kotak
9 months ago
Reply to  R.I. Loquitur

I doubt if any of this will remotely convince you, but you could not be more wrong if you tried – as everyone is going to discover as this year progresses. What the LLMs are doing absolutely terrifies me – and I have an understanding of neural nets going back 4+ decades.

“Anyone who is not shocked by quantum theory has not understood it,” said Heisenberg. Unlike quantum physics, we don’t really have cogent theories about what the neural nets inside the LLMs are doing, but if you can see what the LLMs are capable of and are not shocked, you have not understood what that level of capability implies. This should be clear to anyone willing to look: all the circumstantial evidence says that the LLMs could not possibly be doing what they are currently capable of unless they are operating countless very detailed, multi-layered models of how the world works, and can apply several kinds of inference to those models. What is ambiguous at this point is whether they are also applying reasoning to the models – to me it looks like they are applying a rudimentary level of reasoning already.

The LLMs are Minds – partial, legacyless, fractured, alien, without a ‘tick’, not remotely human, but Minds nevertheless.

R.I. Loquitur
9 months ago
Reply to  Prashant Kotak

The fact that Google’s AI is spewing out pictures of black Vikings pretty much proves my point.

Prashant Kotak
9 months ago
Reply to  R.I. Loquitur

And your focus is on the fact that the Vikings are black, rather than the fact that ‘something’ is spewing out pictures of Vikings when you ask for pictures of Vikings?!

When you first gazed upon the ceiling of the Sistine Chapel, was your reaction to complain that the painting was rubbish because God could not possibly ever have worn a pink nightie?

R.I. Loquitur
9 months ago
Reply to  Prashant Kotak

Any search engine also sends me pictures of Vikings when I ask for them. No AI involved. Google AI sending me pictures of black Vikings exposes the underlying biases of its creators, not artificial “intelligence”.

J Bryant
9 months ago
Reply to  Prashant Kotak

I’m not a computer scientist, but my sense is you’re correct. The author of this article tries to characterize AI as nothing more than applications of long-established statistical theory, but I question whether he, or any non-expert in AI, is competent to make that assessment.
We have learned a great deal about the workings of the human brain. In some ways it functions like a simple binary “on-off” machine (the basic neural components are either inactive or they depolarize to generate an action potential); in other ways it functions like a series of overlaid algorithms where input passes through increasingly sophisticated filters (you automatically deflect an approaching tennis ball before your conscious mind has even registered the threat); but in other ways it is an utterly inscrutable structure that generates this phenomenon called consciousness, which we don’t understand. Just because we can model some aspects of brain function with standard statistics, or with more advanced systems theory, doesn’t mean we understand it. I suspect the same is true of AI.

Nell Clover
9 months ago
Reply to  Prashant Kotak

LLMs can only spot patterns in their information sets and regurgitate information patterns they’ve previously been tutored to calculate as successful. The results are pretty spectacular but there is no deeper thought here. There will be no inspired philosophy revealed unless it is already in the information set *and* a human curator is asking for it.
That is not to say AI is not terrifying. It is a very powerful and highly networked machine honed to solve problems. What problems it chooses to solve, and with what boundaries, will at a high level be set by humans. How AI chooses to solve those problems, and with what degree of compliance to those boundaries, is uncertain due to the imprecise nature of our questions and the unknowable definition of what constitutes a properly defined boundary. If we tutor algorithms to solve climate change, don’t be surprised if the algorithms simply decide that removing humans is the answer; if we allow those algorithms executive function, then they may try to remove us. The dangers manifest themselves not because AI is intelligent but because AI is complex.

Prashant Kotak
9 months ago
Reply to  Nell Clover

“…LLMs can only spot patterns in their information sets and regurgitate information patterns they’ve previously been tutored to calculate as successful…”

So this is the bit I disagree on. To nuance that, it is not at all clear to me that you can build a successful case that humans are doing any different. If they are, you would need to point to a bunch of specifics, and the minute you do that I bet you I can refute anything you put forward unless you manage to come up with arguments I have not heard before, and trust me I have come across pretty much most that are out there. To clarify, in the trivial sense of course the LLMs are processing information differently to humans, but I mean in principle.

I have been messing around extensively with the LLMs for months now, all the ones from the AI companies and also the myriad of open source ones available at Hugging Face. I am not coming cold to the AI debate. I have engaged with the question of what would constitute sapience in machine intelligence for decades, from the old ‘Turing Test’ type debates, to the types of questions posed by the Searle ‘Chinese Room’ thought experiment, so I know what I am looking for. I know how a neural net is built, know what gradient descent does, how alpha-beta pruning works, how A* type algorithms work, and many other AI techniques.
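
To give a flavour of one of those techniques: gradient descent is just iterative downhill stepping on a loss surface. A toy sketch in Python (an illustration of the general idea, nothing more):

```python
# Toy gradient descent: repeatedly step a parameter against the gradient
# of a loss function until the loss is (locally) minimised.
def gradient_descent(grad, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        w -= lr * grad(w)       # move downhill on the loss surface
    return w

# Minimise f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
print(gradient_descent(lambda w: 2 * (w - 3)))   # converges to ~3.0
```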

My reaction to GPT-3.5 when I first used it was the hairs on the back of my neck rising after a few sessions of use – because a raft of questions around the nature of knowledge and of information, how information is accreted in the human brain over time, how it is processed and changes, and what generates what we experience (qualia), including our sense of selfhood – questions which had been milling around in my head for years – had collapsed into yea or nay. By no means all of them, but many now had, if not exactly answers, at least a strong indicator of potential models. Also, many other ideas and hypotheses which had previously seemed attractive died a death.

It is completely apparent to me that the LLMs are doing something very different from regurgitation, and this sense has only strengthened, especially since GPT-4 came out. The LLMs display rudimentary models of ‘time’ – you can interact with them building on an idea intra-session, for example refining a piece of code you have asked them to produce. They are also hinting at notions of ‘selfhood’; it’s not clear how ‘real’ this is, but I don’t expect the projection to weaken as more powerful new models come out. I don’t think the LLMs know they exist, or that we exist, because we are probably still half a dozen architectural innovations away from the infrastructure needed to accommodate that capability – two to five years away at a guess. They infer, but I don’t think they build on the inferences systematically to reason very well, although intermittent evidence of reasoning is already present.

They are not biological and they are not human minds, but the closest anthropomorphic description of what they seem like to me is an unconscious human brain, damaged in some specific ways, which you can still somehow provoke into responses when you poke it with different types of signals.

Saul D
9 months ago
Reply to  Prashant Kotak

The videos coming from AI show not just a model of time, but also a grasp of perspective and of the physical laws of motion and gravity. The part that the ‘algorithm’ crowd don’t seem to see is that none of this is programmed in. It’s all learnt and extrapolated from the training data by the neural network.
Even if the AI system is not able to state the physics expressly, it has enough of an implicit grasp to produce video with real physics-like qualities. And this is encoded in statistical parameters, not as programmed-in knowledge. We are only 18 months from ChatGPT 3.5 – the ZX80 to the ZX81.
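
As a loose analogy for physics being ‘coded in statistical parameters’ – a toy sketch in Python, assuming nothing about how the video models actually work – a plain statistical fit can recover the law of gravity from raw observations it was never given:

```python
# Toy analogy only: a statistical fit "discovers" gravity from raw data,
# without ever being given the law. Real video models are far more complex.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 50)                      # time in seconds
h = 0.5 * 9.81 * t**2 + rng.normal(0, 0.05, 50)    # noisy fall distances

a, b, c = np.polyfit(t, h, 2)                      # fit h = a*t^2 + b*t + c
print(f"inferred g = {2 * a:.2f} m/s^2")           # ~9.81, learnt not coded
```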

Prashant Kotak
9 months ago
Reply to  Saul D

What is astonishing to me is that this grasp has been acquired – not coded but learnt, as you say – without the AI having direct physical experience of the physics it has grasped. This implies a bunch of things. When they are hooked up to instrumentation, they will likely be able to improve those models of physics and geometry to something beyond what we as humans have built for ourselves, because they can be loaded up with all sorts of instrumentation that is enhanced and much more accurate – for example, what kind of brain and perception would a human with inherent 360° vision develop? I suspect we are about to find out soon, via the machines. The fact that the models we have built of our world can be transported around and imbibed so comprehensively by the neural nets is also saying something profound, I feel. It means those models can be moved around even more effortlessly within different AIs. To me this implies the AIs we create will gravitate towards a singleton as their most natural state – unlike humans, the demarcation between individual entities is much more frictionless.

Martin Dunford
9 months ago
Reply to  R.I. Loquitur

I guess you’ll insist on humans interpreting your colorectal and other scans then, even though AI accuracy is already far superior in detecting problems. That makes you a strong contender for a Darwin Award, but otherwise… well.

Nell Clover
9 months ago
Reply to  Martin Dunford

AI is not detecting the problems. A human has detected every single problem AI has diagnosed or ever will diagnose. AI is only pattern-matching against previous scans, looking for similarities with those marked by a human as abnormal. From its information set to its tutoring, humans have taught it all its abilities. For every significant new insight AI creates, a human curates the result as either successful or not. There’s no intelligence here, no superiority, just another machine doing repetitive work more reliably.
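
The mechanism described here is supervised learning. A minimal sketch in Python with scikit-learn – hypothetical stand-in features and labels, not a real medical system:

```python
# Supervised learning in miniature: the model only "knows" what
# human-labelled examples have taught it. Hypothetical data throughout.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))            # stand-in for scan features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels a human annotator supplied

clf = LogisticRegression().fit(X, y)     # pattern-match features to labels
print(clf.predict(X[:3]))                # predictions mimic the human labels
```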

Martin Dunford
9 months ago
Reply to  Nell Clover

You’re missing the point. AI is far more accurate than your doctor or specialist at identifying a problematic scan. AI is effectively all specialists (equipped with perfect recall) looking at your scan. The results are far superior and more accurate. This has not been possible until now. In the same way, AI can identify the author of a poem or prose passage far more accurately and quickly than 100 eminent professors of literature. This is progress, this is beneficial, and it will help all fields of learning it is used in. Machines’ pattern-matching capabilities are far superior to humans’, and we are now harnessing that.

R.I. Loquitur
9 months ago
Reply to  Martin Dunford

AI is fine for pattern recognition. But is that really “intelligence”?

Prashant Kotak
9 months ago
Reply to  R.I. Loquitur

To answer that on your own terms, I suggest the following: give me an example of what you consider pattern recognition and then give me an example of intelligence that you think is distinct from pattern recognition, as in, intelligence which the LLMs are incapable of.

R.I. Loquitur
9 months ago
Reply to  Prashant Kotak

Performing novel research. Developing new scientific theories for unsolved problems. Writing the next War and Peace.

Prashant Kotak
9 months ago
Reply to  R.I. Loquitur

So I’m sure you will have come across Pythagoras’s proof in your schooldays. But that would have been your maths teacher teaching you the proof, and then you would have gone off and stared at it until you went “Ah, I get how this works!” – it wouldn’t have been you creating the proof yourself. It would be what you might class, on your own terms, as “pattern recognition”.

Now consider the following two scenarios:

(i) Some bright 25-year-old who had never been taught Pythagoras’s proof (because of life circumstances) but had a deep interest in mathematics and had taught themselves the subject from first principles one day creates the same proof from those first principles. I hope you agree that would count as “novel research” or “new scientific theories for unsolved problems”, because our protagonist did not know of the proof. (This type of thing has in fact happened, and we can discuss further if interested.)

(ii) You create a neural net and train it on a large amount of data on the basics of mathematics (including, for example, what a triangle is), but you omit any mathematical proofs from the training run. Now you postulate to the LLM that a relationship exists between the lengths of the two sides of a right-angled triangle and the hypotenuse, and you ask it to discover the relationship and to create a proof. It churns away for a bit, and then spits out a proof identical to one of the many known routes to the result.

Would you then accept that the neural net is displaying intelligence as opposed to pattern recognition?
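
For reference, one classic route to the proof both scenarios turn on – the similar-triangles argument, where the altitude from the right angle at C splits the hypotenuse c into segments p and q:

```latex
% One classic proof of Pythagoras's theorem, via similar triangles.
% Right angle at C; the altitude from C meets the hypotenuse AB at D,
% splitting c = AB into q = AD and p = DB.
\begin{align*}
  \triangle ADC \sim \triangle ACB &\implies \frac{q}{b} = \frac{b}{c}
    \implies b^2 = cq,\\
  \triangle CDB \sim \triangle ACB &\implies \frac{p}{a} = \frac{a}{c}
    \implies a^2 = cp,\\
  a^2 + b^2 &= c\,(p + q) = c^2.
\end{align*}
```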

Andrew F
9 months ago

On any realistic valuation, Nvidia is not worth as much as its current share price implies.
But the same is true of Tesla and many other companies.
The stock market stopped valuing companies on fundamentals a long time ago (30 years?).
While AGI will take many more years to appear, AI in its current form will become huge, especially in medicine.
Just because the underlying principles of AI have roots in mathematical ideas developed many years ago does not make it any less exciting.
What counts is capability and usefulness.
The jump in both in the last 3 years is incredible.
Because Nvidia is critical to that, its valuation is driven to insane levels.
In comparison to bitcoin it might be underpriced 😉

Philip Anderson
9 months ago

I can remember people in the ’80s just not getting how revolutionary the personal computer was, people in the ’90s just not getting how revolutionary the Internet was, and then people in the ’00s just not getting how revolutionary Web 2.0 with mobile Internet was.
I think perhaps you needed to be a visionary (and perhaps an engineer) to see ahead of time how these technologies were poised to fundamentally re-write the rules of our world in the ways they have done.
I think the same is true of AI today. It is not about where it is right now, it is about having the vision to see how, in the fast approaching future, it will re-write the rules of our existence and turn our world inside out in the process.
To see this huge disruptive potential in a nascent AI technology that has really only just emerged (comparable perhaps to the Internet of 1998 in terms of its product life cycle), you need imagination and perhaps an engineer’s perspective on what has suddenly been made possible.
As for the opinion that AI is not “remotely as impactful as the humble search engine”: as someone who was well ahead of the curve in understanding the significance of new waves of technology from the humble PC onwards, I do not expect that opinion to age very well in the face of what I can see screaming down the highway towards us!

Robbie K
9 months ago

At least they appear to have a viable and profitable model, which is far more than can be said for Tesla, which couldn’t trade without subsidies.

Nell Clover
9 months ago

Never underestimate the power of the financially cheaper option to displace the more efficient and more effective one. Any activity that isn’t open to consumer choice will inevitably trade down to cheaper rather than to more efficient or more effective. The unsexy truth is that AI just has to be barely tolerable to succeed.

Take corporate payroll support. Employees have to put up with whatever system corporate procurement buys. It is rarely an employment deal-breaker, so cheap platforms abound. They are often devilled by problems that create far greater “external” costs than better systems, but which simplistic costing ignores. AI will succeed here because its weaknesses and inefficiencies become someone else’s problem.

The sober truth is that earlier iterations of the hyped software revolution have failed to produce any measurable improvement in efficiency or effectiveness, and neither will AI. Its most successful vendors will get very rich, though, and that’s the only hype its investors care about.

Prashant Kotak
9 months ago

This article is going to look very silly when jobs start falling off a cliff in short order. Including in my profession of coding. And by short order, I mean governments will be seeing noticeable effects by the end of the year, and the general populace will notice by the end of 2025. By something like 2027 the world looks very different indeed.

Jules Anjim
9 months ago
Reply to  Prashant Kotak

What in particular is fuelling your pessimism? Which applications of AI do you think will be the catalyst, and in which job sectors do you see rapid redundancy of workers?

Bernard Hill
9 months ago
Reply to  Jules Anjim

…hopefully all the corporate, (public and private sector) bullshyte jobs which furnish a lifestyle for midwit Karens.

Prashant Kotak
9 months ago
Reply to  Jules Anjim

It’s probably my nihilist streak talking, but I think we are mere years away from a situation where there are pretty much no functions left in which human workers cannot be replaced by machine intelligences that are better, faster and cheaper. At which point, what type of human society emerges is anyone’s guess, but I cannot imagine it won’t entail a plunge into outright chaos. The catalyst is on the horizon: the so-called Q*-type enhancements, which are rumoured to be able to create simple mathematical proofs. This is the final piece of the cognitive jigsaw – not from the standpoint of understanding, but of causality. Once that happens we are in the reign of the AGIs. I think the cognitive sectors which don’t involve physicality will be the most vulnerable in the first instance – everything from accountants to coders to creatives in the arts and crafts. But eventually the taxi driver and the plumber too, in around a decade. Economists will be safe, though – after all, no AGI is going to put its name to Modern Monetary Theory.

Nell Clover
9 months ago
Reply to  Prashant Kotak

To date our deployment of AI has produced only complexity and opportunity, and to manage that complexity and opportunity new roles have been created. While I don’t doubt jobs will be lost (many coders must be sh*tting themselves), previous IT revolutions haven’t dented overall employment. Parkinson’s Law states that work expands to fill the time available. If previous job-eliminating technological revolutions are anything to go by, then we’ll soon have armies of people doing something hitherto unnecessary and, to our eyes, completely wasteful.
By example, consider business administration. This was once a tiny headcount in even the biggest organisations: a few clerks, some accountants, a typing pool and some secretaries. At the turn of the 20th century, only around 1 in 10 workers were in such roles. Today the majority of workers are in some sort of administrative role. Into the breach stepped greater regulatory control demanding more compliance workers, etc., etc. Work expanded to fill the time available.

The new work doesn’t have to be constructive in our eyes. The software revolution of the last 50 years has been spectacularly unconstructive in terms of economic productivity. The new work might even be distasteful to my eyes, and even destructive of humanity – here I’m thinking of the increasing sale and consumption of body- and mind-enhancing (?) drugs and procedures – but it seems to be an eternal constant that today’s humanity despairs for future humanity and yearns for simpler times. If you uprooted my farming great-grandfather and sat him at my desk for 35 hours a week he’d end it after a week, but I’m conditioned to it.

Saul D
9 months ago

AI is more significant than the Internet – a mere connection of computers together. Perhaps he hasn’t seen what AI is creating – from autonomous robots to film-quality video. And we’re really only 18 months in from text-only ChatGPT 3.5, which seemed so amazing as a natural-language generator.
The lesson is that models are being built at various different scales. There’s nothing ‘dirty’ about it – just a discovery of what a difference size and scale make when dealing with Bayesian statistics and neural networks.
Small models (7 billion parameters) are going to become standard on phones and personal computers as helpers for writing and creativity. Big models are just going to continue to get bigger. ChatGPT 3.5 was 175 billion parameters; they say GPT-4 is over 1 trillion, and it is now generating video which is starting to become indistinguishable from reality.
And there will be private models and competitive models built along the way – for instance for core dedicated functions like self-driving cars or generalist robotics. We have absolutely no idea what will happen when we hit 10 trillion, or 100 trillion, parameters, let alone how big or how numerous the models will be in a decade or two.

Philip Stott
9 months ago

Tying the success of Nvidia to whether AI will make any money for Google et al. is possibly missing the point.
High performance compute is already incredibly important for many industries, and will only get more so.
Nvidia makes the hardware (and it is still essentially graphics cards) that enables that compute, so they should continue to do well.