You might think that the Nobel Prize for Physics would go to a physicist. Not this year, though. As usual, the prize was shared earlier this week, but both of the winners were computer scientists. It’s as if the Olympic gold for the 100-metre dash had gone to a cyclist.
It should be said that the two laureates, Geoffrey Hinton and John Hopfield, are highly distinguished in their field. However, that field is artificial intelligence (AI), which is not usually regarded as a branch of physics.
So quite a slap in the face for the physicists. But, then, the very next day, came a second slap — this time for the chemists. The Nobel Prize for Chemistry was another shock win for computer science. One of the three winners, David Baker, has a background in biochemistry, but the other two — John Michael Jumper and Demis Hassabis — are leading AI experts.
Does that mean that the great AI-hype wave swept away the Nobels too? Or could it be that the prize-givers have simply relaxed their criteria to recognise genuine scientific achievement?
There’s a precedent for the latter. The third scientific Nobel prize is for Physiology or Medicine. Nevertheless, in 1973 it went to three zoologists: Karl von Frisch, Konrad Lorenz and Nikolaas Tinbergen. It’s a stretch to describe their work on animal behaviour as either physiology or medicine, so clearly allowances were made.
It would seem that similar levels of flexibility were shown this year. There’s no doubt that Hopfield and Hinton are foundational figures in machine learning. They deserve to be honoured, and given there’s no such thing as a Nobel prize for computer science, the physics prize had to do.
This is a difficult point and I have an ongoing discussion with my neighbour about the same type of thing: when did Sociology become Social Science? When computers got to handle all the data.
Today, apparently, you don’t have chemists slaving away over hot Bunsen burners or using litmus paper to measure acidity. Today you write a computer model to predict what colour the litmus paper will turn.
So, what about intuition? You can have intuitive thoughts but the computer model will tell you not to bother and to pack up for the day.
When the models get so complex – the climate predictions, for example – who knows whether the model is correct? Can clever computers write their own models so that they must be correct? Can computers have a bit of a joke, tell us to build lots of windmills, get rid of our cars and then say, “April Fool!”
The costly debacle of climate science should serve as a warning about the risks of shiny new objects. Instead climate hype is a major global industry.
It’s worse than that. The BS about the pandemic and global warming have become ‘truth’ because their underlying lies have been repeated so often.
It seems the climate zealots practice their favorite debate tactic here at Unheard: just declare that they disagree with as double unhook and try to have the comment rejected.
Please rephrase that in English.
Sociology became a science (in name only) when respectability was craved. Rather than introducing rigorous scientific methods, as practitioners generally lacked sufficient mental acumen to apply scientific methods, they resorted to propaganda – calling it a science.
Good point. But the problem with sociology and similar “sciences” is not that practitioners lack the acumen to apply scientific methods, it’s that scientific methods don’t work for them. They have no way to do experiments, not even via causal inference.
The scientific method was originally invented by Descartes. He based it on the cognitive processes involved in proving mathematical theorems. It is rigorous logical deduction from stated axioms. Experiments are intended to support or disprove logically derived hypotheses. Those who are not highly trained in pure mathematics have no real understanding of what it is.
Still, CD has a point. Though I’m not sure if you’re agreeing or not. How can practitioners prove, through science, anything about sociology? What methods can they apply?
The scientific method as defined by Descartes is more about the rigour of the method. It is based on the assumption that truth is entirely consistent and coherent. The insistence on consistency and coherence automatically results in objectivity. Feminism, gender ideology etc. are riddled with inconsistencies. Piaget was a pure mathematician first and then a psychologist. He applied the rigour of mathematics to psychology and is considered one of the greatest psychologists. Experiments are secondary to theory: think Newton, Einstein etc. Quantum mechanics doesn’t just use mathematics – it is mathematics – and to a great extent impossible to understand in worldly terms.
Interestingly, this year’s Nobel prize went to AI practitioners. Computers are mathematically based and apply the mathematical method. In the past the greats used their own brains: pi was calculated to 527 places in the 19th century without a calculator. In the past people memorised stories and myths perfectly. Plato in 400 BC bemoaned the advent of writing on the basis that if people could read things they would not develop their memories.
I often see comments complaining about the lack of logic and rigor in our day and age. However, the scientific method itself is not trivial, it has evolved and is quite extensive.
Early methods were mostly based on empiricism: true knowledge has to come from the senses. However, inductive and deductive reasoning – usually grounded in reason, mathematics and logic – were found to be powerful methods as well, even independent of direct observation.
Early Enlightenment thought around the time of Newton and Descartes did indeed assume that the universe works like clockwork that could, at least in theory, be completely understood. Since then, however, a great deal of criticism of the Enlightenment view that rigour and reason can be universally applied has appeared – arguably sometimes taking it too far. Still, we do know now that reality is more unruly. Logic itself was shown to have intrinsic limitations by Gödel. The role of language in (a priori) knowledge has also been extensively discussed by thinkers such as Wittgenstein, the structuralists and the post-structuralists.
I wouldn’t say quantum mechanics is pure mathematics because – unlike string theory, for example – much of it can be experimentally verified. We often say that we ‘understand’ things if we find some analogue in our daily lives that “works just like that”. QM often does not have such an analogue, but that is merely an inconvenience.
There is of course an ongoing discussion about how ‘soft’ a science can be and still be considered science, and also whether the scientific method is applied in places where it overstates its ability – where it is essentially ‘bluffing’. This continues to this day, I think. Science is also cumulative, of course. Newtonian mechanics is not wrong, but it is inaccurate and incomplete, as was shown by Einstein’s relativity. This changed much of the fundamentals of classical physics, but not so much that we don’t teach it anymore; for most engineering applications it works perfectly fine.
One of the latest big contributions to the philosophy of science came from Popper who argued that for something to be scientific it has to be at least falsifiable.
Thank you for your detailed and thoughtful response; one question though: is quantum mechanics falsifiable? I completely dispute that true knowledge comes through the senses. I believe it comes through the intellect, but maybe you are a product of contemporary materialism. I am well aware there is a contradiction within mathematics, but for the most part it is consistent and coherent. Newtonian mechanics is ‘true’ to the extent it describes what is happening locally. Relativity describes what is happening on a macro scale and quantum mechanics describes what is going on at a micro level.
Yes, quantum mechanics can be tested and is thus falsifiable. What actually happens, for example, with the so-called collapse of the wave function is a different question. This is where we still go from physics to metaphysics and philosophy. But many experiments agree with the models, even though the results can be bizarre.
I would say that mathematics is by definition coherent. But it is not some absolute truth, as it is based on axioms that are sometimes recursive. The precise meaning of logic and maths, and their relation to reality, therefore remains a subject of philosophical debate. It is, however, evidently an extremely powerful method.
As for Newton, the effects of relativity are always there also on a small scale. The effects are just really small so you typically do not notice them. You’d have to measure extremely accurately to see them. Now, I’m sure you are aware that QM and relativity do not play nice together. So something is not well-understood there.
Newtonian mechanics as a model is useful because it works, so it is true in that sense. It may not be true in the sense that relativity is more accurate, but that is not particularly relevant. When eating an apple one doesn’t consider its atomic makeup. I am not particularly au fait with quantum mechanics but am under the impression it is underpinned by probability theory rather than certainty. It is very unlikely a person will win the lottery but clearly not impossible. If quantum mechanics relies entirely on probability, it cannot be falsifiable, as any event, however unlikely, is possible.
Yes, Newtonian mechanics is obviously a well-established and usable scientific model, perhaps still the most generally applicable together with classical electromagnetism. Technically one could say that science is not really in the business of finding what things actually are at their most fundamental level. It approximates reality, whatever it is, through models, ideally with increasing success. What things are in themselves – the ontology – remains a philosophical exercise. Paradoxically, we can deduce that it is impossible ever to arrive at the absolute truth, since we are ‘trapped’ in our minds. So we always have to make assumptions, such as that the external world exists (in some dual state) in the first place. That is more or less where Descartes arrived with Cogito, ergo sum. He tried to prove the existence of God as well but, in my opinion, he failed. I think at this point one also has to look at pragmatism: yes, everything can be doubted, but it is impractical to treat all knowledge as equally useless because of this.
It is not correct to suggest that because quantum mechanics is (largely) probabilistic it is not falsifiable. Quantum mechanics is probabilistic but not completely random. In other words, it conforms to certain well-described statistical distributions, and we recover these distributions with high accuracy by repeating experiments. So we find the specific probabilistic nature with certainty, and the probabilistic nature also ‘collapses’ upon measurement. Now, a question might be whether the probabilistic nature is intrinsic to reality or a shortcoming of the theory. Einstein and many others didn’t like it and suggested that there might be ‘hidden variables’ (see the EPR experiment). However, as things stand, it seems that reality is simply fundamentally probabilistic, although there are many interpretations trying to explain why this is.
Statistical experiments can easily be done with large numbers of events measured. So, yes, probability can be checked experimentally.
Toss the coin a few million times.
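The point about repeated tosses can be sketched in a few lines of Python (my illustration, not from the thread): simulate a fair coin many times and watch the empirical frequency of heads settle toward the theoretical 0.5, which is the law of large numbers in action.

```python
import random

def empirical_heads_frequency(n_tosses, seed=0):
    """Simulate n_tosses fair coin tosses and return the fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# The empirical frequency approaches 0.5 as the number of tosses grows.
for n in (100, 10_000, 1_000_000):
    print(n, empirical_heads_frequency(n))
```

This is exactly how a probabilistic prediction gets tested: not by any single toss, but by the distribution over many.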
Probability theory is internally inconsistent. According to probability theory, the probability of a particular set of numbers being drawn in a lottery is exactly the same each draw, so there is no point in keeping the same numbers. Yet according to probability theory, eventually every single set of numbers will be drawn, so the best strategy is to keep the same numbers.

Newtonian mechanics is internally consistent and coherent, which for Descartes would have been a measure of truth. It may not be a perfect match, but it describes local events in such a way that it is both functioning and useful. It has great predictive power. It is also comprehensible. I know too little about quantum mechanics to comment on it, but it does appear that the universe can to a great extent be mathematically modelled.

Are there any internally coherent and consistent theories in sociology derived from stated axioms? Defining human nature would be a good starting point, followed by the extent to which nature and nurture affect behaviour. The Bible is not written in scientific language, but its understanding of humanity in general and human nature in particular is profound. Sadly, the reading of profound literature is no longer valued in western society and has been almost entirely removed from schools. Previous generations would have attended church, where they listened to readings from the Bible, and learnt large chunks by heart at school, as the development of memory and ‘improving’ the mind was highly valued. Jordan Peterson’s lectures on the Old Testament are incredible and available on YouTube. He identifies how the story of human nature is essentially recounted there.
Sociology is just a process for reinforcing your own prejudices, dressed up as a pseudo-science so you can use it as a stick to beat people with.
Sorry Aphrodite, if the ‘logically derived hypotheses’ do not agree with the experiment, you know which one has to go.
It doesn’t matter how beautiful your theory is, it doesn’t matter how smart you are. If it doesn’t agree with experiment, it’s wrong. In that simple statement is the key to science. – Richard Feynman
I don’t dispute that. Richard Feynman was against hanging onto useless theories – a tendency amongst older academics and practitioners who cannot grasp the new. Perfectly understandable, really. Newtonian mechanics is not a useless theory, and I have absolutely no doubt that Richard Feynman would never have suggested abandoning it, but that does not mean a more comprehensive theory with greater predictive power should not be sought.
A response to Feynman’s quote (not mine): ‘Feynman said that once.’
I do agree with this to a certain extent. But consider this – the wave theory was unable to explain things like the Photoelectric Effect and Compton Scattering, whereas the particle theory (by which I mean the existence of photons) couldn’t explain interference and diffraction.
Well, aren’t both the theories wrong then?
Ideally, there should be one theory that explains all of this. Then why do we take both of these to be true?
Isn’t climate (weather) prediction famously non-linear, and therefore complex? Butterfly-effect and all that …
There are many ways to acquire and test knowledge. Not every method is equally robust. Almost no experiment tests its hypothesis with 100% certainty but some experiments are close to that while others are not. Some methods cannot be tested or repeated at all, or only to a limited degree. Then the question arises if it is scientific at all. Moreover, there are a lot of pitfalls such as confounding variables and all sorts of biases. Correlation is not always causation.
A good scientist knows about this and what the limitations of their methods are, and should discuss these limitations extensively. The problem is that, even if they do, journalists, policymakers and thus the general population will often only look at the end result without any of the context.
It’s interesting that when relying on computer models alone, scientists have found that predictions have tended to be too conservative. They are accurate with respect to average changes, but tend to under-estimate the extremes. I’ve come across scientists making this observation many times over the past few years; it’s become consistent.
Here’s an example of recent evidence of this tendency from a study published January 2024:
https://journals.ametsoc.org/view/journals/clim/37/1/JCLI-D-23-0492.1.xml
If the Britanica definition of Physics is “the science of matter, motion, and energy that deals with the structure and interactions of the observable universe”, then the award should go to the Democrat Party for convincing almost half of the population that Kamala Harris should become the President of the United States, despite having immense observable deficiencies in subjects that matter.
Tut tut, no need to politicize science. Leave that to Scientific American, which has endorsed Kamala Harris because (among other reasons) her platform “increases tax deductions for new small businesses”.
Science, or sciencey stuff, has been politicized as far back as one wishes to look. Eugenics was considered settled science and embraced by the best and brightest for decades, and led to predictable, tragic results. Biology in the USSR was hijacked by the apparatchik scientist Lysenko, who criminalized Darwinian evolution. The most educated nation in Europe, Germany, embraced the banning of “Jewish science” and rejected the theory of relativity. And then we have climate science, strongly influencing public policy while suppressing evidence that doesn’t support its adherents’ beliefs. So wisecracking about the Democrat party machine is mild by comparison.
These are the same Nobel people who saw fit to honor the likes of Barack Obama barely a few months into his presidency, so this new development should not be surprising. Also, it’s not hard to recall that the use of Ivermectin on people won a Nobel some years back, before the drug was more recently recast as a horse dewormer.
Singer-songwriter Bob Dylan receiving the Nobel prize for literature also comes to mind. And psychologist Daniel Kahneman winning the “Nobel” prize for economics. And climate alarmist Al Gore winning a peace prize.
There’s no rigor to the awarding of Nobel prizes by these ersatz committees. The prizes are not meaningless, but they are not really meaningful either.
No, it is definitely not the same people, since the different prizes are awarded by different committees. While the prizes share the same monetary source (the wealth left behind by Alfred Nobel), the people who decide the winners are experts in their fields. This is even more true of the Peace Prize, which is awarded by Norwegians, not Swedes like the other Nobel Prizes.
Experts in their fields?
Yes, for the Nobel Prizes apart from Literature and Peace.
Giving the Nobel to Obama was a total disgrace.
Well presented and insightful. The “distraction by the latest shiny objects” is an old human trait. Managing that trait is vital, especially now.
I stopped paying attention to the Nobel Prizes when they gave the peace prize to a US President who then went on to launch the largest drone strike campaign in human history.
Well, you could have stopped paying attention to the Nobel Prizes in 1906 already, when they gave the peace prize to a US President who had previously played the military imperialist with the “Anschluss” of the Philippines and, after that, with the extortion of Colombia, provoking the secession of Panama from it to consolidate US dominance in the region. Why didn’t you?
Let’s get the views of an actual physicist, Sabine Hossenfelder, on this, rather than the boy Franklin’s.
https://youtu.be/dR1ncz-Lozc?feature=shared
She’s a funny old bird, though. Gets at some good points, but always stops short of reaching conclusions her Germanic brain can’t cope with.
Yeah, she has a funny accent too.
To be fair, Franklin has the Nobel for Nit-Picking.
Are the stars small lights on an inverted bowl over us (with a few wanderers controlled via cog-like mechanisms)?
Or giant nuclear fusion furnaces billions of miles away?
The first explanation served us well for hundreds of years, enabling navigation, timekeeping and so on.
But as measurements became more accurate, the number of cogwheels required and the complexity increased; it became hard to cope with, and a new paradigm was required.
With AI, our ability to handle ever more complex models (some with billions of parameters) increases, and hence the need to reconsider the basis of what we are doing decreases.
Some say science seems to be in a bit of a crisis: most innovation has shifted from the material to the abstract digital world, e.g. data, AI etc. Meanwhile, the most fundamental science, physics, seems to be doing something similar: endlessly rehashing purely abstract models that cannot be tested.
Before blaming the limits of our mental capacity, I would suggest that we simply do not have the same fire for true innovation as in the 20th century. The world wars and the Cold War incentivized a lot of technological and scientific progress. As the Cold War came to a conclusion, the political and economic consensus shifted significantly. From this point Cauwels et al., for example, observe a decline in the physical and life sciences in a 2022 paper.
Perhaps it is a coincidence, but this is the time when market fundamentalism and financialization slowly but surely bureaucratized universities and the infrastructure around grants. Now heterodox thinking is often discouraged even in the hard sciences, and the academic environment of relatively low pay and unstable employment is unattractive to talent as well. On the flip side, big tech seems a perfect match for our speculative, PR- and hype-driven economy. But how much of it will turn out to be actual progress in the end? Or is it a sign that we are in something of a postmodern dark age?
Progress in science is like the progress in the fastest running of the 100-meter dash. When the times were slower the improvements were larger, but as the times become faster the improvements are smaller. They seem to be asymptotically approaching a limit.
Physics and chemistry have been improving their theoretical understanding for many years, so the improvements are smaller than in past days.
AI, a branch of mathematics, is in its early stages, so the improvements come fast.
It sounds somewhat ‘logical’, but a narrative like that is not actually so easy to prove. Ted Modis did some work on the “rate of change” in knowledge and argued it follows a pattern similar to entropy (its derivative, to be precise). Because of this you get exponential improvement halfway through the development of a certain area, but not at the beginning or the end. However, there is one thing: paradigm shifts. One unforeseen revolutionary finding can open the door to entirely new possibilities for research. I think that is why at multiple points in history (directly after Newton, for example) we believed we had discovered much of what could possibly be known, only to find that we didn’t know anything yet.
AI makes the inner workings of nature more impenetrable the more the technology grows in power. Alienation on stilts.
AI, ML, algorithms – whatever you want to call them – are not standalone computer programs you load, push a button, and out comes the answer, “42”.
There is a ton of work in developing the models, and subject matter expertise is needed to get interesting conclusions from them.
What you put in determines what you get out. That said, we can at best think in four, five, maybe six variables on a given day, if you are lucky. These systems can crunch an enormous number of variables.
I just don’t think they could have done what they did without significant subject knowledge.
As a former chemist, the award of the chemistry Nobel to two AI scientists doesn’t trouble me at all. Science is built on the application of new ideas and, very importantly, new technologies to existing fields.
If it helps the author at all, John Hopfield graduated in physics (not computer science), and Hinton graduated in experimental psychology.
One internet definition of chemistry, which I think is helpful, is “the branch of science that deals with the identification of the substances of which matter is composed; the investigation of their properties and the ways in which they interact, combine, and change; and the use of these processes to form new substances.” The current Nobelists applied new techniques and ideas to address these questions, just as the insights of physicists like Sommerfeld and Dirac led to quantum mechanics which helps explain the nature of atoms and bonds (and perhaps even the biological phenomenon of consciousness).
I am only grateful that these Nobels didn’t go to woke scientists for their work elucidating how racially-fueled power structures contributed to our flawed understanding of reality.
AI knows nothing.
The prize winners are being honored for guiding the models to produce intellectually useful output.
I am a software engineer, an occupation popularly thought to be extinct due to AI.
I don’t believe this is true. In future we will have AI prompt engineers instead, trained in the art of providing rigorous prompts (or specifications) for AIs that do not have ambiguous edge cases – something only people with a software developer mindset can provide.
I think even proper scientists will need to employ a prompt engineer to translate their ideas into usable input for an AI.
I write as someone who once worked with an ex-CERN physics PhD to optimise a chemical and power plant (Wilton International) to meet its power and high-pressure steam obligations, on the lowest possible power budget. He produced a mathematical model, and I turned it into code for him, but it still required a narrative. It is possible that an AI could have ingested the model just as equations, but I think we’re many years away from that.
That said, it would be nice if we could have Nobel prizes for computer science, without detracting from the achievements of the natural scientists.
The real problem starts when the Nobel Prizes are awarded to the AIs….
Probably terrified another Jew would win.
Hinton has a partly Jewish ancestry. Also amongst his ancestors is George Boole.
If it was awarded to a traditional scientist, who would the top candidates be?
I’m not familiar with the players at the cutting edge of Physics or Chemistry these days.
Blimey, can’t see the logic in that.
Very true. Well stated.
Exactly. This year’s winners should be in the Mathematics category, which is what AI and computer science are about.
When Oprah Winfrey wins a Fields Medal you’ll know the game is up.
Like everything else in the last 20 years, the Nobel Prize organization has been bought by the “elites” for their political and business objectives. The prize is a nothing burger and certainly is not awarded for merit. Another one bites the dust!
There is already a widely recognised award for computer science – the ACM Turing award, sometimes referred to as ‘the Nobel prize for Computer Science’. The Fields medal for maths is similarly regarded.
It does seem as if the impact of computer science has become too significant for the Nobel committee to ignore. But hijacking existing prizes does not seem a proper way ahead. Let’s hope that the committee finds a better way to recognise computer science (and maths).
How long before the prize goes to an AI?
Have the Nobel people decided to give themselves some latitude in a way that they shouldn’t?