A scientist, searching for her blind spot. (Illustration by GraphicaArtis/Getty Images)


May 27, 2021

You know the old joke: a man goes to the doctor and is told he only has a month to live.

“Surely not!” he gasps. “I want a second opinion!”

“Alright then,” says the doctor. “You’re hideously ugly, too.”

The misunderstanding arises because the doctor is arrogant enough to think her patient trusts her as an expert on multiple issues, when the patient was, in fact, worried about error. The doctor might have seen a positive test result for a killer disease and taken it at face value, without considering that the disease is vanishingly rare, so the test result was likely a false positive. That is, the patient might be concerned that the doctor’s judgement, because of her failure to consider the “base rate” of the disease, had been subject to bias: skewed, in other words, in a specific direction.
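To see just how badly neglecting the base rate can skew a judgement, here is a minimal sketch using invented numbers (a disease affecting 1 in 1,000 people, and a test that catches 99% of true cases but wrongly flags 5% of healthy ones) of what Bayes’ theorem says a positive result really means:

```python
# A minimal sketch of base-rate neglect, with invented numbers:
# the disease affects 1 in 1,000 people; the test detects 99% of
# true cases but also wrongly flags 5% of healthy people.
prevalence = 0.001          # P(disease), the base rate
sensitivity = 0.99          # P(positive | disease)
false_positive_rate = 0.05  # P(positive | no disease)

# Bayes' theorem: P(disease | positive) =
#   P(positive | disease) * P(disease) / P(positive)
p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.1%}")
# About 1.9% -- nowhere near the certainty the raw result suggests.
```

Taking the positive result at face value amounts to ignoring the first line of that calculation.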

Alternatively, maybe the patient was concerned that the doctor had carelessly misread the test results, or even read those of a different patient. Another doctor, even one of similar skill, would be unlikely to make the exact same mistake, hence the request for a second opinion. So rather than bias, the patient might have been worried about noise: the tendency for human judgments to vary in unwanted, unpredictable and arbitrary ways.

The first type of error, bias, is well-known, thanks to the work of Daniel Kahneman, who is among the most famous psychologists in the world. As he chronicled in his mega-blockbuster popular-science book Thinking, Fast and Slow, Kahneman spent decades with his colleague Amos Tversky cataloguing all the ways human thinking can go off the rails: not just the “base rate neglect” that we saw above, but all sorts of other biases. These include “anchoring” — best explained by the sales move where a shop gives an item a super-high price and then offers you 50% off, even though half that inflated price is still more than you’d have paid had you never seen the initial figure. There’s also “framing”, where asking a question in different ways can affect people’s answers (would you choose to have surgery that has a “10% death rate”? What if I told you it had a “90% survival rate”?). For these and many other contributions, Kahneman remains the only psychologist ever to have won a Nobel Prize, in 2002.

There was, however, a certain irony in Thinking, Fast and Slow. Whereas the biases and heuristics that Kahneman identified have been borne out extremely well by subsequent studies, a good chunk of the rest of the book, where Kahneman talked about other scientists’ work, hasn’t. For instance, Kahneman devotes a chapter to a certain kind of social psychology study where barely noticeable “priming” stimuli are shown to participants in lab studies, with the intention of changing their behaviour. For example, one set of researchers claimed that showing people a screensaver with banknotes on it made them less likely to want to help a struggling student — because it “primed” the idea of money, and thus selfishness, in their minds.

Long story short: those studies were weak, and other scientists can’t find similar results when they try to re-run the experiments. There’s plenty of evidence for priming in language — people react faster when asked to decide which of “CHAIR” and “CHIAR” is a real word if they’ve just seen the word “TABLE”, compared to if they’ve just seen a word unrelated to furniture. But the type of priming study where a barely noticeable prime makes major, measurable changes to people’s subsequent actions? Not so much. And yet, here’s how Kahneman, in Thinking, Fast and Slow, summarised his views on that kind of priming research:

“[D]isbelief is not an option… You have no choice but to accept that the major conclusions of these studies are true. More important, you must accept that they are true about you.”

Perhaps there isn’t a name for the specific bias on display here, but it afflicts a great many scientists. They put far too much stock in individual scientific studies, which we know are subject to their very own array of biases and other errors, without double and triple-checking whether the results are robust. To his credit, Kahneman later issued a mea culpa, admitting that he’d gotten carried away about the priming studies, and should have been more sceptical.

Among the reasons that scientific studies can end up unreliable is simple chance. Numbers tend to fluctuate each time you take a sample (of people, or temperatures, or particle densities, or anything else), and the results in your sample might randomly be close or far from the “truth” of the matter. It’s all too easy, if you aren’t careful, to seize on a fluke result that’s very different from the “true” effect as you run your statistical analysis — even if there’s no bias at play. It’s all too easy, in other words, to capitalise on statistical noise. But a study suffering from too much noise fails to tell us about reality, and instead tells us more about the random quirks of its particular dataset.
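A toy simulation, with invented numbers rather than anything from the book, makes the point: even when there is truly nothing to find, some samples will drift far enough from the truth to look like a result.

```python
# A toy illustration (invented numbers) of statistical noise: even when
# the true effect is exactly zero, each fresh sample gives a different
# estimate, and occasionally a fluke looks like a "finding".
import random

random.seed(0)
true_effect = 0.0   # in reality, there is nothing to detect
sample_size = 20

estimates = []
for study in range(10):
    # each "study" draws its own sample and reports the sample mean
    sample = [random.gauss(true_effect, 1.0) for _ in range(sample_size)]
    estimates.append(sum(sample) / sample_size)

print([round(e, 2) for e in estimates])
# The ten estimates scatter around zero; the largest of them can sit
# impressively far from the truth purely by chance.
```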

Which brings us to Kahneman’s latest project. As we noted above, human judgements bounce around in the same noisy way as statistical samples: one doctor might give one diagnosis (a month to live!) to a patient, whereas another, given the same information, might say something quite different (you have years ahead of you!). Five separate judges might give five very different sentences to criminals who’ve committed the same crime. Different examiners might give hugely different grades to the same essay. And so on.

Teaming up with the legal scholar Cass Sunstein (of Nudge fame) and the management researcher and consultant Olivier Sibony, Kahneman has written a whole book on this phenomenon entitled Noise: A Flaw in Human Judgment. The authors argue that, whereas biases are regularly invoked to explain mistakes, far fewer people understand that errors also come about through sheer noise. We shouldn’t be too puzzled as to why we are biased towards discussing biases: they are much more fun to think about than noise (“aren’t people silly for not knowing about base rates!”), and Kahneman has devoted almost his entire career until now to explaining them. His latest book is an attempt to redress the balance.

The three authors build a taxonomy of different kinds of noise, just as Kahneman and Tversky did for biases. They talk about “system noise”, where supposedly interchangeable human judgements end up being far more varied than we’d want. Some of this system noise is caused by bias: if some judges always give harsher sentences and some are always more lenient, they’re individually biased. But on aggregate they make the system noisy. As the authors write, it’s no good to say that on average a fair sentence is meted out if the system is routinely over- and under-sentencing people. There might also be idiosyncrasies in how judges approach individual cases, and even the same judge on two different occasions might give very different sentences, not for any good reason but because of the mood they happened to be in that day.
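The arithmetic here is simple enough to sketch with invented numbers (these are not figures from the book): two judges, one consistently harsh and one consistently lenient, whose sentences average out to something fair while the system as a whole stays wildly inconsistent.

```python
# A sketch (invented numbers) of how individually biased judges make
# the whole system noisy: one always sentences high, one always low,
# so the average looks fair while the spread stays enormous.
fair_sentence = 5.0                        # years; the "right" answer
harsh_judge   = [7.1, 6.9, 7.0, 7.2]       # always about two years over
lenient_judge = [3.0, 2.9, 3.1, 2.8]       # always about two years under

system = harsh_judge + lenient_judge
mean = sum(system) / len(system)
spread = (sum((s - mean) ** 2 for s in system) / len(system)) ** 0.5

print(f"average sentence: {mean:.1f} years (looks fair on paper)")
print(f"spread across the system: {spread:.1f} years (very noisy)")
# The same crime can draw three years or seven, depending entirely on
# which courtroom the defendant happens to walk into.
```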

Alas, unlike the menagerie of psychological biases, the different noise types tend to blur into one another — and the bland names they’re given (as well as “system noise” there’s “level noise”, “pattern noise”, et cetera) don’t do much to help. Despite its entertaining subject, Thinking, Fast and Slow was something of a slog, and many of the millions who bought it might recall never reaching the end. Noise is similar. It purports to be aimed at the general public, but the style is drab and tedious (try not to let your eyes glaze over during the chapter called “The Mediating Assessments Protocol” — or even just when reading its title). If the book itself were a noise, it would be a drone.

Also dismaying is the discovery that Kahneman doesn’t appear to have learned the lessons from Thinking, Fast and Slow: he and his co-authors cite a few very unstable-looking studies, including a famous, but heavily criticised, study on Israeli judges giving harsher sentences when they’re hungrier (this one is also mentioned in the publicity for the book), and a very ropey-looking study about calorie labels on food packaging. On top of that, statistically minded readers have discovered some howlers in the book’s discussion of correlation and causation.

But dullness and sloppiness aside, are the authors correct in their main argument, that noise is a pressing and badly underestimated problem? Is Kahneman right to try to balance out his older research with this new focus? Yes and no.

On the one hand, their claims that nobody thinks about noise — Sunstein told one interviewer that “we think we’ve discovered a new continent” — are contradicted by the book itself. They themselves discuss decades-old noise-reduction attempts, such as when the US introduced mandatory sentencing guidelines in 1984 (which admittedly are now only advisory, having been found unconstitutional by the Supreme Court in 2005). Many other countries have various levels of mandatory sentencing, specifically in an attempt to iron out judge-to-judge inconsistencies.

On the other hand, there clearly is a lot of noise in many of our systems — and between different systems too, as the erratic response to the pandemic from country to country has shown. Anything that draws attention to the gap between the stated intention (systems that are consistent, on-target, and fair for all) and the actual outcome (systems that are not just biased but also suffused with unwanted randomness) is no bad thing. But many of the authors’ suggestions for dealing with noise — guidelines of the kind mentioned above, along with checklists; taking the average of many different judgements; improving the skill levels of those doing the judging — are either crashingly obvious or already widely used.
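The averaging suggestion, at least, rests on solid statistical ground: independent errors tend to cancel, so the noise in an average shrinks roughly with the square root of the number of judges. A toy sketch, with invented numbers, of that effect:

```python
# A toy sketch (invented numbers) of why averaging many independent
# judgements reduces noise: the scatter of the averaged judgement
# shrinks roughly with the square root of the panel size.
import random
import statistics

random.seed(1)
true_value = 10.0
judge_noise = 2.0   # each judge's estimate wobbles by about 2 units

def averaged_estimate(n_judges):
    judgements = [random.gauss(true_value, judge_noise) for _ in range(n_judges)]
    return sum(judgements) / n_judges

for n in (1, 4, 16, 64):
    # run many "panels" of n judges and measure how much their averages scatter
    panels = [averaged_estimate(n) for _ in range(2000)]
    print(f"{n:>2} judges: spread of the averaged judgement = {statistics.stdev(panels):.2f}")
# The spread falls from roughly 2.0 with a lone judge to roughly 0.25
# with a panel of 64 -- useful, if hardly a new discovery.
```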

Still, introducing more people to the concept of noise, and what can be done about it, is worthwhile. It’s a shame you’d have to screw up your eyes, and skip a few of the draggier chapters, for Noise to serve this useful purpose. The subject deserves to be illuminated with good-quality studies and evidence, not the very variable — one might even say noisy — list of references that the authors adduce.

It’s always good to be reminded that the world is a complex, noisy, often-ironic place. Even complicated systems with rulebooks and procedures can produce unfair and inconsistent outcomes. Even well-intentioned people’s judgements can vary dramatically in unintended ways. Even pop-science books that earn seven-figure advances can be extremely boring. Even scientists who’ve spent their careers building good-quality evidence can lower their standards when discussing fields other than their own. And — as I’m sure Kahneman himself would be the first to admit — even world experts in human reasoning can make silly mistakes. It isn’t just at the doctor’s surgery where it might be worth asking for a second opinion.


Stuart Ritchie is a psychologist and a Lecturer in the Social, Genetic and Developmental Psychiatry Centre at King’s College London
