The Dunning-Kruger effect is one of those psychological findings that has a life far outside the confines of academia. It’s the idea that people who are stupid, or incompetent at some skill, usually think they’re cleverer or more competent than they are, because they’re too stupid or incompetent to know how stupid or incompetent they are.
A certain kind of person (um, I may have mentioned it once or twice) absolutely loves it. It gets trotted out every few minutes on Twitter, usually in the service of calling people stupid when they disagree with you politically.
It’s a nasty but apparently scientific little put-down, perfect for almost any online debate. David Davis says something stupid about Brexit? It’s not just that he’s wrong — it’s Dunning-Kruger! Trump saying something insane about Covid-19? It’s not just that he’s wrong (and totally misinformed and possibly mad) — it’s Dunning-Kruger! And its use is not limited to random people on Twitter — it’s an absolute go-to across the media.
It wasn’t created as a weapon for internet fights: it was meant to explain a real-world phenomenon. Its discoverers were apparently inspired to look for it by that story of a bank robber who covered his face in lemon juice, thinking that it would make him invisible, because lemon juice can be used as invisible ink.
But, in my experience at least, it has now mainly become an insult. And such a useful one, because in one, simple, clever-sounding phrase, you can simultaneously call someone wrong and stupid and too stupid even to realise that they’re wrong. If the Dunning-Kruger effect didn’t exist, then someone would have to invent it.
It’s worth noting two things, though. First, a lot of the time, when people talk about it, they get it wrong. Secondly, and perhaps more importantly, it’s looking very plausible that Dunning-Kruger doesn’t exist — or at least is much less interesting than most of us think.
Let’s talk about that first point first. Usually, people interpret Dunning-Kruger as meaning that stupid people think they’re actually geniuses. But that’s not what it’s about.
In their original 1999 paper, “Unskilled and unaware of it: How difficulties in recognizing one’s own incompetence lead to inflated self-assessments,” David Dunning and Justin Kruger noted that one’s actual ability at something (they looked specifically at the ability to be funny, to use correct grammar, and to understand logic) was only loosely correlated with one’s assessment of one’s own ability.
But more than that, they found that the gap between actual ability and self-assessed ability was greater in people with low actual ability. That is: people who are bad at something are also bad at knowing how good they are at that thing. Specifically, they tend to think they're better than they are.
This was, they said, because “their incompetence robs them of the metacognitive ability to realise it”. That is, they were too bad at the task to know what doing well at it would look like: they were so unfunny that they couldn’t recognise how unfunny their own jokes were, and so on. Or — let’s face it, this is what we’re all thinking — that people too stupid to understand Brexit or Covid are also too stupid to understand that they don’t understand. That is the Dunning-Kruger special sauce: the theory they use to explain the observed facts.
But it’s not the same as saying that the stupidest people think they’re all Einsteins. Just that they get their ability more wrong than the cleverest people do. On average, the least competent people rate their ability as lower than the most competent people rate theirs; they just rate it higher than it actually is. In short, people imagine the Dunning-Kruger effect as a dramatic graph in which the worst performers rate themselves as highly as the best; the real graph is much less dramatic.
It may be getting worse for Dunning-Kruger, though. A paper out in the journal Intelligence suggests that it may not exist at all — or at least that it may be explained in large part as a statistical artefact. The paper implies that two well-known phenomena — the “better than average” effect, and “regression to the mean” — could explain it, perhaps entirely.
[Note: insofar as I understand what follows, it is thanks to the marvellous Kevin McConway, professor emeritus of statistics at the Open University. All errors are entirely my own.]
Regression to the mean goes like this, using IQ as an example. Imagine that everyone’s real IQ and their self-assessed IQ were precisely the same: someone who estimates their IQ is 99 would have a real IQ of 99; someone who estimates it at 100 would have a real IQ of 100. If you draw them on a graph, with the objective IQ on the X (across) axis and the self-assessed on the Y (up) axis, the line would be a perfect 45° angle: if it’s 99 on your X axis, it’ll be 99 on your Y axis; same for 115 or 87 or anything. They would be perfectly correlated.
We know, though, that the two aren’t perfectly correlated. Your line will not be exactly at 45°; in fact it will be shallower, because when your line goes one to the right, it will, on average, go less than one up. The two extremes will be closer — will regress — towards the mean. What that means is that people will guess themselves closer to the average than they actually are. People who have real IQs above 100 will, on average, underestimate their scores; people who have real IQs below 100 will, on average, overestimate them.
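The regression-to-the-mean point can be seen in a quick simulation. This is purely an illustrative sketch, not real data or the paper's model: the slope of 0.5 and the noise level are arbitrary assumed values, standing in for the fact that self-assessment tracks true ability only loosely.

```python
# Illustrative sketch, not real data: self-assessed IQ modelled as an
# imperfectly correlated (slope < 1), noisy function of true IQ.
import random

random.seed(42)

N = 100_000
SLOPE = 0.5  # assumed value; any slope below 1 produces the same pattern

true_iq = [random.gauss(100, 15) for _ in range(N)]
# Self-assessment tracks true IQ only loosely: shallow slope plus noise.
self_iq = [100 + SLOPE * (t - 100) + random.gauss(0, 10) for t in true_iq]

def mean(xs):
    return sum(xs) / len(xs)

# Average self-assessment error for people well above the mean...
high = [s - t for s, t in zip(self_iq, true_iq) if t > 115]
# ...and for people well below it.
low = [s - t for s, t in zip(self_iq, true_iq) if t < 85]

print(f"above-average people misjudge themselves by {mean(high):.1f} points")
print(f"below-average people misjudge themselves by {mean(low):.1f} points")
```

Run it and the high-ability group's average error comes out negative (they underestimate themselves) while the low-ability group's comes out positive (they overestimate), with no psychology involved at all; it falls straight out of the imperfect correlation.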
Now, we bring in the “better than average” effect. It is exactly what it sounds like: most people, when asked to assess their ability at some task, will estimate themselves to be better than the average person.
To use the same example: on average, people estimate their IQ at 115. However, since average IQ is literally defined as being 100 — if people start doing better on the test, the scores are recalibrated so that the average remains 100, which is exactly what happens in the Flynn effect — that can’t be right. On average, we overestimate our own intelligence.
But look what that means for the Dunning-Kruger effect. That line we drew on our imaginary graph when we were talking about regression to the mean — it’s still the same shallow slope, but now it’s higher up the graph. The average person’s IQ is still 100, but the average person estimates their IQ at 115.
As you move to the right — as actual IQ scores go up — so do estimated IQ scores, but more slowly, because regression to the mean keeps the slope shallow. So as you move to the right on the graph, people’s assessments of their own IQ get steadily less wrong. As you move left, the real scores drop off rapidly, but because of regression to the mean, the self-assessed scores drop more slowly. So as you move to the left on the graph, people’s assessments get wronger and wronger.
The paper shows that Dunning-Kruger-compatible results can be produced from simulated data with nothing more than these two effects. A similar result was found way back in 2002. This doesn’t rule out the existence of the effect as postulated by the original paper, and perhaps it’s real in some contexts — “it is possible that the Dunning-Kruger effect may be identified for some cognitive abilities not measured in this investigation”, as the authors of the Intelligence study put it. But most of the time we don’t need it to explain anything.
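The same kind of simulation can be sketched in a few lines. To be clear, this is not the Intelligence paper's actual code: the slope, noise level and flat 15-point "better than average" shift are assumed illustrative values. Combine the two effects, split people into quartiles by actual score the way Dunning and Kruger did, and the classic pattern appears by construction.

```python
# Illustrative sketch only -- not the Intelligence paper's actual simulation.
# The slope (0.5), noise (10) and "better than average" shift (+15) are
# assumed values chosen to make the pattern visible.
import random

random.seed(0)

N = 100_000
people = []
for _ in range(N):
    true_iq = random.gauss(100, 15)
    # Imperfect correlation (regression to the mean) plus a flat
    # "better than average" overestimate of 15 points.
    self_iq = 115 + 0.5 * (true_iq - 100) + random.gauss(0, 10)
    people.append((true_iq, self_iq))

# Split into quartiles by *actual* ability, as in the original 1999 paper.
people.sort()
q = N // 4
bottom, top = people[:q], people[-q:]

def mean_gap(group):
    """Average self-assessment minus average actual score."""
    return sum(s - t for t, s in group) / len(group)

print(f"bottom quartile overestimates by {mean_gap(bottom):.1f} points")
print(f"top quartile overestimates by {mean_gap(top):.1f} points")
```

The bottom quartile overestimates itself far more than the top quartile does, and the bottom quartile's average self-assessment still comes out lower than the top quartile's, exactly as the real Dunning-Kruger data shows; yet no "metacognitive deficit" was programmed in anywhere.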
To some degree, you might think this is all just an interesting but not greatly relevant piece of academic trivia. I think, though, it matters, for two reasons.
One is that Dunning-Kruger itself gets so much attention, at least by the standards of psychological discoveries. It won an Ig Nobel — the comedy mirror of the Nobel prizes — in 2000, and had an opera written about it. An academic pointed out to me that there’s a paternalistic aspect to it: it implies that “my skills license me to think for you”, as he put it. We all point at the stupid people who believe the wrong things, and we say: these people are idiots, it’s OK to ignore them, they’ve just got a nasty case of the Dunning-Krugers.
It’s important that we try to believe in things that are real; so if it’s not real, we shouldn’t believe in it. The Dunning-Kruger effect is a high-profile thing which lots of people believe, and I think we ought to be careful to make sure that it’s real. The fact that it’s such an easy way of accusing your political enemies of being stupid and wrong is an extra reason to be wary of it — it feels good to say that someone is a Dunning-Kruger case, rather than engage with the real reasons they might believe the things or behave in the ways that they do.
It’s also yet another reminder that there are deep-seated problems in the social sciences — bad statistical and experimental practice — and, consequently, a lot of the things that many of us believe are, in fact, false. It’s pervasive, almost endemic. A few days before I became aware of this paper, a thread was going around social science Twitter, looking at the problems the social sciences face. The key takeaway was probably that barely half of all papers — 54% — can be expected to “replicate”: that is, if someone were to run the same experiments again, only about half of them would find the same results.
These are findings which influence real-world policies. Millions of dollars were spent by the Obama administration on policies based on worthless research by the food scientist Brian Wansink, for instance. Growth mindset is on pretty dubious ground, replication-wise, but influences education around the world. Companies and governments spend millions on implicit bias training to reduce racism, with little to no evidence that it works or that the phenomenon is even relevant to actual, real-world racism.
Dunning-Kruger is part of this same milieu of half-understood and poorly evidenced claims about the human psyche. And people do base business decisions on it (“The best employees will invariably be the hardest on themselves in self-evaluations, while the lowest performers can be counted on to think they are doing excellent work”) and offer advice for how to minimise it in themselves and others (“Always be learning, and promote learning … seek mentors and experts”). These decisions and pieces of advice are based on a misunderstanding of a thing that may or may not actually be real.
As I said: the Dunning-Kruger effect is immensely useful as a rhetorical device; it points to something that we instinctively feel exists, some a-little-knowledge-is-a-dangerous-thing phenomenon in our heads. And it’s such a tempting insult to use. But it seems very likely that it is not as relevant as we would like to think. Rather beautifully, it seems that the people who know the least about Dunning-Kruger are the most likely to overestimate its value.