Entrenched views? The Science March in Boston. Credit: Scott Eisen / Getty


May 14, 2019

When Michael Gove said that the public “have had enough of experts” it was seized upon by anti-Brexiteers as proof that the Leave campaign was an exercise in know-nothing populism. It was also a convenient way of linking Brexit to the Trump phenomenon in the US. On both sides of the pond, the idea is that we’re abandoning sweet reason for the politics of fake news and emotional manipulation.

Ironically, it was those attacking Gove who were embracing ignorance and/or mendacity, because they were quoting him out of context. What he actually said was that “the people in the country have had enough of experts… from organisations with acronyms saying they know what is best and getting it consistently wrong”.

I should say that’s a slightly edited version because as soon as Gove uttered the word “experts”, his interviewer indignantly interrupted him – as if any criticism of experts were an unthinkable outrage. (Here’s the clip so you can judge for yourself.)

There are, of course, many examples of experts getting things catastrophically wrong. For instance, the planners and architects who made such a mess of our cities; the nutritional advisors who identified fat and not sugar as the big threat to public health; the educationalists who decided that phonics was an outdated way of learning to read; the advocates of various military interventions in the Middle East; the economists and bankers who didn’t see the global financial crash coming (or the Eurozone crisis).

But am I being over-selective here? All human beings, no matter how knowledgeable, make some mistakes. That doesn’t necessarily mean that expertise itself is the problem – or even that politicians and the media are excessively deferential to experts.

Except that there is evidence that in some circumstances, the accumulation of knowledge is at the heart of the matter. Last year, I wrote about the research showing that the most scientifically literate individuals had the most entrenched and inflexible positions on science-based issues such as climate change – and were least open to evidence contradicting their beliefs. In March, I featured a study showing that the most politically polarised Americans tended to be the most educated.

This may seem counter-intuitive, but it really isn’t. The ‘epistemic bubbles’ that let in only certain facts and opinions are not traps set for us by other people; rather, we actively construct them for ourselves, choosing to believe what we want to believe. It’s not surprising that those with the greatest access to information and the greatest ability to use it selectively are able to build the thickest bubbles. Indeed, those with the most intellectual resources at their disposal can reinforce what they build with theoretical frameworks – i.e. abstract systems to guide one’s thoughts along pre-determined paths.

In The Atlantic, David Epstein writes about the work of Philip E Tetlock, who has spent decades studying the expert class and the predictions it makes:

“Tetlock decided to put expert political and economic predictions to the test… he collected forecasts from 284 highly educated experts who averaged more than 12 years of experience in their specialties… The project lasted 20 years, and comprised 82,361 probability estimates about the future.

“The result: The experts were, by and large, horrific forecasters. Their areas of specialty, years of experience, and (for some) access to classified information made no difference.”

Tetlock did, however, find a difference in performance between super-specialists, with deep expertise on a particular topic within a particular discipline, and generalists, who worked across disciplines and integrated different forms of sometimes contradictory knowledge:

“Eventually, Tetlock bestowed nicknames (borrowed from the philosopher Isaiah Berlin) on the experts he’d observed: The highly specialized hedgehogs knew ‘one big thing,’ while the integrator foxes knew ‘many little things.’”

Tetlock’s key finding is that foxes tend to make more accurate predictions than hedgehogs.

“Incredibly, the hedgehogs performed especially poorly on long-term predictions within their specialty. They got worse as they accumulated experience and credentials in their field. The more information they had to work with, the more easily they could fit any story into their worldview.”

The foxes, however, weren’t nearly so stubborn.

So, foxes are better than hedgehogs? Not always. For example, if you had to have an operation to save your sight, who would you rather trust – an eye surgeon with 20 years’ experience, or a young GP with a broad interest in medicine? Or say you had to repair damage to the Mona Lisa: who would you call in – someone who’s spent her career meticulously restoring Renaissance paintings, or a brilliantly experimental art student?

The answer in both cases is the specialist not the generalist, the hedgehog not the fox. But that’s because in these examples expertise is continually tested by reality. Though the experts accumulate scholarly knowledge and develop theoretical frameworks, these would count for nothing if their record were one of “getting it consistently wrong”.

The experts we need to fear are those for whom getting it wrong has no professional consequences. Their predictions, if listened to by decision makers, may be highly consequential, just not for themselves. Being wrong, therefore, is no impediment to their continued influence.

Does this mean that we should only listen to the experts who aren’t immune to reality – those with ‘skin in the game’ as Nassim Taleb puts it? No, because the ability to think in abstractions is one of the things that makes us human. In some domains, it’s the only option we’ve got – for instance in making long-term preparations for a possible future that no one has yet experienced.

There is surely room for pure theory and for making predictions on that basis. Not every economist has to be an entrepreneur; not every scientist an inventor.

However, we do need to distinguish between the two kinds of expert: the practitioners and the theorists. While we can be reasonably confident about the hedgehogs in the former category, we must be sceptical about those in the latter:

“In Tetlock’s 20-year study, both the broad foxes and the narrow hedgehogs were quick to let a successful prediction reinforce their beliefs. But when an outcome took them by surprise, foxes were much more likely to adjust their ideas.”

In the end, there’s no beating the school of hard knocks; but where the abstract realm is the only way of approaching a problem, curiosity and a willingness to embrace contradiction are the closest substitute.


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.
