In 2015, during a public debate on behavioural science in Lucerne, I was accused of supporting tactics befitting an unsavoury authoritarian regime. At the time, knowing how well-intentioned my colleagues were, I thought this was, quite frankly, nuts.
I remain a supporter of the use of behavioural science in public policy, and of the Behavioural Insights Team, more commonly known as the Nudge Unit. However, witnessing how the UK and other governments have responded to the pandemic, I can now appreciate the vulnerabilities of well-intentioned, democratic regimes, and the potential for behavioural science to be used inappropriately.
I was a co-founder and leading figure within the Nudge Unit. Since its inception, in 2010, the unit has been a success story for the Government. When I joined we were a team of seven within what was called the Prime Minister’s Strategy Unit. In 2014, we were able to “spin out” of government. We became an independent, profit-making social purpose company, a third owned by the Cabinet Office. We could sell our services to the whole of the UK public sector and any other government or organisation seeking to improve people’s lives.
A big part of my role was to introduce behavioural science thinking to public policy challenges in other countries. In doing so, we won new contracts and opened new markets, but also helped to spread the application of behavioural science. In 2010, the Nudge Unit was the first and only government unit dedicated to behavioural science in public policy. By 2021, there were over 400 globally. Post spin-out, we grew organically to 250+ people, with offices in nine major cities across the globe, working on projects in more than 35 countries. As civil servants, we were providing policy advice to the Cabinet Office; as a company we also provided dividends, international influence and, since the company was sold in December 2021, a healthy capital gain.
In the early days, we felt as if we had discovered a secret sauce and our mission was to share it as widely as possible. We had a time-limited licence to roam around government, seeking out areas of policy to improve. A two-year sunset clause stipulated that we needed to deliver a 10-fold return on the cost of the team — one of the reasons for our early focus on tax. In a now famous example, we used “social norms” to boost tax compliance by telling debtors the true proportion of other people who had already paid their taxes.
It felt very different from other parts of the civil service. We cared a lot more about academic research, and seeing the world through the eyes of the citizen. We would bring over famous academics such as Daniel Kahneman, Richard Thaler and Robert Cialdini for seminars in No. 10. We had strong sponsorship from the PM and reported through a high-level steering board, chaired by the Cabinet Secretary, Gus O’Donnell. We were passionate and hard-working but also informal, personable and fun. We sought to free useful information from the over-intellectualising of academia: our policy papers drew on complex concepts and research but described findings in simple and engaging language.
Most strikingly, in the broader context of recession and austerity, we were a can-do, technically competent team — the opposite of the stereotypically politically astute but risk-averse official. Like a lean start-up, we had high motivation but few resources. If there was no budget to make an app, we would learn how to do it and build it ourselves. We would never say “we can’t do it” but “how else can we do it?”. Expertise brought into the civil service from outside often fails because it can’t navigate the bureaucracy. But we were home-grown: we knew how to work with the system.
There has, though, always been media scepticism about us. When science journalist Ben Goldacre called for us to be sacked, I remember David Halpern, my boss, telling us that the best compliment comes from a former critic. So we invited Dr Goldacre in to see that we were, in fact, doing exactly what empiricists like him advocate — systematically running large-scale randomised controlled trials (RCTs). Dr Goldacre apologised for his article and co-published a paper with us on policy evaluation, which at one point was the most downloaded Cabinet Office paper after the Coalition Agreement.
The more successful our results, the more the media supported us. In one piece of work, my colleagues Prof Elizabeth Linos, Joanne Reinhard and I managed to eliminate the substantial and historic underperformance of ethnic minority applicants to a police force with a cost-neutral and non-discriminatory intervention: adding a line to an email that prompted candidates to reflect on their values and what success would mean to them. The impact of this incredibly simple idea astounded me, but the data was checked, rechecked and published in a peer-reviewed journal, and the result still stands.
There are many examples of our successes – some of which are celebrated on the Nudge Unit’s 10th anniversary website. But to my mind, it is the legacy of our approach which will deliver the greatest impact. We advocated two new dimensions to policy making: behaviour-focused models describing what drives human decision making; and the priority of empirical research over all other sources of information. I believe this contribution has served governments well — and can continue to do so. But it must be used appropriately. For me, that means seeing the bigger picture: recognising what you can and can’t measure, and seeing the potential for unintended consequences. For example, testing ways to make it easier for parents to engage with their children’s homework, and measuring improved education and parental engagement as a result, is a great idea. But invoking different emotions to convince people to stay at home during the pandemic is less appropriate. It could have negative consequences that are missed in the typical RCT evaluation. This is because metrics will focus on proxies for behaviour, but they probably can’t capture the potential longer-term effects of these campaigns beyond what is immediately measurable — such as worse inter-societal relations and reduced trust in institutions, the consequences of which could be significant.
Because behavioural science can be so broadly applied, it is equally broadly defined. Of these two examples, the first sticks closely to Richard Thaler’s mantra – making it easier to overcome a clearly defined barrier to achieve an uncontroversial gain; the second, however, feels more propagandistic. To my mind, the most egregious and far-reaching mistake made in responding to the pandemic has been the level of fear willingly instilled in the public. Initially encouraged to boost public compliance, that fear seems to have subsequently driven policy decisions in a worrying feedback loop. Though I don’t think it’s fair to blame behavioural scientists for propagating fear (I suspect that this was more to do with Government communicators and the incentives of news broadcasters), it may be worth reflecting on where we need to draw the line between the choice-maximising nudges of libertarian paternalism, and the creeping acceptance among policy makers that the state should use its heft to influence our lives without the accountability of legislative and parliamentary scrutiny. Nudging made subtle state influence palatable, but mixed with a state of emergency, have we inadvertently sanctioned state propaganda?
The second principle, empiricism, can also be misused. It is an admirable intention (I have often argued for public services to draw on empirical methods to assess impact and improve outcomes), but not when pursued at the expense of other fundamental principles of good policy-making designed to help us see the bigger picture. Placing all value on data risks de-prioritising reflection, reason and debate — and obscuring the limitations of that data as a depiction of reality. The more we measure something, like Covid infections, the more prominent it becomes and so the more it matters. But what’s most feasible to measure now is not necessarily what’s most important overall. Behavioural science was conceived as a means of recognising and correcting the biases that lead humans to make non-rational decisions. But it’s not obvious to me that the trade-offs many governments are making in their responses to the pandemic are grounded in utilitarian rationality.
Perhaps the most important attribute of empiricism is objectivity. So it has been surprising that the most ardent proponents of empiricism in policy have not been inoculated against identity politics and their own echo-chambers. In fact, it is the proponents of evidence and empiricism, our best and most educated elites, who are now often the least willing to hear information that challenges their worldview or runs contrary to their identity. Many have written about this trend on university campuses. Even at the Nudge Unit, an organisation designed to make others aware of biases, where Jonathan Haidt has lectured staff about his work unpicking the roots of cancel culture, an academic speaker was cancelled for an innocuous, historic tweet.
So where does that leave nudging?
Behavioural science in policy can help us improve people’s lives, but we shouldn’t become complacent – as I was in 2015 in Lucerne – about the potential for unintended harm. We must always look beyond the immediate policy objective and be alive to the critical roles that institutional trust, accountability and legitimacy play in successful societies. Those who commission behavioural experts to provide policy advice must themselves understand how best to use it. The former Cabinet Secretary, Sir Jeremy Heywood, knew how to use the Nudge Unit effectively: to challenge departments to think more expansively, or to assess existing programmes with greater scientific rigour. But he also knew when to temper this thinking — when nudging was less appropriate.
As we’ve learned over the past two years, focusing on “the science” is blinkered, which is why we need multidisciplinary teams, a strong culture of intellectual humility and designed-in cognitive diversity to tackle problems, especially in times of uncertainty. And this commitment to cognitive diversity needs to be deep and effective: at the Nudge Unit, we created a product to de-bias recruitment so that we would find successful candidates from a diverse range of backgrounds. Despite this, fewer than 1% of staff supported Brexit.
Finally, if we accept Kahneman’s doctrine that we are all susceptible to automatic thinking, then rather than accommodating it by bending the choice architecture to lead us down the desired path, perhaps there are occasions where we can try to jolt people out of their automatic stance into active consideration. I have often argued – as Richard Thaler did here on UnHerd – that there is no neutral way to present a choice. But this doesn’t mean that, in certain circumstances, we shouldn’t try.