Why Governments Need to Nudge Themselves

This article is part of our special issue “Nudge Turns 10,” which explores the intersection of behavioral science and public policy.

For the past decade, most of the work applying behavioral science to government has focused on influencing citizens. Yet policymakers are affected by the same cognitive biases they seek to address in others. Does that mean their decisions are flawed, too?

It’s a question we get asked often at the Behavioural Insights Team, where we spent years working inside the U.K. government as the world’s first “nudge unit.” We’ve seen firsthand many examples of poor policy decisions—and no doubt made several ourselves. But academics and practitioners have focused mainly on understanding the biases of the public, not those of policymakers themselves. As a result, we’ve had to answer this question by falling back on classic studies that do not reflect the latest advances in behavioral science.

Recently, however, we have seen a wave of new experimental work that provides empirical evidence of biased decision-making by real politicians and officials, with potentially serious implications for the way governments are run. This research allowed us, in our new Behavioural Government report, to turn the lens of behavioral science back onto government itself. We found that several biases are particularly relevant to policymakers, including:

Confirmation bias. Politicians in Denmark who were given performance statistics about two hypothetical schools (one publicly funded, one privately funded) were much less likely to correctly identify which school was performing better when the answer clashed with their ideological preferences. The difference was huge: 92 percent chose correctly when the answer aligned with their beliefs, but only 56 percent when it did not. Perhaps more alarmingly, when politicians were given more information, they actually performed worse, relying even more heavily on their prior attitudes.

Framing. The presentation of policy ideas and choices greatly affects what governments end up doing. For example, politicians and officials were consistently more likely to choose a risky policy option when it was presented in terms of how many deaths it might cause rather than how many lives it might save. This result was found in experiments with 154 politicians across three national parliaments, 2,591 staff from the World Bank and U.K. Department for International Development, and 600 Italian public-sector employees.

Illusion of similarity. Policymakers struggle to differentiate their own experiences from those of the public they serve, often overestimating how much people will understand or embrace the policy in question. Their deep involvement in a policy may lead them to assume that people will pay attention to it, grasp what it is trying to achieve, and go along with it—none of which may be true. For example, a recent study showed that policymakers greatly overestimated how many parents would make even a small effort to sign their children up for a new educational intervention.

Overconfidence. A recent study of 597 U.S. climate change officials found that they tended to be overconfident in their knowledge and abilities, particularly when they had more years of experience. This overconfidence also made them more likely to make risky decisions—a problem if that risk-taking is based on false assumptions. Another study found that politicians who were overconfident about their chances of re-election were more likely to make a risky policy choice; there was no relationship between risk-taking and a more objective measure of their actual re-election chances.

Taken together, the evidence suggests that policymakers—those implementing nudges—are susceptible to biases themselves. But the question remains: Do these biases undermine the case for nudging? We argue that their existence means behavioral insights are needed more, not less. While public officials are just as vulnerable to biases as anyone else, they act within institutions that can be changed to mitigate or eliminate those biases.

For example, framing effects can be addressed through reframing strategies, which help actors change the presentation or substance of their position in order to find common ground and break policy deadlocks. Confirmation bias can be mitigated by adopting a “consider the opposite” approach, which involves asking, “Would you have made the same judgement if exactly the same study had produced results on the other side of the issue?” And overconfidence can be curbed by running “pre-mortems,” which ask decision makers to imagine that a project they are working on has already failed, and then to work backward to identify why things went wrong. This process encourages people to voice their doubts and surfaces weaknesses that can then be addressed. There is emerging evidence that pre-mortems can be successful in real-world settings, but they are still not widely used in policymaking. We think they should be.

Over the past decade, the use of behavioral insights by governments has led to many gains for citizens and consumers. It’s now time to apply these insights directly to government itself. By drawing on the best evidence-based solutions, we can improve the way policy is made in local, national, and regional governments worldwide. The upshot is that we now have a better answer to that opening question: yes, policymakers are themselves affected by cognitive biases—but there is something we can do about it.