Let’s Talk Less About ‘Irrationality’

This article is part of a series based on “A Manifesto for Applying Behavioral Science” from the Behavioural Insights Team. In each article, Michael Hallsworth draws from the manifesto’s agenda for the future of behavioral science and offers a new angle on current thinking. This week, he argues that using the term “irrationality” may encourage overconfidence that prevents behavioral scientists from looking more deeply into what’s driving behavior. View the series here.

We need to recognize the limits to our knowledge as behavioral scientists. Several factors can push us toward being overconfident in our judgments and in predicting the likely consequences of our actions.

We’re not alone. Overconfidence affects experts in a wide range of fields. Elsewhere I have talked about how and why policymakers can be overconfident in their predictions about people’s behavior. But there have been criticisms specific to behavioral science that we need to consider.

Behavioral science may be perceived as providing a technical justification for seeing decisions as flawed, and thus in need of corrective action. A good example is the behavioral economic concept of “internalities,” which involves a struggle between a person’s present self (who may want a cake) and their future self (who wants to be healthy). In practice, policymakers often choose to prioritize the future self, which tends to lose out otherwise. These behavioral concepts, and the language of bias and irrationality, can end up increasing choice architects’ confidence about what people “really” or “should” want, and why they are acting as they are.

In reality, it may be that people are acting on the basis of reasons that a choice architect has not perceived or anticipated. People may have specific contextual knowledge that cake is unlikely to be available in the near future, for example. Indeed, new evidence from 169 countries shows that, as economic environments worsen, “there is a stronger and more consistent tendency to discount future values.” A recent study showed that apparently irrational loss–gain framing effects may actually reflect an awareness that others may punish you for ignoring such effects.

So the ready technical explanation offered by behavioral science could provide confidence that obscures the need to search more deeply for less obvious explanations. Behavioral scientists may then rely on decontextualized principles that do not match the real-world setting for a behavior. Such “contextual brittleness” is likely to lead to poor outcomes.


These criticisms may go too far. There are many instances where behavioral scientists have avoided the issues just outlined. For example, BIT has always emphasized the importance of understanding behaviors in context and in depth, going beyond ostensible explanations. Here are some examples of what we found.

Energy switching: Many households do not switch energy suppliers, despite the potential for large savings; this inertia may seem particularly “irrational” for lower-income households. But during our work on this topic, we came across stories that made the inaction seem more reasonable. Over time, lower-income households (in particular) may have learned the collections process of their current company. They know that a “final warning” is not final, and that they still have two more weeks before they lose power. Effectively, they can then use the power company as a line of credit to prioritize other payments. But they lose this valuable knowledge if they switch providers. Apparent inertia may actually be a considered strategy.

Health-care choices: Our previous work on reducing patient waiting times in the U.K. had identified that primary care doctors were generally sending patients to the same set of secondary care providers. The issue was that these referrals happened even when those providers had no capacity. Patients would experience long waits, even though there were nearby alternatives with good availability. In many cases, these choices were as suboptimal as they seemed. But deeper exploration showed that sometimes other factors were at play, such as patient transport. Larger, better-known hospitals tended to attract more public transport routes than the alternative providers. So although these alternatives were all within a few miles of the patient, in some cases they might be less accessible.

Road use: Going into our work to improve the health and safety of food delivery agents in Australia’s gig economy, we knew that many workers were frequently breaking road rules. In particular, they rode bicycles on footpaths and other pedestrian-only areas. This was happening despite strong disincentives: there was a significant risk of incurring fines that could wipe out a whole day’s earnings (and social media posts from workers suggested that they were well aware of this risk). However, observing the behaviors in context revealed that the footpaths often ran next to narrow, busy roads with heavy truck and bus traffic. So the agents were trading off the risk of a fine against the risk of injury or death on the roads—a different calculation from the one we had perceived initially.

How can we make this kind of deeper inquiry more likely? Here are three proposals: avoid the term “irrationality,” embrace epistemic humility, and design processes and institutions to counteract overconfidence.

Avoid the term “irrationality”

Let’s put aside the long-running and complex debates about how to define rationality and irrationality. My main concern is with the consequences of how “irrationality” is used in practice (I’m not saying that people should not discuss the concept). The act of diagnosing irrationality in others seems to imply that you yourself are rational, or at least have the ability to detect rationality. At the same time, the act can delegitimize the views of the “irrational” party: their disagreement is not valid because they are not playing by the rules of reason, unlike you. This can lead to a failure to understand the reasonableness of people’s actions. Of course, we need to avoid setting up straw-man representations of how behavioral scientists think, as some critics do. But I think the use of the label “irrational” may increase overconfidence and impede deeper inquiry. Dropping it may be a necessary, but not sufficient, step toward solving these problems for practitioners.


Embrace epistemic humility

Epistemic humility is based on “the realization that our knowledge is always provisional and incomplete—and that it might require revision in light of new evidence.” For behavioral scientists, this might involve recognizing which initial inquiries are essential, rather than simply reaching for a familiar theory, concept, or intervention and applying it to the situation at hand. It might be about pausing to reflect on how far existing knowledge can be transferred between contexts and domains. It might be about recognizing the possibility of backfires and spillovers, and about recognizing how goals and preferences (including our own) may be complex and ambiguous. The COVID-19 pandemic has shown just how difficult it can be to predict behavior, and it could act as a spur for recognizing why greater epistemic humility is needed.

Design processes and institutions to counteract overconfidence

While behavioral scientists should be familiar with the concept of overconfidence and its causes, applying such ideas to our own actions is much harder. Instead, we should look at how to design and redesign the contexts in which behavioral scientists are making decisions in order to promote greater humility. In its Behavioural Government report, BIT has already shown how policymakers are often affected by issues such as optimism bias and illusions of control. We then set out a series of institutional changes to reduce the ensuing overconfidence, including premortems, reference class forecasting, and break points. What are the equivalent changes to processes that might reduce overconfidence among those applying behavioral science?


So, let’s be more aware of how focusing on “irrationality” risks shutting down the kind of inquiry that reveals what’s really going on—and how things could actually be improved. Let’s continue exploring the idea of irrationality in the academic sphere but prevent it from stopping exploration in the realm of practice.


Disclosure: Michael Hallsworth is a member of BIT, which provides financial support to Behavioral Scientist as an organizational partner. Organizational partners do not play a role in the editorial decisions of the magazine.