A few years before Nudge, two seminal papers were published, each of which provided a powerful illustration of the importance of “choice architecture.” They showed how seemingly small changes in how choices are presented can have large effects on people’s decisions without changing the underlying choice itself.
When Eric Johnson and Daniel Goldstein compared consent rates for organ donation in different European countries, they found much higher rates in countries with a presumed consent (opt-out) default policy than in countries with an explicit consent (opt-in) default policy.
Then Richard Thaler and Shlomo Benartzi, experimenting with retirement plan design, found that reluctant employees could be nudged into saving simply by offering them the option to commit to a future increase in their savings rate.
Both cases illustrated how policymakers can nudge people by carefully considering how choices are presented. Policymakers in the public and private sectors took notice. Chile and Wales switched to a system of presumed consent for organ donation in 2010 and 2015, respectively; more recently, the Netherlands passed a similar organ donation law that will go into effect in 2020. On the retirement-saving front, the majority of defined contribution 401(k) plans currently offered to U.S. employees have adopted at least part of the “Save More Tomorrow” strategy.
Now, 10 years after Nudge was published, it’s clear that many nudges turned out to be highly cost-effective compared to more traditional policies. It’s also clear that things do not always go as planned. Sometimes nudges fail, bring about unintended side effects, or even backfire.
Take the process of introducing a new organ donation default policy in the Netherlands. In the month after the nation’s House of Representatives narrowly passed a bill intended to change the consent procedure, the number of residents who registered as nondonors spiked to roughly 40 times the number observed in previous months. Even more remarkable, this dramatic jump in active rejections also occurred among residents who had previously consented to donate.
Another example of a nudge that did not work out as expected: when employees at four major U.S. universities were offered the chance to precommit to future savings, their savings went down in the nine months that followed.
To understand why changes in choice architecture may have unintended effects, it is crucial to realize that the people who are making the decisions—potential organ donors and university employees, for instance—are not always naive and passive targets. It is true that sometimes people may simply choose a default option without giving it a second thought, but that’s not always the case—sometimes they will try to make sense of how a choice is presented, and their interpretation can profoundly influence their behavior. As we enter the second post-Nudge decade, policymakers should consider and evaluate how their nudges are being interpreted to ensure they have the intended effects.
When Targets Interpret Nudgers’ Intentions
People are often unsure about which option to choose. Whether to consent to organ donation, how much to save for retirement: these are difficult and important decisions, clouded by uncertainty. But that does not mean people will always follow the easiest path.
Instead, people may look for cues in the choice architecture that can help them come to a decision. They may try to make sense of who is presenting the choice to them and of why this choice architect is presenting options in a particular way. Finally, people may consider what their own response will signal to the choice architect and to other people.
Researchers have long recognized that defaults can be particularly powerful when they are interpreted as implicit endorsements or recommendations. Thaler and Sunstein noted in Nudge that “in many contexts defaults have some extra nudging power because consumers may feel, rightly or wrongly, that default options come with an implicit endorsement from the default setter, be it the employer, government, or TV scheduler.” But this kind of sensitivity to social cues implicit in choice architecture can also bring about unwanted or unexpected responses to nudging attempts.
In the case of the Dutch organ-donation law, residents may have recognized that lawmakers were attempting to increase consent rates. The proposed policy change may have been construed as an attempt at coercion—as a threat to the freedom of choice that people value so highly—which provoked many to rebuke that attempt by opting out as a way to signal their displeasure.
As for the dip in retirement savings observed after universities introduced a precommitment option, a small detail in the plan’s design appears to have been the culprit. The original Save More Tomorrow plan presented the precommitment option only to people who had previously failed to enroll in a 401(k) plan. In contrast, the more recent implementation gave employees a direct choice between starting to save today and starting to save later. Employees may have inferred that their employer did not consider contributing to the plan urgent. Why else would it offer the option to delay?
What these examples have in common is that the decision makers were not passive targets; they were active sensemakers—in other words, people were attempting to glean information from subtle cues in choice architecture—information about the goals and beliefs of the policymaker and information about what would be a fitting response. They were actively looking for signals to help them figure out what was going on and how they should act.
An Updated Choice Architecture Framework
In a recent article in Behavioral Science & Policy, David Tannenbaum, Craig Fox, and I argued that it is time for an updated framework of choice architecture (choice architecture 2.0), one that incorporates an explicit analysis of the implicit social interactions between decision makers and policymakers. Choice architecture 2.0 recognizes that people sometimes, though not always, act as social sensemakers when confronted with a decision and that this factor can be critical to the success or failure of nudges and other behavioral policy interventions.
Once you know where to look, it’s easy to find more examples in which people seem to act as social sensemakers in response to changes in choice architecture. Credit card customers lowered their average monthly payments after minimum-repayment information was introduced to their credit card statements, possibly because they interpreted the minimum repayment number as a suggested amount. Physicians prescribed less aggressive pain medication when the menu from which they selected had the aggressive options lumped into a single category, possibly because patients inferred that options listed separately were more popular among peer doctors. Shoppers in stores that introduced a small surcharge on the use of plastic bags were more likely to bring their own reusable bags from home, in part because the surcharge implicitly communicated social norms about waste reduction.
Sometimes nudges work because people are concerned about what their behavior signals to the policymaker, or to others around them. Think of airline captains who started flying more fuel efficiently, or of doctors who reduced their rate of inappropriate antibiotics prescription. In both cases, the improvement in behavior occurred after the decision makers—the airline captains and the doctors—had simply learned that they were part of a research study. Instead of interpreting these so-called Hawthorne effects as a nuisance in empirical research, we should view them as potent new tools in the nudging toolbox.
At the same time, nudges sometimes fail or backfire for similar reasons: people care about what their behavior signals. The negative response of many Dutch residents to the proposed change in the organ-donation consent procedure is an example of this. People seemed to use opting out as a signal of protest against the new law. In fact, many Dutch residents publicly shared their decision to opt out through social media.
Introducing a Social-Sensemaking Audit
These and other cases indicate the need for a more systematic understanding of social sensemaking in choice architecture. As a first step toward that goal, policymakers and researchers should routinely engage in a social-sensemaking audit, which would identify potential triggers of social sensemaking to help design more effective policy.
In other words, auditors would ask: When do decision makers take on the role of naive, passive targets, and when do they act as sensemakers who look for social cues in the choice environment? Although empirical research on this question is scarce, it seems reasonable to think that people are more likely to engage in sensemaking when they are uncertain about their preferences, when they distrust the person or institution they hold responsible for the design of a choice environment, or when they notice a change in that environment.
A social-sensemaking audit also aims to anticipate the type of inferences that decision makers are most likely to make about the beliefs and intentions of the choice architect. If sensemaking occurs, will it lead to greater compliance with the intended goal, as with the social norms implicitly communicated by the plastic bag surcharge, or is there potential for backfiring, as with the precommitment option for retirement saving offered to university employees?
Incorporating these considerations into our understanding of choice architecture can prevent the implementation of unsuccessful nudges as well as promote the implementation of more effective nudges, including ones that have been overlooked in the past.