Nudging, now more than a decade old as an intervention tool, has become something of a poster child for the behavioral sciences. We know that people don’t always act in their own best interest—sometimes spectacularly so—and nudges have emerged as a noncoercive way to help people live better in a world shaped by our behavioral foibles.
But with nudging’s maturity, we’ve also begun to understand some of the ways that it falls short. Take, for instance, research by Linda Thunström and her colleagues. They found that “successful” nudges can actually harm subgroups of a population. In their research, spendthrifts (those who spend freely) spent less when nudged, bringing them closer to optimal spending. But when given the same nudge, tightwads (those who spend reluctantly) also spent less, taking them further from optimal spending.
While a nudge might appear effective because a population benefited on average, at the individual level the story could be different. Should nudging penalize people who differ from the average just because, on the whole, a policy would benefit the population? Though individual-versus-population trade-offs are part and parcel of policymaking, as our ability to personalize advances through technology and data, these trade-offs seem less and less appealing.
One way to solve this problem may be through personalized nudging, a design that respects individual differences and, in theory, improves both individual and net welfare.
The idea of personalization isn’t new. As early as 2011, Kirstin Appelt and colleagues argued that decision-making research needed to start respecting individual differences, an argument Cass Sunstein also made in a 2012 paper. Sunstein explained that simply knowing that people are different is not enough. Choice architects need to know who is different from whom, and how. This requires data—think social media data, personal financial data, personality data—that is relevant to the context in which a personalized nudge might be used.
What’s different now, nearly a decade on, is that developments in information technologies (as well as some novel developments in statistical and data analysis) mean personalized nudges are becoming increasingly feasible.
This development is fortuitous for another reason: there is less low-hanging fruit, which is to say fewer opportunities to intervene impersonally, while the desire to tackle tougher challenges remains. If behavioral scientists are going to move into more domains and help solve more complex problems, tools that allow for personalization are a welcome addition to the tool kit.
How can nudges be personalized? Choice and delivery
In a recent paper, I explored two ways nudges could be personalized: by personalizing the choices or by personalizing the delivery.
Choice personalization means that we can personalize the outcomes we nudge people toward. For example, say John and Susan both want to save for retirement, but a 6 percent contribution rate is too much for John and too little for Susan. We could personalize a retirement savings intervention by nudging John toward a lower contribution rate (say, 4 percent) and Susan toward a higher contribution rate (say, 8 percent).
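To make the idea concrete, here is a minimal sketch of choice personalization in code. The nudge itself (a default contribution rate) is unchanged; only the value each person is nudged toward varies. The disposable-income heuristic, the 3–10 percent bounds, and the function itself are illustrative assumptions, not drawn from any study discussed here.

```python
# Hypothetical choice personalization: tailor the *default* retirement
# contribution rate to each person's financial slack. All thresholds
# below are illustrative assumptions.

def personalized_default_rate(monthly_income: float,
                              monthly_expenses: float,
                              baseline_rate: float = 0.06) -> float:
    """Return a default contribution rate adjusted to disposable income."""
    disposable = max(monthly_income - monthly_expenses, 0)
    # Aim to contribute roughly a quarter of disposable income,
    # clamped between 3% and 10% of gross income.
    if monthly_income <= 0:
        return baseline_rate
    target = 0.25 * disposable / monthly_income
    return round(min(max(target, 0.03), 0.10), 2)

# John has little financial slack; Susan has a lot.
print(personalized_default_rate(4000, 3500))  # below the 6% baseline
print(personalized_default_rate(6000, 3000))  # above the 6% baseline
```

Rather than defaulting everyone to 6 percent, the same intervention nudges John toward a lower rate and Susan toward a higher one.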
In the world of education and entitlement uptake, Lindsay Page and colleagues have done some compelling work on choice personalization. Using an automated text-message service connected to a database of FAFSA applications, the researchers were able to personalize the reminder text sent to individual students. For a student who had yet to begin their application, the text message would prompt them to start; for a student who was in the middle of the application, the message would prompt them to finish; and for a student who was finished, the message would remind them to prepare for various follow-up requirements. The use of choice personalization here is intuitive, and according to the authors of the study, personalized text messages also contributed to better education outcomes, like higher university enrollment rates.
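The logic of the text-message service described above amounts to a simple status-based dispatch, which can be sketched as follows. The status labels and message wording are illustrative assumptions; the actual service built by Page and colleagues drew on a live database of FAFSA applications.

```python
# Sketch of status-based reminder personalization: the message a
# student receives depends on where they are in the application
# process. Statuses and wording are illustrative, not from the study.

def reminder_text(application_status: str) -> str:
    """Pick a reminder message based on a student's application status."""
    messages = {
        "not_started": "Your FAFSA is waiting! Start your application today.",
        "in_progress": "You're partway through your FAFSA. Finish it this week!",
        "submitted": "FAFSA submitted! Watch for follow-up requests like verification.",
    }
    # Fall back to a generic prompt for unrecognized statuses.
    return messages.get(application_status,
                        "Check your FAFSA status at fafsa.gov.")
```

A single intervention channel (text reminders) thus delivers a different nudge target to each student depending on their recorded progress.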
It’s worth noting that this study had something uncommon going for it—it was an instance where the data needed to create targeted text messages was relatively easy to acquire and use. In other instances, such as the retirement savings example above, the data required may be much more elusive.
Delivery personalization means that we can personalize which nudge we use, such as using a default or social norms, in the service of a similar goal. In other words, we are personalizing how someone is nudged. For example, if Sasha tends to procrastinate and Rory cares about the opinions of others, Sasha would likely be more receptive to a default nudge, whereas Rory would be more receptive to a nudge featuring social norms.
Eyal Pe’er and colleagues have done noteworthy work on delivery personalization in the world of cybersecurity. The authors used psychometric testing to identify individual participants’ decision-making styles (participants responded to statements like “I put off making decisions because thinking about them makes me uneasy” and “When I make decisions, I tend to rely on my intuition”).
Using that data, Pe’er matched decision-makers to the nudge they’d be most receptive to, in this case nudges designed to encourage stronger online passwords. The evidence from this study suggests delivery personalization does indeed produce stronger passwords than a nonpersonalized approach.
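Under simple assumptions, the delivery personalization described above can be sketched as a mapping from a person’s dominant decision-making trait to the nudge type they may be most receptive to. The trait labels, the trait-to-nudge mapping, and the selection rule below are all illustrative, not the study’s actual assignment procedure.

```python
# Illustrative delivery personalization: infer a dominant trait from
# psychometric scores, then deliver the nudge type assumed to match it.
# The mapping below is a hypothetical example, not Pe'er et al.'s rule.

NUDGE_FOR_TRAIT = {
    "procrastination": "default",         # e.g., pre-fill a strong password
    "social_sensitivity": "social_norm",  # e.g., "most users pick 12+ characters"
    "intuitive": "framing",               # e.g., emphasize risks of weak passwords
}

def select_nudge(trait_scores: dict[str, float]) -> str:
    """Return the nudge type matched to the highest-scoring trait."""
    dominant = max(trait_scores, key=trait_scores.get)
    # If no matched nudge exists, delivering no nudge is also an option.
    return NUDGE_FOR_TRAIT.get(dominant, "no_nudge")

# Sasha tends to procrastinate; Rory is attuned to others' opinions.
print(select_nudge({"procrastination": 0.9, "social_sensitivity": 0.2}))
print(select_nudge({"procrastination": 0.1, "social_sensitivity": 0.8}))
```

Note the fallback: when no nudge plausibly matches a person’s profile, declining to nudge remains a legitimate outcome of the selection step.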
Of course, there’s a data question here too: psychometric data is not easily available, and there are ethical questions regarding whether and how much psychometric data should be gathered. Furthermore, without pre-existing hypotheses about, say, personality and nudging, the door may be open to speculative data collection, which prompts further ethical questions, particularly those concerning privacy. These problems exist for choice personalization too, but while the data required for choice personalization can come from revealed preferences (e.g., what you chose in the past), delivery personalization seems less suitable in this regard.
There are two additions to this two-component framework that choice architects should also keep in mind. First, choice and delivery personalization can be used separately or in conjunction. Facebook, for instance, will show more photos to people who click on photos, and more videos to those who click on videos, all the while targeting people with advertisements based on their revealed preferences. It is quite easy to see how the former (delivery) and the latter (choice) can be combined.
Second, it’s important to acknowledge that sometimes our analysis might (and should) suggest that the best option is no nudge at all. Not (intentionally) nudging is always an option. When selecting from a range of nudges, choice architects who care about maximizing welfare should always keep this counterintuitive idea in mind.
The limits of personalization
Personalization can help choice architects respect individual differences. But when and where to personalize is a delicate question.
Some occasions seem rather intuitive—Kai Ruggeri and colleagues have argued health care is a well-suited policy area for personalized behavioral interventions, because everyone’s individual health is personal.
But work by Anastasia Kozyreva and colleagues finds mixed acceptance of personalization. They find personalization is generally viewed positively when it is used in the individual domain, such as when recommendation algorithms nudge customers toward products personally curated for them, but viewed less favorably when used in the public domain, such as when recommendation algorithms target individuals with personalized political advertisements. The immediate lesson here is that personalization should not be universally applied, and choice architects should work to understand the context in which it is being used.
There is a significant data challenge associated with personalized nudging. I am purposely vague about what “data” means in the context of personalization, because often choice architects cannot know what data is necessary to build a personalized nudge.
Additionally, as Cass Sunstein has argued, choice architects should only collect and use data that is (foreseeably) relevant. We can always find differences—color of eyes, favorite flavor of ice cream, and so on—and we should avoid endlessly stratifying samples under the pretense of personalization.
Nevertheless, choice architects are at the frontier of an evolution in nudging, one that will hopefully hone the power of nudging even further and one I hope we can embrace.