In hindsight, it is easy to see why the field of behavioral science has become popular. In addition to Nudge, popular books like Thinking, Fast and Slow, Predictably Irrational, and Misbehaving offered the promise of seemingly simple, low-cost, and scalable interventions that organizations could adopt to create large changes in stakeholder behavior. And the claim that all organizations—governments, businesses, not-for-profits, policy units, startups, and even universities—are fundamentally in the business of behavior change rang true for practitioners.
If the fictitious cafeteria in Nudge could encourage people to eat healthy foods simply by changing the manner in which food was displayed, or a magazine could encourage consumers to purchase pricier subscriptions by using a decoy product, many felt they could use similar tactics to increase demand for their own products.
If defaults, social norms, the fresh-start effect, and reminders are indeed general phenomena that can change behavior, it is tempting to believe that they generalize from one particular context at one point in time to other contexts at other points in time. And it is also true that we have seen a large number of successes in using behavioral science to solve social and business problems across a multitude of contexts.
But it has now been over 14 years since the publication of Nudge and more than 10 years since the first behavioral unit in government started functioning. While we have made a lot of progress as a field, we believe that the applied science is at a critical juncture. Our efforts at this stage will determine whether the field matures in a systematic and stable manner, or grows wildly and erratically. Unless we take stock of the science, the practice, and the mechanisms that we can put into place to align the two, we run the risk that the promise of behavioral science will prove an illusion for many: not because the science itself was faulty, but because we did not successfully develop a science for using the science.
We offer six prescriptions for how the field of applied behavioral science can better align itself so that it grows systematically rather than wildly.
1. Offer a balanced and nuanced view of the promise of behavioral science
We believe that it is incumbent on leaders in both the academic and applied space to offer a balanced view of the promise of behavioral science. While we understand that the nature of the book publication process or of public lectures tends to skew narratives toward highlighting successes, we also believe that it is perhaps more of a contribution for the field to highlight limitations and nuances. Rather than narratives along the lines of “A causes B,” it would be helpful for our leaders to highlight narratives such as “A causes B in some conditions and C in others.” Dissemination of this new narrative could take the form of traditional knowledge mobilization tools, such as books, popular press articles, interviews, podcasts, and essays. Our recent coedited book, Behavioral Science in the Wild, is one attempt at this.
2. Publish null and nonsurprising results
Academic incentives usually create a body of work that (a) is replete with positive results, (b) overrepresents surprising results, (c) is rarely replicated, and (d) focuses on theory and phenomena rather than on practical problems. As has been discussed elsewhere, this occurs because the academic incentive structure favors surprising and positive results. We call on our field to change this culture by creating platforms that allow and encourage authors to publish null results as well as unsurprising ones.
We would also encourage academia to create opportunities and reward authors for publishing empirical generalizations, meta-analyses, and nuanced literature reviews that can push the science of using behavioral science further. Finally, we encourage academics to conduct large-scale mega-experiments with organizational partners. In this approach (pioneered by the Behavior Change for Good Initiative), multiple behavioral science interventions are tested simultaneously as part of a single large field experiment. This allows research teams to identify which strategies work best, under what conditions, and for whom.
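The logic of a mega-experiment can be made concrete with a small simulation. The sketch below is purely illustrative and is not drawn from the Behavior Change for Good Initiative’s actual designs: the arm names and response rates are hypothetical assumptions, chosen only to show how several interventions can be randomized against one shared control group and compared on a common outcome.

```python
import random
from collections import defaultdict

random.seed(42)

# Hypothetical arms of a mega-experiment: several interventions
# tested simultaneously against a single shared control group.
ARMS = ["control", "reminder", "planning_prompt", "social_norm", "fresh_start"]

# Assumed true response rates, for illustration only.
TRUE_RATES = {"control": 0.10, "reminder": 0.13, "planning_prompt": 0.15,
              "social_norm": 0.11, "fresh_start": 0.12}

def run_mega_experiment(n_participants=50_000):
    """Randomly assign each participant to one arm and record outcomes."""
    outcomes = defaultdict(lambda: [0, 0])  # arm -> [successes, trials]
    for _ in range(n_participants):
        arm = random.choice(ARMS)                 # uniform random assignment
        success = random.random() < TRUE_RATES[arm]
        outcomes[arm][0] += int(success)
        outcomes[arm][1] += 1
    return outcomes

results = run_mega_experiment()
control_rate = results["control"][0] / results["control"][1]
for arm in ARMS:
    successes, trials = results[arm]
    rate = successes / trials
    print(f"{arm:16s} rate={rate:.3f} lift vs control={rate - control_rate:+.3f}")
```

Because every arm shares the same participant pool, time period, and outcome measure, the estimated lifts are directly comparable across interventions, which is precisely the advantage the mega-experiment approach offers over many separate one-off studies.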
3. Prepare the organization
We call on practitioners to develop their own framework for the skills, resources, and structures that need to be put into place to create a behaviorally informed organization. A recent book coedited by Dilip, The Behaviorally Informed Organization, presents a series of essays and frameworks that practitioners could use to accomplish this. The essence is to create an organization in which the cost of experimentation is low and whose structure can adapt quickly to new evidence. Failure to do so might mean that even the best use of the existing evidence will not result in success, because the organization cannot learn and adapt.
4. Publish contextual details
Academic papers, unfortunately, do not provide enough detail about how experiments were implemented or the context in which the data were collected for practitioners to assess relevance to their own settings. We call on the field to require researchers to more systematically outline key experimental design features, contextual details, and what they believe (or, better, know) to be the necessary conditions for their interventions to work. The more standardized this reporting, the easier it will be to compare effects across studies.
5. Use in-situ evidence and avoid borrowing interventions
Context changes the results of experiments. Because the context of a published experiment is highly likely to differ from the context in which a practitioner is looking to change behavior, results will likely differ as well. Rather than borrowing interventions off the shelf from a metaphorical “nudge store,” practitioners should iterate, test, and adapt an intervention that was published elsewhere: a tailored approach. As with apparel, tailoring takes more time, effort, and money, but results in a better product!
It is critically important for practitioners to (a) use the published intervention as a starting point rather than as the final solution, and (b) create a culture of, and the capacity for, rapid testing in the context in which the intervention will be deployed. This can only be accomplished if the costs of experimentation are low.
We also encourage practitioners not simply to read the results of a single study or a single set of interventions, but also to consult meta-analyses and structured literature reviews. Both can provide additional insight into when an intervention might work and when it might not, and perhaps also describe the different ways in which the same intervention was designed and delivered across contexts.
6. Embrace heterogeneity
Solutions need not be homogeneous to scale successfully. The prevailing mental model holds that scaling succeeds only with a one-size-fits-all intervention. But given the diversity in how people think, act, feel, and consume information, as well as the growing diversity of contexts and media in which choices are made, the likelihood that a one-size-fits-all intervention solves the problem is increasingly low.
However, advances in data sciences and machine learning now provide us with better tools to identify heterogeneity. As a result, we believe that successful scaling might include a mosaic approach in which the researcher might have different variants of an intervention, and the practitioner deploys these different variants to different subsegments in the population as a function of learning about heterogeneous responses.
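The mosaic approach can be sketched in a few lines of code. Everything below is a hypothetical illustration, not a real deployment: the segment names, variant names, and response rates are invented assumptions. The sketch simply shows the core idea of piloting every variant in every subsegment, then deploying to each segment whichever variant performed best for that segment.

```python
import random

random.seed(7)

# Illustrative subsegments and intervention variants (hypothetical names).
SEGMENTS = ["younger_urban", "older_urban", "younger_rural", "older_rural"]
VARIANTS = ["email_reminder", "sms_reminder", "default_change"]

# Assumed true response rate for each (segment, variant) pair -- pure illustration.
TRUE_RATES = {(s, v): random.uniform(0.05, 0.20) for s in SEGMENTS for v in VARIANTS}

def pilot(n_per_cell=2_000):
    """Small pilot: test every variant in every segment, record response rates."""
    observed = {}
    for cell, p in TRUE_RATES.items():
        successes = sum(random.random() < p for _ in range(n_per_cell))
        observed[cell] = successes / n_per_cell
    return observed

def mosaic_policy(observed):
    """Deploy, to each segment, the variant that performed best in the pilot."""
    return {s: max(VARIANTS, key=lambda v: observed[(s, v)]) for s in SEGMENTS}

policy = mosaic_policy(pilot())
for segment, variant in policy.items():
    print(f"{segment:14s} -> {variant}")
```

In practice, the per-segment response estimates would come from models of heterogeneous treatment effects rather than simple cell means, but the deployment logic, namely matching variants to subsegments instead of choosing a single winner, is the same.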
Beyond developing these prescriptions, we also called on leading behavioral scientists from academia, business, and government to provide a nuanced perspective on their area of expertise. We compiled their insights in our recent book, Behavioral Science in the Wild.
The book covers some of the most prominent interventions, such as reminders and the fresh-start effect, and outlines when they are most likely to work and when they are not. One section summarizes current knowledge of what works well in specific domains, ranging from financial decision making to sustainability, diversity and inclusion, health and well-being, and misinformation. Other chapters focus on the particular challenges of implementing behavioral science among low-income populations and in the Global South, and on what to look out for when translating the science from a nondigital to a digital world.
We, alongside the other authors, believe in the ability of behavioral science to tackle tough social and business problems. That said, our point is simple. As we increasingly transplant ideas from the laboratory or controlled pilot settings into the wild, it becomes increasingly important for the field to pay attention to the nuances of the science of how to use behavioral science. Failure to do so might result in wild and erratic, rather than systematic, growth and impact.
Adapted from Behavioral Science in the Wild coedited by Nina Mažar and Dilip Soman. Published by University of Toronto Press. Copyright © 2022 University of Toronto Press. All rights reserved.
Disclosure: Dilip Soman leads the Behavioral Economics in Action at Rotman (BEAR) Research Center, which was a 2021 Supporting Partner of Behavioral Scientist.