As someone who became an economist via a brief career as a lawyer, I did notice that my kind had privileged access to the halls of government and business. Whether this was because economics can speak the language of dollars, or because we simply claimed to have all the answers, the economists were often the first consulted (though not necessarily listened to) on how we priced, regulated, and designed our policies, services, and products.
What I lacked, however, was a privileged understanding of behavior. So about a decade ago, with the shortcomings of economics as an academic discipline top of mind, I commenced a Ph.D. to develop that understanding. It was fortuitous timing. Decades of research by psychologists and “misbehaving” economists were creating a new wave of ideas that would wash out of academia and into the public and corporate spheres. Famously encapsulated in Richard Thaler and Cass Sunstein’s Nudge, there was now a recognized need to design our world for humans, not “econs.”
Following Nudge, a second wave found many organizations creating their own “nudge units.” The Behavioural Insights Team (BIT) within 10 Downing Street was the precursor to government behavioral teams around the world. Although the first dedicated corporate behavioral units predated the BIT, a similar, albeit less visible, pattern of growth can be seen in the private sphere. These teams are now tackling problems in areas as broad as tax policy, retail sales, app design, and social and environmental policy.
On net, these teams have been a positive force and have produced some excellent outcomes. But my experience working in and alongside nudge units has me asking: Has the pendulum swung too far? My education and experience have convinced me that economics and the study of human behavior are complements rather than substitutes. But I worry that in many government departments and businesses, behavioral teams have replaced rather than complemented economics teams. Policymakers and corporate executives, their minds rushing to highly available examples of “nudge team” successes, often turn first to behavioral units when they have a problem.
A world in which we take advice only from economists risks missing the richness of human behavior, designing for people who don’t exist. But a world in which policymakers and corporate executives turn first to behavioral units has not been without costs. A major source of these costs comes from how we have been building behavioral teams.
We have been building narrow teams. We have been building teams with only a subset of the skills required to solve the problems at hand. When you form a team with a single skillset, there is the risk that everything will start to look like a nail.
It’s now time for a third wave. We need to build multidisciplinary behavioral units. Otherwise we risk results like those reflected in the observations below. Some of these observations relate to my own experiences and errors; some come from others. To protect identities, confidential projects, and egos (including my own), I have tweaked the stories. However, the lessons remain the same.
The Missing Pieces
No economists (of any stripe): While the rise of the nudge unit was often framed as a way to overcome the hegemony of the economists, economists have their uses. One is thinking about how individual behavior might play out in a market. Despite the “irrational” human agents in many markets, markets often work wonderfully well.
One behavioral team lacking an economist was asked whether customers in their firm’s market would be harmed if this firm or its competitors started to charge different prices to different customer types—a practice known as price discrimination. The boilerplate advice they gave was that people are irrational and lack attention, and that the firms in this market would exploit their customers.
The team might have been right about the ultimate outcome for this particular market. But in their advice, the question of what actually happens in markets where firms price discriminate never entered the picture. There are plenty of markets where price discrimination has been good for consumers. In movie theatres, progressive pricing typically gives access to children, students, and seniors at substantially discounted prices. Was there potential for such a positive outcome here?
Economists are also known for looking at the incentives. Even though the behavioral literature has identified plenty of famous “backfires” from poorly designed incentives, demand curves do generally slope down. Many times, I have seen organizations attempting to design behavioral solutions to prevent their staff from doing what they are paying them to do. If you pay sales bonuses with thresholds, it is difficult to come up with any behavioral intervention that will prevent inappropriate sales on the last day of the bonus period. Yet the incentive design was typically not within the purview of the behavioral team.
No psychologists: Many nudge units are branded as behavioral economics teams, despite their work being better described as social psychology. Most of their time is spent designing communications, such as websites, text messages, and letters. Their work is economics-free.
While the branding of their work as behavioral economics might be more economics imperialism (or simply clever branding), often these teams are staffed only by economists. Having economists running around doing social psychology can backfire. They can lack appreciation for the foundations of the interventions they have developed, such as their context dependence and the current state of the science.
I met one such team during a presentation in which they were illustrating the power of primes through the Florida effect, five years after the debate over replication of that study first exploded. That same slide deck contained material on the glucose model of self-control and the jam study, oblivious to research and debates subsequent to the original classic papers. The value of having someone connected to the field was clear.
Teams lack more than just social psychologists. Cognitive psychologists can, and often do, play a vital part. Personality psychologists can bring great insights into human heterogeneity. Organizational psychologists are important given how often the behavior of interest relates to work. Comparative and evolutionary psychology can even be useful; we are, after all, evolved animals.
No experience in experimental design: The advocacy of an experimental approach to policy making is possibly the most important contribution of the nudge units, even more important than the behavioral insights themselves.
Running a randomized controlled trial can appear conceptually easy. The statistics required to determine whether there were different outcomes in the treatment group relative to the control can be basic. Online tools can help you work through issues such as setting the right group size.
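The group-size question those tools answer can be sketched in a few lines. Below is a minimal, standalone example (the numbers are hypothetical, and the function name is my own) using the standard normal-approximation formula for comparing two proportions: it asks how many subjects per group you need to reliably detect a given lift in, say, a response rate.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1: float, p2: float,
                                alpha: float = 0.05,
                                power: float = 0.80) -> int:
    """Per-group sample size to detect a difference between two
    proportions with a two-sided z-test (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical trial: baseline response rate of 10%, and we hope the
# redesigned letter lifts it to 12%.
print(sample_size_two_proportions(0.10, 0.12))  # → 3839 per group
```

Note how quickly the required sample grows as the expected lift shrinks; a team that skips this step can end up running a trial with no real chance of detecting the effect it was built to find.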
It doesn’t take many complications, however, to send a trial off the rails. The practical design and implementation of a trial, particularly if it involves anything more than tweaking a letter or webpage, is often complex and fraught with traps. Experienced experimental designers are often invaluable.
I have seen several cases where attrition was ignored, with no understanding of how that attrition differed between treatment and control groups. The treatments were tweaks in website or letter design, but in each case, outcomes were only measured for those who responded. If a trial relates to customers (as some of these trials did), you might not care about the outcomes for those who decided not to choose or use your product. You are willing to live with this experimental problem. But for many public policy issues, understanding the outcomes for the subjects who drop out is often just as critical for assessing the success of an intervention.
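The danger of measuring outcomes only for responders can be made concrete with a toy simulation (entirely made-up numbers, not drawn from any real trial). Here a hypothetical treatment genuinely improves everyone's outcome, but it also draws in responses from a less engaged, worse-outcome group, so the responder-only comparison points the wrong way.

```python
import random

random.seed(42)
N = 200_000  # customers per arm

def run_arm(treated: bool):
    """Simulate one trial arm. 'Engaged' customers respond more often
    and have better outcomes. The hypothetical treatment raises
    everyone's outcome slightly AND raises response among the
    unengaged, changing who we observe."""
    all_outcomes, responder_outcomes = [], []
    for _ in range(N):
        engaged = random.random() < 0.3
        p_respond = 0.6 if engaged else (0.4 if treated else 0.1)
        p_good = (0.8 if engaged else 0.3) + (0.05 if treated else 0.0)
        good = random.random() < p_good
        all_outcomes.append(good)
        if random.random() < p_respond:
            responder_outcomes.append(good)
    return (sum(all_outcomes) / len(all_outcomes),
            sum(responder_outcomes) / len(responder_outcomes))

true_c, resp_c = run_arm(treated=False)
true_t, resp_t = run_arm(treated=True)
print(f"true effect:             {true_t - true_c:+.3f}")  # positive
print(f"responder-only 'effect': {resp_t - resp_c:+.3f}")  # negative
```

In this sketch the true effect is around +0.05, while the responder-only comparison shows a sizable negative "effect," purely because attrition differs between the arms.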
Other teams low on experimental experience simply ran into execution problems. Well-meaning managers shifted their best staff out of the treatment and into the control groups. Highly engaged employees improvised additional improvements to the treatment on the fly. Unscalable support was provided to help treatment-group staff implement the treatment. In each case, the headline measure of the treatment effect was not indicative of the likely benefit of the treatment if rolled out. Yet most of those results—particularly in the government space—made their way into the public sphere as stories of success.
While these cases were largely the result of a lack of experience, there are also some incentive problems. Many teams are consultancies, or operate on consultancy models, within their organization or government. Despite their genuine belief that they are seeking the truth, there is a prize for positive results. The motivation to trumpet a positive result can produce, in the words of one person who engaged a behavioral team, a publication that bore no resemblance to the project the team was engaged for.
Given this, there might also be a case for excluding those with experimental-design skills from the behavioral team, and separating development of the interventions from their assessment. Bring in a disinterested party with strong experimental design skills to design and run the experiments. We are a long way from the debacle that is the Cornell Food and Brand Lab (I hope), but we do have the foxes watching the henhouse.
No behavioral scientists: By this, I mean we have none of the above. No behavioral or experimental economists, no social or cognitive psychologists, and no one who has been through a behavioral science program. This was most typical in government and consultancies, where a few public policy generalists had read Sunstein and Thaler’s Nudge, or Daniel Kahneman’s Thinking, Fast and Slow, and formed a team around the idea. I am all for this approach as a start. You don’t have to be an expert to bring new, valuable thinking to a problem. But the pool of ideas a team of this composition draws on can be shallow. They need to tap expertise.
One such unit became disparagingly known as the “letter writing team,” with their main contribution, beyond inclusion of the occasional social norm, being to rewrite letters in plain English. While not wanting to downplay their contribution—almost every communication produced within the organization was characterized by bureaucratic turgidity—they lacked the depth of knowledge required to identify new ways to tackle the scope of behavioral problems in that organization. Without the ability to solve new problems, they were effectively waiting for publication of ideas by other behavioral teams that they could adapt for use in the organization.
No subject matter experts: This might be no one with health experience on a health project. Or a team tackling tax policy despite no experience in the area. Aren’t the people the behavioral teams are working with supposed to have this knowledge? Yes, but often they don’t, particularly in government departments with a philosophy of rotating generalists across the organization.
Tackling a problem without subject matter experts often leads to a situation not dissimilar to that faced by the economics imperialists. The team can bring in some great new ideas, all of which have the risk of falling apart once you add the interesting domain-specific detail. The execution of these projects absent subject matter knowledge often looks like an academic exercise: have a couple of meetings, go away and research for a few weeks, drop back in with the draft report with some suggested interventions for approval. That report is dead on arrival.
There are ways to overcome this lack of subject matter knowledge. This might be through deep immersion in the process, observations, and ethnography, allowing the team to become the subject matter experts. I have seen some teams do wonderful jobs at this. Unfortunately, not every project offers the required time or budget.
The Demand Side
Before blaming everything on the way we have been building teams, there is a demand driver too. Politicians, bureaucrats, and corporations have also started seeing nails everywhere.
George Loewenstein and Nick Chater suggested that this reflects a narrow interpretation of behavioral insights in policy and research. The focus is on nudges to address behavioral problems, rather than the wider range of possible responses.
In responding to Loewenstein and Chater, Thaler was “mystified about who they think they are arguing with.” After all, Sunstein and Thaler are clear in Nudge that nudges are only part of the tool kit, not a panacea. It would be hard to find someone who would claim otherwise.
But in my experience of on-the-ground execution of behavioral projects, there is a gap between theory and practice. The commonly held understanding that we need more than nudges has not always aligned with practical application.
This is reflected most clearly in the number of requests for tender that ask for a “behavioral economics solution” or “behavioral insights solution” to a problem. By this, they mean nudges. Despite the problem itself often being poorly defined, the behavioral hammer is requested.
In one such instance, a charity requested a “behavioral economics solution” to encourage their grant recipients to meet one of their contractual requirements, namely providing data on the use of the grant. The charity had an easy option: simply not pay until the data were provided, as the contract anticipated would occur. Interviews with grant recipients indicated it was easy for them to provide the data, and they were willing to. But once paid, the recipients simply moved on and didn’t provide the data that they suspected the charity wasn’t even using. (Which it wasn’t. This of course raises solution two: not collecting the data. Or possibly the actual problem here is assessment of grant effectiveness.)
Regardless, there was a reputational benefit within the charity for engaging a behavioral team and running behavioral economics projects. There was also an aversion to more direct—and confrontational—engagement with their grant recipients. This aversion is not uncommon. Risk-averse clients often seek the behavioral solution for its inoffensiveness. For the charity, the result was tweaked communications, a slight increase in data provision, yet no real benefit.
A Richer Approach
The best way to avoid everything looking like a nail is to carry more than a hammer. Build multidisciplinary teams with a wider toolkit. Teams that can draw on economists, behavioral economists and scientists, psychologists, data scientists, neuroscientists, domain experts, sociologists, anthropologists, ethnographers, biologists, systems and complexity theorists, designers, creatives, cynics, and skeptics (and more). In government, this means mixed teams of public-policy professionals, not nudge units.
Given the small size of many teams or organizations, this large, multidisciplinary team (or set of teams) is generally not a possibility. We will almost certainly have gaps. But being aware of these gaps and their possible costs can highlight where we should try to tap alternative expertise.
I am sympathetic to the idea that nudge units were (and often still are) required to overcome a major blind spot in public policy and corporations. But that balance has now been (over)corrected in some places. Where that is the case, it’s time to move on.
That said, we might retain the behavioral economics moniker in some instances. Think of it as advertising, or as branding that opens doors. And once that door is open, make sure it’s open wide enough to let in the full team.