Bill Congdon has spent the last several years at the center of behavioral science and policy in the United States. In 2014, Bill went on loan from ideas42 to become a Fellow and founding member of the Social and Behavioral Sciences Team (SBST), where he worked to incorporate behavioral science into federal policy. Following the SBST, he served as the senior economist for labor and behavioral economics at the Council of Economic Advisers. Now that he is back at ideas42 as chief economist (full disclosure: I work with Congdon at ideas42), he and I connected to discuss his experience applying behavioral science in government. In the interview below, we discuss lessons learned from the Social and Behavioral Sciences Team, the future of behavioral science in government, and how small nudges can inform big policy changes.
DJ Neri: The Social and Behavioral Sciences Team did a lot of work to make existing government programs more effective and efficient. But some commentators note that most nudges are actually used to make existing—and perhaps failing—programs or policies more effective rather than replacing them. Are there any specific examples or opportunities in the future to replace a costly or complicated policy or regulation with a cheaper and more effective nudge?
Bill Congdon: I would think about it less in terms of nudges and traditional policy levers—such as taxes or subsidies—being substitutes for one another, and more in terms of them often being complements. Used well together, they can be more effective than either is alone.
For example, think of the work that Emmanuel Saez and others have done around the presentation of tax incentives for retirement savings, whether they are presented to people as a tax credit or as a match to savings, and how that can affect their effectiveness. In that instance, the underlying program or policy is a traditional tax incentive, and you can make it work more effectively in terms of how you present it.
Even in instances where research suggests that certain nudges or behaviorally informed policy levers look like substitutes for traditional policy levers, that more often leads you down the path of thinking about how you would redesign those policies. I think one of the places where you might see a nudge and a traditional policy lever looking like close substitutes is automatic enrollment: on the one hand, automatic enrollment in retirement savings plans, and on the other, tax incentives. You see studies such as the ones that Raj Chetty and his co-authors did using Danish data suggesting that the actual incentive effect of the tax benefit, at least on the margin, is relatively small and produces little new retirement savings, while also being very expensive for a government to administer. In that instance, default enrollment, which is cheap from the government’s perspective, holding constant the level of the subsidy, is actually relatively effective in generating new retirement savings. Rather than concluding, “We should just automatically enroll everyone and not use tax incentives,” I think that leads you to think, “Maybe we should rethink the form of the transfer.”
In the case of the United States, the tax expenditure for retirement savings is something like $100 billion per year. We don’t have direct evidence like we do in the Danish case, but the incentive effects on the margin might be relatively small. The evidence from the behavioral literature might affect the way you think about designing the transfer. So if you don’t think that a tax deduction is very effective at inducing new savings but you still want to transfer money to individuals to support or supplement their retirement savings, maybe that makes you think about doing so in the form of tax credits or other benefits rather than tax deductions, which are regressive as a subsidy.
DJ: It sounds like nudges are just another tool in a policymaker’s toolkit. Maybe where the field is going is identifying where, in what context, and with which populations nudges can be most effective. I’m thinking specifically of a recent paper by John List et al. finding that a financial rewards program was actually more effective at reducing energy use among low-usage and/or low-variance households than the traditional home energy report nudge. Is the next step identifying which tool works best for which population and moving forward with that approach?
BC: I think it is. One direction you’ve seen the field start to evolve in is thinking about how to target nudges. It’s both the case that nudges are another set of tools in the policy-making or program-operation toolkit, and that the larger sort of approach of thinking about behaviorally informed policy making is inclusive of nudges. But nudges don’t define the entire space. You have another line of research thinking about ways in which behavioral insights, research from behavioral economics, integrates with how we more generally think about policy making and policy design.
DJ: Thinking about the recent Milkman et al. paper, which shows that on a large scale nudging can be a cost-effective way to improve government operations, will behavioral scientists continue to refine their methods of operating at the margins, having small- to medium-sized but meaningful effects, or are there pathways for behavioral science to have an even bigger effect in shaping policy?
BC: This is a case where the answer is “both/and.” Behavioral scientists can contribute to policy in both ways. It’s definitely right that behavioral science and behavioral economics can be integrated into policy making in ways that go beyond nudges—beyond the sort of smaller changes to the way we operate or administer programs that do seem to be cost-effective, as we discuss in that paper, but that in terms of absolute effect sizes are often relatively modest.
But even in a lot of these instances, when behavioral scientists are working with policymakers to implement and test nudges, the fact that a particular nudge works or doesn’t can often illuminate something more fundamental about the mechanism by which a policy is successfully operating or could be improved. And so even in the case of implementing nudges, I think they implicate larger questions related to policy design.
To give you an example, think of the influential work done by Eric Bettinger and his coauthors on providing application assistance to families filling out the FAFSA (Free Application for Federal Student Aid). The intervention in that research was helping people fill out the FAFSA at tax time. A relatively direct policy application of that type of nudge was allowing students to pull information from their tax returns, simplifying the form, and shortening the amount of time it took to fill it out. But in subsequent research and subsequent policy proposals—including bipartisan ones—you also saw people drawing out a further implication: if the complexity of the form was deterring or delaying students from matriculating into college, that probably suggests the underlying targeting criteria are miscalibrated, in the sense that the program requires more information than is necessary to sufficiently target and direct benefits to the students who need them. So from that nudge you get what look like more structural policy implications, and proposals to do things like more radical simplification.
DJ: Let’s turn to a paper by Tannenbaum, Fox, and Rodgers about partisan nudge bias. It essentially says people like nudging in government when they like the policy objective and dislike it when they don’t. When David Cameron, who’s center-right, helped create the U.K.’s Behavioral Insights Team [BIT], he was criticized by those on the left; when President Obama created the Social and Behavioral Sciences Team [SBST] in the U.S., a team with a similar purpose and scope, those on the right were very critical. But it seems that a lot of the work the BIT and SBST do should, in theory, be relatively uncontroversial and bipartisan, as they’re often trying to make existing programs both more effective and more efficient. Have you experienced this kind of “partisan nudge bias” in your time at the SBST? Is there a way to address or overcome it?
BC: I think your premise is correct. Taking a behavioral approach to policymaking doesn’t have any intrinsic partisan valence. It is, in many cases, about making existing programs or policies work better. The behavioral approach is, in a lot of ways, just updating or providing new information on how we think people make decisions and act on them. And to the extent that how well policies meet whatever goals policymakers and decision makers set for them depends on behavioral response, a behavioral approach can inform that.
One thing that you saw with the SBST, and I think this was a bit true for the U.K. BIT as well, is that there is something potentially ominous-sounding about using “behavioral science” to inform policy in the abstract. When people hear that, they can load bad associations onto it or imagine bad applications of it. One of the things SBST experienced, and I think this was also true in the U.K., was that this often dissipates when people see specific applications. When you see that this is about helping students stay in college or helping workers save for retirement, it becomes less objectionable. The other thing that was true at SBST was that transparency was essential: this was not a hidden project. SBST was committed to doing and discussing its work openly and publicly. Reporting and transparency are essential to breaking down some of the objections to this approach.
DJ: Are there instances where nudges might feel to you like a bit of a Band-Aid, one that’s actually delaying a larger-scale change?
BC: Policy-making is, on some level, fundamentally about solving problems. Families lack access to health insurance, workers need support when they lose a job, students struggle to pay for college, and you use the levers available to you to make things better as best you can. Sometimes what’s available is a policy change; sometimes what’s available is a nudge.
DJ: In your time at SBST did you notice any trends in what worked and what didn’t? Did you gain a better understanding of how to apply behavioral science in government?
BC: In terms of trends in what types of nudges either worked or didn’t—for example, was loss framing often successful or not—at the end of the day our N was pretty small for drawing those types of conclusions. It’s really interesting to think about a sort of meta-analysis of this type of work.
That said, while there are different specific implementations, and the specific type of nudge can look different across contexts, one general theme in a lot of this work was that simplification tended to be helpful. There were a lot of instances where, for reasons that made sense as they accumulated, programs had become complex enough that it was hard for individuals to optimize their interactions with them. Hassles accumulated, and so there was effective work to be done in making things simpler, to borrow Cass Sunstein’s framing.
Another trend that emerged from a lot of the work with SBST was a sense of where important behavioral dimensions cut across many different types of policies or programs. This is what you saw articulated in some of the areas of emphasis in the executive order and in some of the accompanying guidance documents issued along with it.
These included the idea that small barriers and complexities can deter people from successfully accessing programs they otherwise qualify for. There are also a lot of instances in which programs are simply trying to communicate with individuals, families, or beneficiaries, and we know things from behavioral science that can help us structure or present information in ways that are more likely to lead to understanding on the part of the audience. How you structure and present choices in programs has a lot to do with how effectively people can make use of them. And where the goal of a policy or program is to provide an incentive to encourage or discourage specific behaviors, we know there’s a rich set of insights from behavioral science for how to make those incentives most effective and efficient.
DJ: The idea of nudging and behavioral insights teams started primarily with behavioral economists and psychologists. As behavioral science grows toward ubiquity, what other fields do you see as having a lot to offer to this work? Is there any shortfall in behavioral economists’ and psychologists’ knowledge that could be filled by other disciplines?
BC: The answer is definitely yes. SBST drew principally on psychologists and economists, but even there we had experts from other fields, including political science. A lot of the effectiveness of groups such as the SBST lies in bringing together policymakers and program officials with behavioral science experts, whatever field they’re from, and engaging in direct collaboration on problem-solving in support of either policy design or program administration. In terms of what makes for an effective composition of that kind of group, one piece is making sure it draws on all of the potentially relevant social and behavioral scientific fields. It’s also important to have those groups work directly with policy- and program-side people.
DJ: Do you see any advantages or disadvantages to a behavioral science governmental unit operating at a more local level? Maybe there’s a better understanding of the context, but on the other hand there might be less potential to have a large impact. Have you thought through any of those tradeoffs?
BC: I think there’s a tradeoff, which is that in many instances, even in federal programs such as SNAP or Medicaid, the delivery of those programs, the point at which individual beneficiaries or families interact with those programs, is often at the state or local level. I think there are opportunities to do a lot of rich work at the state or local level by virtue of that fact. But it remains the case that when you are thinking about some of the larger scale policy implications, it feeds back to, and I think has implications for, policy design at the federal level.
DJ: Given that there’s been a substantial increase in the amount of behavioral science experimentation taking place in the public, private, and nonprofit sectors, do you think that the locus of practical or applied behavioral science knowledge is shifting away from more formal academic research and toward more disparate groups performing their own kind of experimentation and tests of nudges? Do you think that has any positive or negative consequences for the field and for the sharing of knowledge and best practices?
BC: On the one hand, more implementation and more testing can in principle mean more knowledge, and that should be beneficial. It does require the ability to see and aggregate all that work, and the mechanisms and institutions for being able to do that are obviously imperfect. The other thing is that you learn different sorts of things from academic research and from applied policy work. Academic research is in many cases more deliberately trying to test mechanisms and potentially learn more generalizable lessons, and just generally more focused on advancing the frontiers of knowledge. That is different from testing—even testing that uses the same sorts of methods—inside of policies and programs where the focus is often learning what works or what doesn’t.
DJ: What is the future of behavioral science and behavioral science teams in governments? Will we see centralized teams or behavioral experts inserted into different agencies and departments? Do you think that much of the work will be happening at the federal level or will it transition more to state and local governments?
BC: I think part of the utility of having a central team lies in serving a coordinating function, though I don’t know that that has necessarily played out. At the same time, I do think it’s right that you will probably see this approach increasingly adopted, with individuals working in this fashion at specific agencies across government, particularly at the federal level. There are certainly opportunities to do the same style of work in states and at other levels of government. It goes back to the general approach: wherever the effectiveness of policies or programs in meeting the goals set for them by policymakers or society relates back to human behavior, there’s likely to be something from the behavioral perspective that’s relevant for making those programs work effectively. You see that at all levels of government. As for the ways insights are translated from research into programs and policies, whether by central teams or by expert staff in individual agencies, there’s still plenty of room for experimentation and optimization.
This interview was edited for length and clarity.