The “Save More Tomorrow” plan (“SMarT”) is one of the best-known examples of an effective nudge. Through defaults, automatic enrollment, automatic contribution increases, and a limited selection of plans to reduce choice paralysis, it produced enviable gains in 401(k) participation and savings.
But workers of all ages increasingly cobble together multiple part-time (and benefits-free) jobs, and recent Pew research indicates that over 40 percent of Millennials don’t have access to a 401(k). Even among those who do, many are professional serial monogamists for whom frequent 401(k) rollovers are more hassle than boon, with no guarantee that their next employer will offer the SMarT program, or one like it.
Or consider blind auditions, which were initially implemented to address the fact that orchestras were overwhelmingly male. Judged only on ability, thanks to a curtain that hid visual cues to the auditioner’s gender and a rug that masked tell-tale footfalls, women were 50 percent more likely to make the finals, leading to a two- to five-fold increase in female musicians. New digital hiring applications, like Applied, use technology to accomplish similar aims and increase equity.
But once hiring is over, support for maintaining diversity often dwindles. The temptation to mentor those who seem like younger versions of ourselves is strong, and it all too often reinforces the status quo of ethnicity, gender, and even management style. The senior ranks of most institutions are still dominated by men, and even the Boston Symphony Orchestra—the very organization that first introduced blind auditions to increase gender parity—was sued in 2018 by its female first-chair flutist for paying her less than a male counterpart.
Finally: voting. To increase the likelihood of voting, efforts have often focused on the ease (or lack thereof) of making it to the ballot box. This can take many forms, including making voter registration easier or automatic, communicating when and where polling places are open, and helping citizens make a plan to vote.
These are important and worthy efforts. But they presume that people feel their voice will be heard in the first place. United States elections in 2016 and 2018 had relatively “high” turnout. Yet around 40 percent of eligible voters did not cast a ballot in 2016, and more than half stayed home in 2018. For the many American citizens who feel politically disenfranchised, voting may seem more pointless than empowering. Clearly, there’s more to the problem than ease.
SMarT helped some people save more into traditional retirement vehicles, yet many people are exceptions to the rule, and increasingly so. Blind auditions helped increase the number of women who made it in the door, but they do not address barriers to true inclusion and equitable compensation. Appeals to voters that emphasize access assume that voting is considered inherently valuable. 401(k) deductions, hiring processes, and “get out the vote” campaigns lend themselves particularly well to behavioral problem-solving in large part because they are discrete and well-defined. But focusing on these problems without considering the bigger picture can lead to the equivalent of removing splinters when the patient has a broken arm.
So what counts as the “right” kind of problem for behavioral science to solve? Put more bluntly: How might our sense about what we should solve, or even what qualifies as a problem worth solving, be biased by how we think about what we can solve?
We all have blind spots, shaped by the norms we bring with us and our prior experiences. But those biases can have significant implications when they inform what gets funded or built, what data is used to make decisions, and especially which problems we choose to tackle in the first place.
More than we might expect, this is shaped by factors that have little to do with content and more to do with assumptions deeply embedded in the processes and methodologies we use. The ways in which we’re accustomed to solving problems shape what is perceived as a good fit (gauged against the criteria we’ve established as a field), what seems possible (given our go-to methodologies), and even what is conceivable at all (problems incompatible with our mental models may never register). In other words, the nature of inquiry often determines what gets inquired about and what gets left behind.
In the case of behavioral economics, problem identification often starts with a handful of characteristics: a grounding in findings from research studies; a focus on behavioral change; the ability to demonstrate impact, with randomized controlled trials (RCTs) as the “gold standard” of measurement; the potential to scale; and an orientation to helping people achieve what’s in their own best interest (“nudging for good”). On their face, these are all fine attributes. But when we dig a little deeper, we can see that this small set of assumptions leads to a cascade of biases about what we choose to solve for.
Hammers looking for nails?
Over the years, behavioral economics experiments have supplied a rich source of evidence and examples about the nature of cognitive biases. Using these findings as a starting point for problem identification is, on the one hand, completely sound: Why not take advantage of this solid evidence base? But when the search for problems is filtered through the lens of the familiar, we run the risk of disregarding important challenges that sit further afield because they don’t fit what we’re looking for. Starting with results and looking for situations where they apply, or seeking out conditions that fit what we know, can result in solutions looking for problems—holding a hammer and looking for nails to hit—rather than first identifying the broader set of problems that might benefit from a behavioral perspective.
Data and evidence of prior success are also, by their very nature, backwards-looking. We literally have no data about the future because it has not been generated yet. This means we need to be wary of designing solutions that only consider, or are optimized for, current situations at the expense of more nascent or emergent ones. Exclusively considering what’s worked before can also bias us toward well-known problem-solving approaches rather than untested new avenues.
The cautionary tales of businesses that failed to adapt to shifting contexts provide a useful analogy for the dangers of only looking backwards: The best choice architecture in the world would not have prevented Blockbuster Video’s demise at the hands of Netflix when streaming media content displaced brick-and-mortar stores, or rescued Kodak from sinking into oblivion when it moved too slowly to recognize the shift to digital photography.
In a similar way, the changing nature of work and retirement, among other things, begs for behavioral approaches that extend beyond 401(k)s and other traditional investment vehicles. Failing to address this blind spot will likely result in solutions that are ideal for audiences who operate within our current systems but ignore those who lack access, whether by choice or by design. While our 401(k) participants are steadily building a nest egg thanks to programs like SMarT, others who are equally in need of long-term savings assistance remain on the outside looking in. This not only underserves “non-participants” who might also benefit from behavioral attention, but also limits our ability to see opportunities to improve or extend systems to include them.
The need for a systems view
When deciding which problems to solve, we often start with identifying key breakdowns in existing processes or experiences in order to improve them. For behavioral economics, this often takes the form of seeing where people trip themselves up and figuring out where we might encourage better behaviors. In other words: What kind of behavioral change should we try to achieve?
The success of behavioral change interventions is tightly tied to their ability to demonstrate impact: our ability to gather measurable, quantitative proof of outcomes—Was this intervention effective?—as well as proof of causation—Are we confident this intervention created the change? Precisely defined problems and interventions that can be easily isolated from contextual noise or systems effects are ideal for demonstrating both, since fewer variables make it easier to show clear cause and effect. Adjusting specific inputs or processes to correct for cognitive biases, like blinding incoming resumes or auditioners to gender or ethnicity, makes it much easier to compare one mode (unblinded) against another (blinded) and see whether new interventions create real and meaningful change.
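To make that kind of two-mode comparison concrete, here is a minimal sketch in Python, using purely hypothetical counts (not figures from the orchestra studies or any real audition data): a simple two-proportion test of whether the share of candidates advancing differs between an unblinded and a blinded process. The function name and numbers are illustrative assumptions, not part of any published analysis.

```python
# Illustrative sketch only: hypothetical counts, not results from any real audition study.
# It shows the kind of clean two-mode comparison described above: did the share of
# candidates advancing change once the process was blinded?
from math import sqrt, erfc

def two_proportion_ztest(successes_a, n_a, successes_b, n_b):
    """Return (z, two-sided p) for the difference between two proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)      # pooled success rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    return z, erfc(abs(z) / sqrt(2))                        # normal-approximation p-value

# Hypothetical example: 12 of 80 candidates advanced unblinded vs. 27 of 80 blinded.
z, p = two_proportion_ztest(12, 80, 27, 80)
print(f"z = {z:.2f}, p = {p:.3f}")  # a small p suggests the blinded mode really does differ
```

The point is not the statistics themselves; it is that two well-isolated modes and a single, countable outcome make this kind of comparison straightforward in a way that upstream, systems-level questions rarely are.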
In doing so, however, we may focus exclusively on “last mile” challenges and fail to consider upstream causes that have far more variables in play, are often more difficult to isolate, and sometimes require measuring what didn’t happen. Focusing on increasing the number of people who get flu shots, for example, may mean we fail to consider other dimensions of the bigger-picture goal of reducing disease transmission, such as the ways in which vaccinated people’s new sense of security may reduce their use of other important defenses against germs: face masks and regular, rigorous hand-washing.
It can also blind us to unintended consequences. Good-faith efforts to reduce environmental waste by curtailing plastic bag usage for groceries also removed the source of bags that were formerly reused as garbage can liners or for pet waste disposal. As a result, purchases of packaged plastic bags—which tend to be less biodegradable—and use of paper bags—which demand significant tree and water resources to produce—have increased substantially.
While flu shots and plastic bags are examples of seemingly straightforward behavioral changes with further-reaching implications, our bias toward problems that are easily reduced to discrete moments may also keep us from considering behaviors and relationships that shape decisions more dynamically. In an organizational setting, for example, providing mentorship, giving informal feedback, and supporting junior colleagues all contribute to important long-term goals like diversity and employee retention. Yet these activities typically occur informally and sporadically and are considerably more difficult to isolate and measure. Blinding resumes may get the right folks in the door, in other words, but it may not be sufficient to keep them around.
To be clear, this isn’t an argument for dismissing “last mile” solutions, which are valuable, potent, and necessary tools. But when this becomes the only approach—when we focus only on narrow definitions of behavioral change rather than designing for behavior in a more nuanced way—we may limit the larger effectiveness of solutions in the long run. This matters, in part, because when we fail to consider whether the system within which interventions take place is sound and equitable, and how correcting for one kind of change might leave out other important considerations or lead to unintended consequences, we risk optimizing behavior that perpetuates or amplifies broader systemic or cultural biases.
Interventions to gently steer people away from processed foods and toward healthier options, for example, often focus on adjusting the choice-making context, the messaging, or the physical environment. But when cost is a more salient factor than health for people experiencing food insecurity, these nudges may favor those who can readily pay a higher price over those who feel they can’t afford to, even if everyone agrees on the merits of healthy eating. We must recognize when the values and tradeoffs that inform people’s choices—the personal equation of cost, convenience, identity, or wanting to belong—are variable or constrained, and when “good” behaviors are simply easier for some than for others.
Our ability to step back and question inherent system bias will only become more important as we strive to increase the reach and efficiency of behavioral solutions. Many interventions that take advantage of automation in order to scale assume equal access to digital channels, but disparities in that access can shape or amplify who receives interventions and who does not. SMarT, for instance, relies on direct deposit, a non-option for many lower-income individuals, who are more likely to be paid in cash or on prepaid cards. Even seemingly benign digital solutions, like apps that track behaviors to deliver more personalized support or increase our self-control, can feel like surveillance to more vulnerable populations.
Looking inward…and outward
The unprecedented speed with which the nature of work, education, health care, and services is changing will increasingly force us to challenge our own biases when managing uncertainty, and to resist the urge to design only for what we already know or for today’s status quo. But this need to question our assumptions has always been with us. Public policy itself has never been objective; it is crafted by multiple stakeholders, each with their own goals and agendas. And let’s be honest: behavioral scientists—highly educated, professionally successful—are a privileged set. This means that in addition to the biases instilled in our methodologies, we must be especially aware of the ways in which our personal experiences, perceptions, and priorities shape what feels “normal.” Although few would dispute that good health and financial security are desirable ends, the individuals and cultures we’re designing for might not share the same definition of “good health,” are unlikely to make the same tradeoffs to achieve it, and almost certainly don’t operate within the same set of contextual factors.
As any behavioral scientist knows, bias is less a matter of good or bad than simply part of being human, and the value of identifying our own latent assumptions is hardly unique to any one individual or discipline. But given that the field of behavioral economics was established on the very premise that bias plays an underappreciated role in shaping human perceptions and behaviors, we should be especially skeptical that our own backyard is bias-free. After all, behavioral work itself doesn’t occur in a vacuum, but within a context of academic rewards, grants, and career trajectories.
When academic success relies on publications, we naturally gravitate toward research that has a higher chance of being accepted. This can pull us toward certain topics at the expense of others, keep us tethered to well-known and historically embraced ways of thinking, or lead us to invest in niche topics that interest specialists in the field but have little practical value.
To solve problems and suggest solutions on behalf of others is to have power. As a result, we behavioral scientists have a heightened responsibility: being in this privileged position requires recognizing when and where assumptions about “what good looks like” might creep in. When we design interventions—even just determining what options are available, or what the default choice should be—we shape other people’s experiences in ways we may not always fully appreciate. And our decisions to address certain problems while leaving others aside implicitly declare which challenges, and which audiences, we think are worthy of attention.
It’s also worth remembering that behavioral economics can’t—and was never intended to—solve everything, and raising awareness of its methodological biases is not a call to upend what it does well. What ultimately matters is solving consequential problems in the most effective way; this means acknowledging when bias might create blind spots, but also being open to inviting other disciplines to the table. Design, for example, a field whose comfort zone is ambiguity, unknown futures, and systems solutions, may be an excellent partner for applying behavioral findings to the thornier, more dynamic aspects of challenges that behavioral economics is less inclined to wrestle with. There’s certainly no shortage of important human challenges that would benefit from a behavioral lens; solving them well will require both self-awareness within the behavioral sciences and a healthy appetite for cross-disciplinary collaboration.