A decade is a long time in behavioral science. Since the Behavioural Insights Team was established in the U.K. government in 2010, I’ve seen new units, debates, and techniques spread across the globe at breakneck speed. The field is much broader and stronger now than it was when we began, thanks to skill, effort, and painful reckonings.
So when, late last year, I turned to revisit and update the EAST framework, I felt a bit apprehensive. Released in early 2014, EAST has just four main parts and can be easily summarized: if you want someone to do something, make it easy, attractive, social, and timely. For the past 10 years, the EAST framework has been one of the most widely adopted ways of applying behavioral insights in practice.
As I set out to revisit EAST, I wondered: How would this simplicity hold up in a world now wrestling with personalization, generative AI, and complex adaptive systems? Overall, pretty well. But I realized that the more interesting question was why it had.
Answering that question revealed something about the value of simplicity in applied behavioral science. EAST shows how simplicity can offer strength (even though the value of simplicity often gets lost in a tide of glib meretriciousness offering “one weird trick to unlock behavior”). There’s an elegance that comes from a tool that exemplifies its own design principles. And when we learn to appreciate those features in the broader context of applied behavioral science, we gain something else as well. We get a clearer sense of the implicit choices we make when applying behavioral science—and whether these choices need to change.
The push for precision
Let me make a pitch to you.
Human behavior is complex. Many of the challenges that we face—from climate change to crime—emerge from behaviors that are dynamic, interconnected, and often difficult to predict. Contextual factors matter; responses can vary greatly; we need to explore not just “what works” but for whom.
If you buy that argument, the obvious response is to push for more sophisticated approaches to capture this complexity. Multi-site studies, machine learning, adaptive trials, a heterogeneity revolution, moving beyond lists of biases. More elaborate tools to get us better solutions.
From this perspective, a big problem has been the temptation to oversimplify, to flirt with “one weird trick” thinking that identifies a bias and assumes that targeting it will fix the problem. To make blanket claims about human nature without thinking enough about cultural variation. To imply that changing behavior is easy, when actually it’s hard.
This view is pretty compelling and has driven behavioral science forward. I’ve pushed various aspects of it myself. Yet I worry that it misses a couple of important risks.
The first is fragmentation. As a recent paper puts it, “psychological constructs and measures suffer from the toothbrush problem: no self-respecting psychologist wants to use anyone else’s.” So instead researchers invent their own specialized tools—79 percent of which are not reused more than twice, partly because we lack strong theories to knit things together. What looks like greater precision that captures nuance is actually growing fragmentation and confusion.
The second risk is discouragement. This push can lead to models or frameworks with dozens of factors that people are expected to work through in order to find an optimal response to complexity. Although skilled professionals may be able to do this, many busy people facing practical problems will not bother. Aiming for the optimal may therefore lead to a worse outcome. As intuitive as it seems, creating a complicated approach may not be the best response to complexity.
It may be worth considering another perspective.
Simplicity for resilience
“Focus more on usefulness.” That was the conclusion of a recent review of how psychology got so fragmented. In this view, we should judge a framework mainly by how much it’s used. That idea leads you to consider why frameworks like EAST are popular—and there’s a good case that it’s their simplicity. For example, I know that EAST emerged from requests for something even simpler by people already using the MINDSPACE framework.
This line of thinking does something important—it considers the behavior of people using the tools. Arguably, the push for precision has focused too much on the new techniques that behavioral science needs, ironically neglecting the behaviors of the people who are meant to use them. So if we look at how people react to simple frameworks like EAST, what do we discover?
One striking thing about these frameworks is how many are acronyms: EAST, BASIC, FORGOOD, APEASE, and so on. I don’t think this is simply chance or imitation, but rather a response to the realities of applying behavioral science. We may think that practitioners are always implementing precise technical plans—and they certainly try. Yet much of the time they are satisficing or muddling through, trying to make progress with limited understanding in an uncertain world.
From this perspective, these acronyms offer heuristics for understanding heuristics. Of course, that approach creates risks: heuristics can become biases if used the wrong way. But I think we tend to overestimate the risks of this happening because it’s intuitive to think that simplicity must be simplistic. Instead, we need to recognize that simple tools can be used in a sophisticated way—and the key to doing that is to think about design.
Frameworks as design objects
“A main advantage of EAST is that it is easy, attractive, social, and timely—it lives its principles really well.” These words from Jared Peterson have stuck with me. What this insight shows is how the potential uses of EAST are built into its design. Creating acronyms allows easy recall in pressured situations. Keeping the design simple keeps it accessible—which requires judicious choices about which elements to exclude.
I think we should see these as design choices. A central principle of modern design is that objects are created to allow certain uses, often called affordances. For example, a teapot should have some means for hot water to enter and hot tea to exit without harming the thirsty user.
The point is that an object does not need to be made more complicated in order to offer more or better affordances—in fact, it might need to be made simpler. Leidy Klotz has written a whole book on how we are biased towards adding things rather than subtracting them. But design often resists that bias: many famous movements have pushed for a minimal elegance in which form follows function (much like Jared’s point).
Think of the BIC Cristal pen (or what you might think of as simply the ballpoint pen): it’s a tool so simple that it often goes unnoticed, yet so effective that it’s been used by everyone from schoolchildren to designers, artists to bureaucrats. It’s a design classic that is in the Museum of Modern Art in New York because its simplicity allows many uses. It’s cheap, accessible, and can be used to draw squiggles or masterpieces.
If we think about how frameworks are used, design principles show that discerning choices can create simplicity that allows a range of uses. Simple frameworks can be used in a skilled or unskilled way. But that raises a final question: what is that skill?
The role of practical wisdom
The push for precision has focused more on developing techniques than understanding how people use them (or don’t). But this is beginning to change, as people start to document the specific choices and judgments they made when running studies. The fact is, behavioral scientists can’t avoid judgment calls. What’s interesting is that many of those choices are similar to the ones that designers make.
Personally, I realized quickly that a core skill for behavioral science was judging what to exclude. Policy briefs do not work if they are too long. Projects do not move if you are trapped in the literature review. Presenting unrealistic options drains your credibility. This skill is only going to become more important, since generative AI is likely to shift our roles more toward assessing and selecting evidence and ideas rather than producing them.
I think this idea makes some people uncomfortable—they would prefer it if absolutely every aspect of every approach were systematized, nailed down, made explicit. But I think that’s an unrealistic expectation; when you’re dealing with practice, there will always be a role for what Aristotle called phronesis—practical wisdom or know-how—that is difficult to explain fully.
Thinking about practical wisdom doesn’t condemn us to obscure mumblings about “craft.” Behavioral science itself has examined how expert judgment can be explained as non-conscious pattern recognition. So part of our goal should be to understand how we form those abilities, so we can use simple tools in expert ways. Of course, the push for precision can and should continue—but it should continue with a realistic view of how behavioral scientists behave, and with an appreciation for the value of judicious simplicity.
Disclosure: Michael Hallsworth is a member of the BIT which provides financial support to Behavioral Scientist as an organizational partner. Organizational partners do not play a role in the editorial decisions of the magazine.