Recently, a friend of mine shared this simple thought: “My ultimate goal is to change people’s behavior. Behavior change techniques are powerful enough tools. I do not need to know what the brain does.”
It’s true. You can change people’s behavior while ignoring the brain, just as you can write software perfectly well without knowing anything about a computer’s hardware. But are we so sure that knowing what the brain does isn’t important for behavioral scientists? Can neuroscience improve behavioral science? And what is the risk of failing to keep up with the latest developments in neuroscience?
I argue that neuroscience can improve behavioral science in two main areas. First, it can offer an efficient framework for rationalizing the growing list of behavioral biases. Second, it can increase our understanding of people’s behavior and our ability to predict it. In a companion piece, I will argue that the most effective way to change behavior may actually be to change the brain.
The How and the Why
The field of behavioral economics has accomplished much by describing how people’s decisions are systematically biased. For instance, people tend to underestimate how long it will take to complete a task, often ignoring past experience (the planning fallacy), and selectively attend to information that conforms to their preexisting beliefs (confirmation bias).
However, psychologists and behavioral economists struggle to explain why people are biased. For example, the planning fallacy may result from our tendency for optimism and overconfidence. Similarly, confirmation bias is a combination of overconfidence and optimism bias, anchoring (overreliance on information encountered early in a process), and even cognitive dissonance (discomfort with new information that contradicts existing beliefs). But none of these additional behavioral concepts explain why people reject information that contradicts their beliefs.
As Owen D. Jones, a professor at Vanderbilt University Law School, recently observed, explaining one cognitive bias with another is like “saying that rain is caused by a rainstorm.” What causes these biases? The truth is that behavioral scientists have no definite answers—but neuroscientists might.
An increasingly influential hypothesis in neuroscience, known as predictive coding, may offer the best explanation of confirmation bias to date. In a few words, predictive coding is the idea that our brain actively predicts our environment, rather than passively registering it. Prior beliefs or expectations—for example, that London buses are red—shape our current perceptions and beliefs, such that we look for a red shape while waiting for the bus without even being aware of it.
Crucially, by predicting—instead of passively registering—our environment, predictive coding allows our brain to conserve cognitive resources and guide our perception and action in a fast and efficient way. But this also means that what our brain notices and attends to is heavily determined by what we already know.
From this perspective, it is easy to see how predictive coding explains our tendency to spot confirming evidence more readily than disconfirming evidence. And because most of these predictions are performed unconsciously, we are unaware of how our prior beliefs blend with new information from the real world. When it comes to explaining cognitive quirks like the confirmation bias, the brain is basically an engine of prediction.
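The logic can be made concrete with a toy calculation. In Bayesian terms (one common formalization of predictive coding), the same piece of disconfirming evidence barely moves a strong prior belief but substantially revises a weak one. The numbers below are purely illustrative, not estimates from any study:

```python
def update_belief(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: posterior probability that a belief is true,
    given evidence with the stated likelihoods under each hypothesis."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Disconfirming evidence: twice as likely to appear if the belief is false.
weak = update_belief(prior=0.60, likelihood_if_true=0.3, likelihood_if_false=0.6)
strong = update_belief(prior=0.95, likelihood_if_true=0.3, likelihood_if_false=0.6)

print(round(weak, 2))    # 0.43 — the weak belief drops sharply
print(round(strong, 2))  # 0.90 — the strong belief barely budges
```

The same evidence costs the weak belief about 17 percentage points but the strong belief only about 5. Nothing in the arithmetic is irrational; the "bias" falls out of prediction-weighted inference itself.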
Unfortunately, simply continuing to record a long list of behavioral biases will not provide a satisfactory explanation or understanding of these biases. On the contrary, it risks burying the core determinants of human behavior under a cornucopia of highly specific phenomena. Consider the classification of biological organisms, where building a rational and minimal taxonomy represented a milestone in understanding the evolution of plants and animals, including ourselves. Geneticists and molecular biologists, by detailing the inner functioning of organisms, were instrumental in building an efficient theoretical framework for the life sciences.
It is a hard pill to swallow admitting that we behavioral scientists are still at an early stage with our “zoo of behavioral biases.” Our current stage is similar to when Darwin was first observing different types of finches in the Galapagos Islands. There were many interesting variations, which were curiosities in and of themselves, but it was not until he produced the theory of natural selection that these variations fit a rational theoretical framework. Behavioral science has yet to articulate its theory of natural selection, but neuroscience may be able to guide it.
By using neuroscience to prune behavioral concepts to relevant brain substrates, we can rationalize the zoo of biases. The outcome would be a simpler framework, with a map of behaviors observed in different situations linked to core cognitive functions. Such simplification has already begun and could both help communication among behavioral scientists and lead fundamental and applied research in new directions.
Putting Neuroscience to Use
Neuroscience can help to rationalize the zoo of behavioral biases, but does it have practical value for behavioral science practitioners? The evidence suggests that it does.
For instance, by studying the way brains change as we age, neuroscientists can help address one of the major challenges for the next generation of behavioral scientists: how to target behavioral interventions for the vastly different brains of people of different ages, cultures, and socioeconomic levels.
Our brain changes constantly, from the earliest stages of childhood to adulthood and senescence. It takes between 12 and 17 years for a human brain to mature, but not all brain regions develop at the same pace. Brain structures implicated in decision-making, notably, are among the last to mature and the first to be affected by senescence.
In this regard, a huge limitation in current behavioral investigations is that behavioral scientists typically target only the young adult brain.
Recently, it has been noted that behavioral scientists tend to draw their conclusions from samples taken entirely from Western college students, a narrow slice of human variation. Why is this group so heavily studied? First, college students represent a convenient and cheap target group for academics. Second, targeting different groups with the same behavioral test is difficult, perhaps impossible. Behavioral tests (for example, “Do you prefer $20 now or $100 in three months?”) could be understood in a very different way by a 12-year-old than by a 65-year-old.
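To see what such a test measures, consider hyperbolic discounting, a standard model of intertemporal choice: a delayed reward's present value is V = A / (1 + kD), where k captures how steeply an individual discounts the future. The k values below are hypothetical, chosen only to show how the same question can split respondents:

```python
def present_value(amount, delay_days, k):
    """Hyperbolic discounting: V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

# "$20 now or $100 in three months (~90 days)?"
steep = present_value(100, 90, k=0.10)    # steep discounter: $100 feels like $10
shallow = present_value(100, 90, k=0.01)  # shallow discounter: $100 feels like ~$53

print(steep < 20)    # True  — prefers the $20 now
print(shallow > 20)  # True  — prefers waiting for the $100
```

Two people answering differently may simply have different k values, which is exactly the kind of parameter a neural index could track across age groups without relying on how each group interprets the question.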
To assess differences among individuals, one objective alternative is “neural indexes.” Neural indexes are brain signatures of specific behaviors. Modern neuroscience has demonstrated that we can now use neural indexes to spot behavioral biases in different populations. Many cognitive biases (like risk aversion, the endowment effect, or framing effects) have already been reduced to specific brain structures or networks, enabling neuroscientists to expand the samples to people of different ages.
With this new understanding of how our brains change with time, neuroscientists can compare cognitive biases across ages. Is a 28-year-old more or less risk averse than a 70-year-old, on average? Neural indexes can help answer this question by assessing the age at which each cognitive bias arises or is most powerful.
How might this knowledge benefit behavioral scientists looking to influence behavior? Given that age-related differences in the brain can predict age-related differences in behavior, behavioral scientists could tailor interventions to their specific population of interest.
Moreover, these insights make it possible to pre-empt cognitive biases before they take hold. Indeed, one particularly promising direction is to focus on behavioral interventions targeted at young children. For example, Nudging for Kids helps children avoid forming cognitive biases and nudges them toward positive behaviors.
Behavioral scientists already know that living in poverty decreases cognitive function, making it harder to think about the more distant future, and that social status and income affect brain development. But now neuroscientists have confirmed that poverty affects the early stages of brain development, making it almost impossible for children to close the gap with their more affluent peers. Interventions that target the brain during these early stages are more crucial than ever.
As that friend of mine would say, you may well be able to change behavior without knowing anything about the brain. But, as we will see in the next piece, you do so at your own risk.