Scaling Nudges with Machine Learning

There is a particular tension at the heart of applied behavioral science: We make decisions based on our unique circumstances and context, and yet we also desire solutions that can help people make better decisions on a massive scale. When I talk about this conflict inside the walls of a bank (I’m head of behavioral design at Capital One), I often describe it as “math versus snowflake.”

Everyone is a snowflake. We view ourselves—our circumstances, our values, the stability of our employment, our financial identity—as completely unique. And yet, despite our uniqueness, it’s also true that there are a finite number of clear, prescriptive methods for bringing about certain behaviors—the math. If we want someone to save more, we can examine their income, expenses, debt, savings, and employment, and it becomes clear what we should prescribe.

While my qualitative research provides strong evidence that everyone is unique in important ways, I’ve had many debates with colleagues who emphasize that you “can’t outrun the math.” Both conclusions are right. Any prescribed steps a person may take to improve their financial situation must both be mathematically sound (they must lead to a desired outcome) and account for what is unique (or perceived as unique) about each person.

And therein lies the challenge: How do we nudge a snowflake to follow the math? And then, how do we sustainably nudge a million snowflakes? The answer may be machine learning.

At its simplest, machine learning consists of training an algorithm to find patterns in data. By identifying patterns, the algorithm makes predictions and then learns from the outcomes of those predictions to make (hopefully) even more accurate ones. Importantly, special algorithms allow a machine to improve on its own, instead of requiring us to draw conclusions about the data and then manually reprogram the machine. Machine learning improves our ability to predict which person will respond to which persuasive technique, through which channel, and at which time. It recognizes that each snowflake is unique and acts accordingly, on a massive scale.
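To make the pattern-finding concrete, here is a minimal sketch: a toy model that tallies, per customer segment, how often each persuasion technique produced a response, then predicts the best technique for a new member of that segment. The segment names, technique names, and data are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical observations: (customer_segment, persuasion_technique, responded?).
# All names and outcomes here are invented for illustration.
observations = [
    ("young_saver", "social_proof", True),
    ("young_saver", "social_proof", True),
    ("young_saver", "authority",    False),
    ("retiree",     "authority",    True),
    ("retiree",     "authority",    True),
    ("retiree",     "social_proof", False),
]

def train(observations):
    """Tally successes and trials per (segment, technique)."""
    stats = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for segment, technique, responded in observations:
        counts = stats[segment][technique]
        counts[1] += 1                 # one more trial
        counts[0] += int(responded)    # one more success, if any
    return stats

def predict(stats, segment):
    """Predict the technique with the highest observed response rate."""
    rates = {t: s / n for t, (s, n) in stats[segment].items()}
    return max(rates, key=rates.get)

model = train(observations)
print(predict(model, "young_saver"))   # social_proof
print(predict(model, "retiree"))       # authority
```

A real system would use far richer features and a proper learning algorithm, but the shape is the same: observed responses in, a per-person best guess out.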

The closer technology is to us, the more it’s about us

It’s worth thinking about why machine learning could be extremely valuable—and maybe even necessary—when nudging for good. Thanks to technology, we are moving from an age in which products and services connect us to and better manage our things (music, money, email, friends) to an age in which products and services are explicitly designed to help us achieve behavior-based goals. In other words, we are moving from the utility age to the augmentation age. Once a financial institution helps people protect, move, and access their money, it can help people behave in ways that are consonant with their wishes: staying on budget, building their credit, or saving for retirement, among many others. All of these things are about behaviors people can perform rather than the utility of money. They are about augmenting people’s ability to act in ways they know they should. They are about helping them be their best selves. I refer to this as the behavior change value proposition, where the core value exchanged isn’t the utility itself, but the outcome of behavior-based changes.

The implication is that if people invite positive influence, we need to scale our ability to influence them not just through marketing and acquisition, but through the entire product or service lifecycle, based on a value proposition of behavior change. It’s a two-part challenge: first, augment the person’s rational self by providing a decision engine that, using machine learning, finds insightful patterns in their life and prescribes the next step. Then, account for the person’s irrational self: identify the unique combination of biases and heuristics they use to make decisions, and protect them from those biases.

Money coaches, activity trackers, and smart thermostats are obvious examples of this move from utility to augmentation, but soon that shift will permeate everything. A more recent and subtle example is a simple tool within Google Calendar: Google Goals. Google Calendar is one of the ultimate digital utilities, centered on managing an information object: your events. Now that the calendar lives with you on the go, aware of your events, your location, your interests, and perhaps more, it can use this information to optimize your free time to help you do the things you want to do. It has shifted from managing your events to optimizing your free time around those events to help you achieve your goals.

This shift from utility (managing events) to augmentation (optimizing my time to achieve goals) has the power to help people pursue something they want to do but feel they may not have the time, money, physical power, or mental energy to do. The reason for the shift is simple: The closer technology is to us physically, the more it becomes about us. 

Knowing Your Snowflake

There are a number of techniques—or behavior principles, or persuasion strategies—for influencing behavior. There’s been quite a bit of research matching specific persuasion techniques to the individuals most likely to respond to them. Often called persuasion profiling, this research has shown that certain strategies work well for some people while other strategies work better for others. The challenge in scaling persuasion is matching the right technique to each individual in a population of thousands—or millions—of people.

The research on scaling and matching has two interesting findings. First, if someone does not respond to a particular technique now, they are unlikely to respond to it tomorrow. In other words, persuasion profiles are fixed. Second, choosing a single well-matched technique is more effective than combining multiple techniques.

This makes it imperative that we get better at targeting the right nudges to the right people, in the right context. More importantly, it highlights that to truly sustain these nudges, we must create a self-optimizing system.

The Machine Learning Leap

Our ability to use machine learning to scale nudges depends on giving feedback to the machine. This is how we make the leap from a manual process of deploying persuasion to a sustainable system of dynamically deploying persuasion.

Without machine learning, we’ve employed predictive analytics to target marketing messages fairly consistently, but it’s still a manual process: digging into the data, looking at outcomes, changing the models to increase probability, and then “redeploying” the machine programming. In other words, we must constantly make best guesses about which type of nudge will have the best result, and then determine how to best target that nudge.

If the machine can sense the outcome—specifically, if it is aware of the behavior that a person performed—it can add that as an additional training example that informs its model. The “model” is a best guess that this particular person will respond to this particular persuasion technique in this particular channel at this particular time. When it gets new outcomes, it can fine-tune its own parameters to increase the accuracy of its predictions.
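One minimal way to sketch this feedback loop is an epsilon-greedy policy (a technique the article doesn’t name, swapped in here as an assumption): the system mostly serves its current best guess, occasionally explores an alternative, and folds every sensed outcome back into its estimates. Technique names are invented.

```python
import random

class NudgePolicy:
    """Toy sketch of a self-updating nudge model: each sensed outcome
    becomes a new training example, and the estimates adjust without
    manual reprogramming. Technique names are illustrative."""

    def __init__(self, techniques, epsilon=0.1):
        self.techniques = list(techniques)
        self.epsilon = epsilon               # how often we explore at random
        self.successes = {t: 0 for t in self.techniques}
        self.trials = {t: 0 for t in self.techniques}

    def rate(self, technique):
        """Current estimated response rate for one technique."""
        n = self.trials[technique]
        return self.successes[technique] / n if n else 0.0

    def choose(self):
        """Mostly exploit the current best guess; occasionally explore."""
        if random.random() < self.epsilon:
            return random.choice(self.techniques)
        return max(self.techniques, key=self.rate)

    def record(self, technique, responded):
        """Fold a sensed outcome back into the model's estimates."""
        self.trials[technique] += 1
        if responded:
            self.successes[technique] += 1
```

With epsilon set to zero the policy becomes purely greedy; in practice some exploration is needed so the system keeps testing its own assumptions about each person.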

This ability for a system to update the accuracy of its predictive nudging is the machine learning leap. Take, for instance, the prototypical nudge example of choice architecture: rearranging the cafeteria to nudge diners towards healthier choices. Data on the effectiveness of your choice architecture will be indirect—you may see what is being purchased and can infer that people are making healthier choices. With employee ID badges, you could even see what choices people are making on an individual level. But there’s a lot of data to collect and analyze in order to improve the cafeteria architecture.

Now imagine that Amazon’s prototype of a checkout-free convenience store has become a widespread reality. We can apply this combination of computer vision, data collection, and machine learning to our cafeteria. Our “choice architecture machine” can then make and update its own predictions about what is effective. It could even alter the arrangement of the cafeteria—changing labels and the placement of choices—and optimize itself, all in service of increasing the probability of the desired outcome: healthier choices.
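Assuming the store can log each diner’s choice automatically, the choice architecture machine reduces to the same outcome-feedback idea applied to layouts: track how each arrangement performs and keep (or iterate from) the best one. Arrangement names and outcomes below are invented.

```python
# Each record pairs the arrangement in effect with whether a diner
# made a healthy choice. Arrangements and outcomes are invented.
observed = [
    ("fruit_at_checkout",   True),
    ("fruit_at_checkout",   True),
    ("fruit_at_checkout",   False),
    ("dessert_at_checkout", True),
    ("dessert_at_checkout", False),
    ("dessert_at_checkout", False),
]

def healthy_rate(outcomes, arrangement):
    """Fraction of diners who chose healthily under one arrangement."""
    results = [healthy for a, healthy in outcomes if a == arrangement]
    return sum(results) / len(results)

def best_arrangement(outcomes):
    """The arrangement the machine would keep, or iterate from, next."""
    arrangements = {a for a, _ in outcomes}
    return max(arrangements, key=lambda a: healthy_rate(outcomes, a))

print(best_arrangement(observed))   # fruit_at_checkout
```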

We typically treat our experiments as horse races. We take a population, run an intervention to see which prompt or nudge works best, and then double down on the winner and scale that nudge. Do people, on average, care more about what their peers did? Or do they, on average, care more about what an authority figure thinks? Alternatively, we throw multiple strategies at people at the same time, hoping that one of them works.
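The gap between the horse-race approach and per-person matching can be made concrete with made-up response probabilities: the on-average winner leaves value on the table whenever a minority responds better to a different technique.

```python
# Made-up response probabilities for three people and two techniques.
population = [
    {"peer_comparison": 0.6, "authority": 0.2},
    {"peer_comparison": 0.6, "authority": 0.2},
    {"peer_comparison": 0.1, "authority": 0.5},
]

def horse_race(population):
    """Everyone gets the single technique that wins on average."""
    techniques = population[0].keys()
    winner = max(techniques, key=lambda t: sum(p[t] for p in population))
    return sum(p[winner] for p in population) / len(population)

def personalized(population):
    """Each person gets the technique they individually respond to best."""
    return sum(max(p.values()) for p in population) / len(population)

print(round(horse_race(population), 2))    # 0.43
print(round(personalized(population), 2))  # 0.57
```

Here the horse race crowns peer comparison, so the third person, who responds mainly to authority, is nudged with the wrong technique.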

Practitioners and behavioral scientists have decades of research on how people behave and make decisions, but we’re only now figuring out how to practically apply this knowledge to products and services at scale in order to positively influence people. Behavioral practitioners aren’t just intervening any more—people are self-selecting to be nudged. With machine learning, we can sidestep a debate in psychology about whether personality (fixed, stable traits) or situation (the contexts and environments in which people find themselves) better predicts behavior. Nudging at scale with machine learning models can detect both person-specific and situation-specific patterns, so that the right balance can be struck.

Just as unlocking the human genome helped identify genetic traits that allow for personalized medical advice, we can think of machine learning as the next step toward unlocking a “behavior genome.” By factoring in personality traits, situational features, and timing, we can better persuade people who want to be persuaded.

Further Reading & Resources

  • Fogg, B. J. (2011). Persuasive Technology: Using Computers to Change What We Think and Do. Amsterdam: Morgan Kaufmann.
  • Guszcza, J. (2015). The last-mile problem: How data science and behavioral science can work together. Deloitte Review, (16).
  • Farley, S. (2017). Nonprofits, not Silicon Valley startups, are creating AI apps for the greater good.
  • Davenport, T. H. (2014). A predictive analytics primer. Harvard Business Review.
  • Rosenfeld, A., Zukerman, I., Azaria, A., & Kraus, S. (n.d.). Combining psychological models with machine learning to better predict people’s decisions. 1–28.