Annie Duke, author of Thinking in Bets, former professional poker player, and the cofounder of The Alliance for Decision Education, is out with a new book: How to Decide: Simple Tools for Making Better Choices.
I recently spoke with Annie about How to Decide. She explained to me why she argues that we should speed many of our decisions up, why paying attention only to results and not the process skews our assessments—a mistake she’s dubbed “resulting”—and the challenges of decision-making when it becomes a team game. Our conversation below has been edited for length and clarity.
Dave Nussbaum: Unlike many other books that translate research into engaging, digestible insights, this book seems particularly focused on giving people something they can actually use. For example, your book is full of exercises—what are you hoping to accomplish with that?
Annie Duke: Just like when I put out Thinking in Bets, I don’t think I’m saying anything new. I’m translating research for people who aren’t reading academic articles, or maybe who aren’t reading some of the high-level popular literature. I’m trying to bring it to as wide an audience as I can without simplifying it too much.
With this book I tried to take that as far as I could, and it came from having conversations with people about Thinking in Bets. They told me that they had bought into behavioral psychology—they understood that people aren’t super-rational and making logical decisions at every turn. But what they asked me is: What do I do about that? How do I make the big decision? Or, because in Thinking in Bets I make a big deal about “resulting,” How do I not do that? So I thought, let me see if I can write something that’s really going to ground this for people and walk them through what a really good decision process looks like. It turns out it’s hard.
When you say sound decision-making is hard, what are the challenges?
There can be a lot of hand-waving. Just like with optical illusions, knowing about a bias doesn’t make it go away. Just saying you should pay attention to it lets you off the hook without offering a real solution. Here, I’m walking people through how to make a decision, and that turns out to be a hard thing to do. I was going through this discovery process where you think you know something really well, but when you actually try to explain it to somebody it turns out it’s not so obvious.
The book provides the reader with tools for better decision-making. Sometimes, better decision-making doesn’t mean carefully scrutinizing decisions from every possible angle but moving through decisions quickly and not getting bogged down in “analysis paralysis,” especially when there’s not all that much to be gained at the margins. Tell me a bit more about that.
One of the things I recommend in the book is that you should speed your decision-making up most of the time. When you’re spending 15 minutes on a menu, why are you doing that? Just try something.
So I take the “menu strategy” to most decisions. The idea is that the heavy lifting is in the sorting: this is an option I like; this is an option I don’t like. Where we get into a lot of trouble is that once we get down to the set of things we’d be willing to choose, we fool ourselves into thinking we can pick between them, and in almost every case we can’t.
Yet, we devote most of our time to the picking. If you do think you are good at picking, you’re probably fooling yourself. Mostly you just have to narrow down your options and then try something. That’s the point of a menu. Once you’ve eliminated the things that sound bad or you’re allergic to, you just have to try the dishes. You’ve chosen the restaurant, now pick anything off the menu that looks good.
You recently put out a paper with Cass Sunstein about “freerolls,” which is a poker term for when you have a decision with only upsides and no downside. Tell me why that’s a useful concept.
What’s interesting about a freeroll is that your information can be asymmetrical. As long as you know the downside of a decision (and it’s zero, or very little), you don’t need to worry about the upside. The reason that’s really important is because our information often is asymmetrical.
I’ll give you an example: face masks. Masks have been used for a long time, like in surgery, and so we understand what the downside of mask wearing is: virtually none. Now that we’re applying them in a novel context, like COVID-19, we don’t immediately know what the upside is, but that’s okay because there’s no downside. If it works, then great. But if it doesn’t, we haven’t lost anything.
Going back to what we discussed about picking options off a menu: When you’re picking between two good choices, that’s also essentially a freeroll, because there’s no downside.
Let’s take an example from the “before” times: you’re thinking about visiting Paris or Rome. Why do I call that kind of decision a “hidden” freeroll? It’s because you can’t go wrong whichever one you choose.
That unlocks a really important decision principle: when a decision is really hard, it means it’s easy. There are two things that can make a decision hard. One is that you don’t have enough information or you don’t fully understand your values; let’s set that aside. The other is that the options are really close, and you can’t decide between them.
Here the decision is really easy because it doesn’t matter too much which you choose; you should just be flipping a coin. And even if you had the cognitive acuity to parse the two decisions apart, it’s kind of a waste of your time. Read a book!
Another really interesting concept that you tackle is the paradox of experience—how does that work?
It’s not very controversial for me to say that the way we become better decision makers is through experience, but the problem is that individual experiences can actually interfere with our ability to learn. So in the aggregate, it’s true—if we could sit back and look at our experiences as a whole we would learn a lot. It wouldn’t be perfect, but pretty good.
But that’s not how our minds work; we process things sequentially. We update our priors either way too much or way too little. Sometimes we have an experience that actually is informative (for example, when our expectations are violated) and we should update our priors, but we don’t, because of confirmation bias. Or we make a decision, get one good outcome out of it, and decide to repeat that decision for the rest of our lives. Or we get a bad outcome and we say, “Never again.”
And that sets up the paradox of experience, which is that experience is necessary for learning but it is not sufficient. Any individual experience may actually interfere with your ability to learn. It’s only when you’re able to learn from a large set of experiences that you’re able to learn from them properly.
In our conversation, Annie and I had the chance to cover a lot of ground. There’s a final area of insight from her book that I’d like to leave you with—the idea that decision-making often isn’t a solo activity.
One essential part of making good decisions, Duke argues, is what she calls “getting the outside view.” You have to solicit feedback to get other people’s perspectives, which doesn’t sound all that groundbreaking, but the key point she makes is that it’s not actually that easy and we often do it wrong.
You have to find a way to make sure people aren’t just telling you what they think you want to hear. But even if you clear that obstacle, there’s another mistake that we make: we tell other people what we think first, which contaminates their views, so we should strive to let them answer first, quarantined from our own opinions.
Another issue, when decision-making becomes more of a team sport, is that when we try to make collective decisions, we often use imprecise terms. Two people using the term “pretty likely” could have two very different probabilities in mind. So when we think we’re on the same page we can sometimes be miles apart. When we state what we mean precisely we not only avoid misunderstandings but also create accountability.
At the same time, it’s important not to let accountability backfire. You need to hold people accountable, but not for the result of their decisions, only for what they knew (or should have known) at the time. After all, betting even money on an option with a 75 percent chance of success is a smart play, but you still lose 25 percent of the time. That doesn’t make it a bad decision.
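To make that arithmetic concrete, here is a minimal sketch of the expected value of that bet. The 1-unit stake is a hypothetical choice for illustration; the source specifies only an even-money bet with a 75 percent chance of success.

```python
# Expected value of an even-money bet with a 75% chance of success.
# The 1-unit stake is hypothetical, chosen just to illustrate.
p_win = 0.75
stake = 1.0

# Even money: a win pays +stake, a loss costs -stake.
expected_value = p_win * stake + (1 - p_win) * -stake

print(expected_value)  # 0.5 units per bet on average,
                       # even though the bet loses 25% of the time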
But, if you’re worried that the decision “postmortem” could single you out for blame, then you have an incentive to make safe decisions that you’re unlikely to get blamed for. That translates into two tendencies that you don’t want to encourage: one is to stick with the status quo, because you’re less likely to be blamed for that than for trying something new. The other is going along with the consensus, because it’s easier to avoid accountability if you’re in the same boat as everyone else than if you strike off on your own. Good decision-making is often collective, but even then some practical tools can make it better.