Redefining Wrong in Poker, Politics, and Beyond

Annie Duke was a Ph.D. student in the psychology department at the University of Pennsylvania before her education veered in an unusual direction (at least for most psychology graduate students): She became one of the world’s top poker players. In her new book, Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts, Duke combines her experiences at the poker table, and the lessons she learned playing against the world’s best players, with key insights from research in psychology. After all, what is poker if not judgment under uncertainty? In the excerpt below, Duke explains how poker teaches you to recognize that the right decision doesn’t always lead to the outcome you were hoping for, and why you should get comfortable with that. — Dave Nussbaum, Managing Editor

When I attend charity poker tournaments, I will often sit in as the dealer and provide a running commentary at the final table. The atmosphere at these final tables is fun and raucous. Everyone running the event has had a long night and is breathing a sigh of relief. There is typically a big crowd around the table, including friends and families of the players, rooting for them (or vocally rooting against them). If people have been drinking, then…people have been drinking. Everyone is having a good time.

When players have put all their chips in the pot, there is no more betting on the hand. After an all-in situation, the players in the hand turn their cards faceup on the table so that everyone can see them before I deal the remaining cards. This makes it fun for the audience: they get to see where each player stands in the hand, and the drama mounts. With the cards faceup, I can calculate the likelihood that each player will win the hand and announce the percentage of the time each hand will win in the long run.

At one such tournament, I told the audience that one player would win 76 percent of the time and the other would win 24 percent of the time. I dealt the remaining cards, the last of which turned the 24 percent hand into the winner. Amid the cheers and groans, someone in the audience called out, “Annie, you were wrong!” 

In the same spirit that he said it, I explained that I wasn’t. “I said that would happen 24 percent of the time. That’s not zero. You got to see part of the 24 percent!”

A few hands later, almost the same thing happened. Two players put all of their chips in the pot and they turned their cards faceup. One player was 18 percent to win and the other 82 percent to win the hand. Again, the player with the worse hand when they put in their chips hit a subsequent lucky card to win the pot.

This time that same guy in the crowd called out, “Look, it was the 18 percent!” In that aha moment, he changed his definition of what it meant to be wrong. When we think in advance about the chances of alternative outcomes and make a decision based on those chances, it doesn’t automatically make us wrong when things don’t work out. It just means that one event in a set of possible futures occurred.

Look how quickly you can begin to redefine what it means to be wrong. Once we start thinking like this, it becomes easier to resist the temptation to make snap judgments after results or say things like “I knew it” or “I should have known.” Better decision-making and more self-compassion follow.

The public at large is often guilty of making black-and-white judgments about the “success” or “failure” of probabilistic thinking. When Great Britain voted to leave the European Union (“Brexit”) in June 2016, it was an unlikely result. Betting shops had set odds heavily favoring a vote to remain. That does not mean the betting shops had an opinion that remain would win the day. The goal of the bookmaker is to make sure the amount of money bet on either side is roughly equal, so that the losers essentially pay the winners while the bookmaker just takes its fee. The bookmaker aims to have no stake in the outcome and adjusts the odds accordingly. The bookmaker’s odds reflect the market’s view, essentially our collective best guess of what is fair.
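For readers who want to see the arithmetic behind that last point, here is a minimal Python sketch of how fractional odds convert into implied probabilities. The odds used are hypothetical, chosen only to illustrate the calculation; they are not the actual prices quoted before the referendum.

def implied_probability(numerator, denominator):
    # Fractional odds of numerator/denominator against (e.g., 3/1) imply a
    # win probability of denominator / (numerator + denominator).
    return denominator / (numerator + denominator)

# Hypothetical book: 1/4 on "remain", 3/1 on "leave".
remain = implied_probability(1, 4)   # 0.80
leave = implied_probability(3, 1)    # 0.25

# The raw probabilities sum to slightly more than 1; the excess is the
# bookmaker's margin. Normalizing removes it and leaves the market's view.
total = remain + leave               # 1.05
print(f"Remain: {remain / total:.1%}, Leave: {leave / total:.1%}")

Under these made-up prices, the market’s collective guess works out to roughly 76 percent remain and 24 percent leave. The point is that the normalized numbers express a consensus probability, not the bookmaker’s own opinion.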


That didn’t stop even sophisticated people from resulting, declaring after the vote came in for “leave” that the bookmakers had made a mistake. The chief strategist at one Swiss bank told The Wall Street Journal, “I can’t remember any time when the bookies were so wrong.” One of America’s most famous lawyers and professors, Alan Dershowitz, made the same error. Asserting in September 2016 that the Clinton-Trump election was too difficult to make any predictions about, he said, “Think about the vote on Brexit. Virtually all the polls—including exit polls that asked voters how they voted—got it wrong. The financial markets got it wrong. The bookies got it wrong.”

Just like my spectator, Dershowitz missed the point. Any prediction that is not 0 percent or 100 percent can’t be wrong solely because the most likely future doesn’t unfold. When the 24 percent result happened at the final table of the charity tournament, that didn’t reflect inaccuracy about the probabilities as determined before that single outcome. Long shots hit some of the time. Blaming the oddsmakers or the odds themselves assumes that once something happens, it was bound to have happened and anyone who didn’t see it coming was wrong.

The same thing happened after Donald Trump won the presidency. There was a huge outcry about the polls being wrong. Nate Silver, the founder of FiveThirtyEight, drew a lot of that criticism. But he never said Clinton was a sure thing. Based on his aggregation and weighting of polling data, he had Trump between 30 percent and 40 percent to win (approximately between two-to-one and three-to-two against) in the week before the election. An event predicted to happen 30 percent to 40 percent of the time will happen a lot.
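As a rough illustration of why a 30 to 40 percent forecast “will happen a lot,” consider this short Python sketch. The probabilities come from the paragraph above; the number of repeated contests is an arbitrary assumption for illustration.

def at_least_one_upset(p_underdog, contests):
    # Chance the underdog outcome occurs at least once across
    # independent contests, each with the same underdog probability.
    return 1 - (1 - p_underdog) ** contests

for p in (0.30, 0.40):
    for n in (1, 3, 10):
        print(f"{p:.0%} underdog, {n} contest(s): "
              f"at least one upset {at_least_one_upset(p, n):.0%} of the time")

A 30 percent shot comes in about once every three tries, and across ten independent contests at those odds it would be surprising if no upset happened at all.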

Being a poker player, I’ve played out more two-to-one shots in my tournament career than I could possibly count. A lot of those have been situations where the tournament was on the line for me. If I lose the hand, I’m out of the tournament. If I win, I earn a huge pot, maybe even winning the entire tournament. I know viscerally how likely 60–40 and 70–30 favorites are to lose (and, of course, the opposite). When people complained that Nate Silver did his job poorly because he had Clinton favored, I thought, “Those people haven’t gotten all their chips in a pot with a pair against a straight draw and lost.” Or, more likely, they’ve had those things happen throughout their lives and didn’t realize that’s what 30 percent or 40 percent feels like.


Decisions are bets on the future, and they aren’t “right” or “wrong” based on whether they turn out well on any particular iteration. An unwanted result doesn’t make our decision wrong if we thought about the alternatives and probabilities in advance and allocated our resources accordingly. It would be absurd for me, after making a big bet on the best possible starting hand (a pair of aces) and losing, to spend a lot of time thinking that I was wrong to make the decision to play the hand in the first place. That would be resulting.

When we think probabilistically, we are less likely to use adverse results alone as proof that we made a decision error, because we recognize the possibility that the decision might have been good but luck and/or incomplete information (and a sample size of one) intervened.

Maybe we made the best decision from a set of unappealing choices, none of which were likely to turn out well.

Maybe we committed our resources on a long shot because the payout more than compensated for the risk, but the long shot didn’t come in this time.

Maybe we made the best choice based on the available information, but decisive information was hidden and we could not have known about it.

Maybe we chose a path with a very high likelihood of success and got unlucky.

Maybe there were other choices that might have been better and the one we made wasn’t wrong or right but somewhere in between. The second-best choice isn’t wrong. By definition, it is more right (or less wrong) than the third-best or fourth-best choice. It is like the scale at the doctor’s office: there are a lot more possibilities than just the extremes of obesity and anorexia. For most of our decisions, there will be a lot of space between unequivocal “right” and “wrong.”

When we move away from a world where there are only two opposing and discrete boxes that decisions can be put in—right or wrong—we start living in the continuum between the extremes. Making better decisions stops being about wrong or right but about calibrating among all the shades of grey.


Redefining wrong is easiest in situations where we know the mathematical facts in advance. In the charity-tournament final-table example with the players’ cards faceup, or when I get all my chips in with the best possible starting hand, the hidden information is removed. We can make a clear calculation. If we have that unquestionably right and make an allocation of resources (a bet) on the calculation, we can more naturally get to “I wasn’t wrong just because it didn’t turn out well and I shouldn’t change my behavior.” When the chances are known, we are tethered more tightly to a rational interpretation of the influence of luck. It feels a little more like chess that way.

There is no doubt it is harder to get there when we add in hidden information on top of the influence of luck. Untethered from seeing what the coin actually looks like, we are more likely to anchor ourselves to the way things turned out as the sole signal for whether we were right or wrong. We are more likely to declare, “I told you so!” or “I should have known!” When we start doing that, compassion goes out the window.

Redefining wrong allows us to let go of all the anguish that comes from getting a bad result. But it also means we must redefine “right.” If we aren’t wrong just because things didn’t work out, then we aren’t right just because things turned out well. Do we come out ahead emotionally by making that mindset trade-off?

Being right feels really good. “I was right,” “I knew it,” “I told you so”—those are all things that we say, and they all feel very good to us. Should we be willing to give up the good feeling of “right” to get rid of the anguish of “wrong”? Yes.


First, the world is a pretty random place. The influence of luck makes it impossible to predict exactly how things will turn out, and all the hidden information makes it even worse. If we don’t change our mindset, we’re going to have to deal with being wrong a lot. It’s built into the equation.

Poker teaches that lesson. A great poker player who has a good-size advantage over the other players at the table, making significantly better strategic decisions, will still be losing over 40 percent of the time at the end of eight hours of play. That’s a whole lot of wrong. And it’s not just confined to poker.
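To put a rough number behind that poker claim, here is a small Python simulation. The win rate, per-hand volatility, and hands-per-session figures are illustrative assumptions, not numbers from the book; they are simply in a plausible range for live play.

import math
import random

random.seed(7)

EDGE_PER_HAND = 0.1        # assumed average profit per hand, in big blinds
STDEV_PER_HAND = 10.0      # assumed per-hand swing, in big blinds
HANDS_PER_SESSION = 240    # assumed ~30 hands/hour over an eight-hour session
SESSIONS = 100_000

# Treat a session as the sum of many noisy hands: the mean grows with the edge,
# while the spread grows only with the square root of the number of hands.
session_mean = EDGE_PER_HAND * HANDS_PER_SESSION
session_std = STDEV_PER_HAND * math.sqrt(HANDS_PER_SESSION)

losing = sum(random.gauss(session_mean, session_std) < 0 for _ in range(SESSIONS))
print(f"Losing sessions: {losing / SESSIONS:.0%}")

With these assumptions the simulated losing-session rate lands in the low-to-mid 40s percent: every hand was played with a positive expectation, and the player still walks away a loser well over 40 percent of the time.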

The most successful investors in start-up companies have a majority of bad results. If you apply to NASA’s astronaut program or the NBC page program, both of which draw thousands of applicants for a handful of positions, things will go your way only a small fraction of the time, but that doesn’t necessarily mean you did anything wrong. Don’t fall in love or even date anybody if you want only positive results. The world is structured to give us lots of opportunities to feel bad about being wrong if we want to measure ourselves by outcomes. Don’t fall for it!

Second, being wrong hurts us more than being right feels good. We know from Daniel Kahneman and Amos Tversky’s work on loss aversion, part of Prospect Theory (which won Kahneman the Nobel Prize in Economics in 2002), that losses in general feel about two times as bad as wins feel good. So winning $100 at blackjack feels as good to us as losing $50 feels bad to us. Because being right feels like winning and being wrong feels like losing, that means we need two favorable results for every one unfavorable result just to break even emotionally. Why not live a smoother existence, without the swings, especially when the losses affect us more intensely than the wins?
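As a tiny back-of-the-envelope sketch of that arithmetic in Python, using the roughly two-to-one weighting described above (an approximation, not a precise law):

LOSS_WEIGHT = 2.0  # a loss is assumed to feel about twice as intense as an equal win

def emotional_balance(wins, losses, stake=1.0):
    # Net "feeling" from a run of equally sized wins and losses.
    return wins * stake - losses * stake * LOSS_WEIGHT

print(emotional_balance(wins=1, losses=1))  # -1.0: an even record feels like losing
print(emotional_balance(wins=2, losses=1))  #  0.0: two wins per loss just breaks even

Judged purely by how results feel, a record that is merely even comes out negative, which is exactly why measuring yourself by outcomes alone makes for a bumpy emotional ride.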

Getting comfortable with this realignment, and all the good things that follow, starts with recognizing that you’ve been betting all along.

Excerpted from Thinking in Bets by Annie Duke, Portfolio. Copyright 2018, Annie Duke. All rights reserved.