Mad at Nate Silver About Election 2016? Why Behavioral Economics Suggests You Shouldn’t Be

This article was originally published on The Misbehaving Blog before it became part of the Behavioral Scientist in 2017.

For many, the aftermath of Donald Trump’s election has raised questions about the future of the United States. It has also led some to cast aspersions on analysts like Nate Silver, who used statistical models and polling data to make probabilistic predictions about the election’s outcome. Because these models consistently favored Hillary Clinton, and often heavily so (Nate Silver predicted that Clinton had a 71.4% chance of winning, and other estimates were as high as 99%), some have accused these poll analysts of ignorance or bias. Research suggests, however, that it may be the critics, and not the statheads, who are suffering from a bias — more specifically, “outcome bias,” whereby we have a tendency to overweight the outcome of a decision (or model) when assessing its ex-ante quality.

In a 2014 paper, "Sticking with What (Barely) Worked," researchers at Brigham Young University examined how outcome bias affects decision making. Specifically, they explored the way coaches in the National Basketball Association (NBA) revise their starting lineups following a game. But before we dive into the research, let's take a step back and look at a different sport: football. Perhaps a familiar example from the US's biggest sporting event — the Super Bowl — can best illustrate outcome bias.

Bad play calling or bad luck?

It has been called the "worst play call in NFL history." A quick recap: It's February 1, 2015. The New England Patriots lead the Seattle Seahawks by 4 points, with 20 seconds on the clock. But the Seahawks are driving, and are a mere 1 yard — just 36 inches — from scoring a game-winning touchdown. Everyone thinks the ball will be put in the hands of the team's star running back, Marshawn Lynch. But rather than call a run play, the Seahawks decide to pass, and the ball is intercepted by Patriots defender Malcolm Butler. Game over.

“It should’ve been a run!” It seemed so obvious. Everyone knew it should’ve been a run. The Patriots players knew it. The coaches knew it. The 100+ million viewers around the world watching on their flat screens knew it. But did they, really? In hindsight, it’s easy to say that the Seahawks should’ve run the ball, because their decision to pass led to an interception. Surely, an unfavorable result (an interception and a loss) must have been caused by a poor decision. It just so happens that this chain of logic is wrong — and a result of outcome bias.

How is outcome bias playing a role here? The key intuition is that the end result doesn't change the ex-ante probability that a good (or bad) outcome will occur. For all we know, an unfavorable outcome could be the result of bad luck rather than bad decision making, poor strategy, or a bad model. So in the case of the Seahawks, a run might have worked, say, 90% of the time…but a pass might have worked 95% of the time! If those were the odds, passing was the right call, even though it happened to fail.

What does the data say? ESPN's numbers show that over the past 5 years, on plays starting one yard from the end zone, teams chose to run 71% of the time and scored on a little more than half of those plays (54%). A similar analysis reveals that passes from the one-yard line led to a touchdown 48% of the time. It'd appear, then, that running the ball is the smarter play, right? Not quite. The data on Marshawn Lynch reveals that over the past 5 seasons, Lynch converted only 45% of his runs from the one-yard line…and only 20% that season! The takeaway: just because the unlikely bad outcome of the pass happened doesn't mean the initial choice to pass was wrong.
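
To see just how weak the link between outcome and decision quality can be, here is a minimal Monte Carlo sketch in Python. It replays the final snap many times, treating the conversion rates above as the true ex-ante probabilities (an assumption made purely for illustration). With those numbers, even the better call fails in more than half of the replays, and an outcome-biased observer would condemn it every single time.

```python
import random

random.seed(2015)

# Conversion rates cited above, treated here as the true ex-ante
# probabilities (an assumption made purely for illustration):
P_RUN = 0.20   # Lynch's conversion rate from the one-yard line that season
P_PASS = 0.48  # league-wide touchdown rate on passes from the one-yard line

N = 100_000  # replay the final snap many times

run_scores = sum(random.random() < P_RUN for _ in range(N))
pass_scores = sum(random.random() < P_PASS for _ in range(N))

print(f"Run scores in  {run_scores / N:.1%} of replays")
print(f"Pass scores in {pass_scores / N:.1%} of replays")
print(f"An outcome-biased fan blames the (better) pass call in the "
      f"{1 - pass_scores / N:.1%} of replays where it fails")
```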

NBA coaches are people, too: What the data says about outcome bias in the NBA

At this point, we've examined how outcome bias manifested in Super Bowl XLIX, but that's just one game, in one sport. We need to think bigger. Fortunately, the data allows it. The BYU researchers used data from over 23,000 NBA games and observed a number of behaviors that appear to be the result of outcome bias. For example, they find that coaches are more likely to change their starting lineup after a loss than after a win, even following close games. The idea here is that a team that wins by a point or two is barely distinguishable from one that loses by the same margin, yet coaches on either side of that razor-thin result react in very different ways!

The authors also find that expectations play a key role. For example, if a given team is facing a stronger opponent and is expected to lose by ten points, a rational coach should not revise their strategy if the team does indeed lose by ten; the result contains no new information. Real coaches, however, don't think like this. When revising their lineups, they react to both expected and unexpected performance, leading to sub-optimal decision making and, in particular, an overreaction to losses.
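
The contrast is easy to state in code. Below is a stylized sketch (our own illustration, not the model estimated in the paper): a "rational" coach revises the lineup only when the margin deviates meaningfully from the point-spread expectation, while a "biased" coach revises after any loss, even an expected one. The threshold, the spread, and the noise level are all invented for the example.

```python
import random

random.seed(7)

def rational_update(margin, expected_margin, threshold=5):
    """Revise the lineup only when the result deviates from expectations."""
    surprise = margin - expected_margin
    return abs(surprise) > threshold

def biased_update(margin, expected_margin):
    """Revise the lineup after any loss, regardless of expectations."""
    return margin < 0

# A team expected to lose by ten points plays five noisy realizations
# of that matchup.
expected = -10
for _ in range(5):
    actual = expected + random.gauss(0, 8)
    print(f"margin {actual:+6.1f} | rational coach revises: "
          f"{rational_update(actual, expected)} | biased coach revises: "
          f"{biased_update(actual, expected)}")
```

Run it a few times: the biased coach shuffles the lineup after nearly every game, because a team expected to lose by ten almost always loses, while the rational coach reacts only to genuine surprises.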

From sports to politics: One American tradition to the next

It seems reasonable to think that outcome bias may also be driving the ex-post criticism of poll analysts in the 2016 election. The fact that Donald Trump won the election (the outcome) doesn't actually prove that the initial predictions made by analysts such as Nate Silver were wrong. Nate Silver didn't say Hillary Clinton was guaranteed a win. In Silver's model, Trump winning the presidency was a very real possibility: a roughly 30% chance, nothing to sneeze at. Indeed, Silver was actually among the most cautious of the pre-election analysts in his forecast, in part because he correctly allowed for the possibility that polling errors across states would be correlated.
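
That point about correlated errors is worth unpacking, and a toy simulation makes it concrete (the numbers below are invented for illustration and have nothing to do with FiveThirtyEight's actual model). If every swing state's polling error is independent, the underdog needs to get lucky many times over; if the errors share a common national component, one systematic polling miss can flip many states at once, which fattens the tail of the upset probability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup, with invented numbers: ten swing states where the favorite
# leads by 3 points, and polling errors with a 4-point standard
# deviation in each state.
n_states, lead, sigma, sims = 10, 3.0, 4.0, 100_000

# Case 1: each state's polling error is drawn independently.
indep_errors = rng.normal(0, sigma, size=(sims, n_states))

# Case 2: same per-state variance (0.8^2 + 0.6^2 = 1), but most of it
# comes from a shared national error that hits every state at once.
shared = rng.normal(0, 0.8 * sigma, size=(sims, 1))
local = rng.normal(0, 0.6 * sigma, size=(sims, n_states))
corr_errors = shared + local

for name, errors in [("independent", indep_errors), ("correlated", corr_errors)]:
    # The underdog pulls the upset when the error overcomes the lead in
    # a majority of the swing states.
    flipped = (errors > lead).sum(axis=1)
    print(f"{name:>11} errors -> upset probability: {(flipped > n_states // 2).mean():.1%}")
```

With identical per-state uncertainty, the correlated version produces an upset roughly ten times as often, which is exactly why a forecaster who assumes independence ends up overconfident.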

Outcome bias is a powerful behavioral force. So the next time you find yourself questioning a coach’s decision or a pundit’s prediction, ask yourself — is the target of my wrath really deserving of it? Or should I be looking in the mirror to see who is making the real mistake?