What goes into making a no-win decision? The withdrawal of U.S. troops from Afghanistan last summer is one noteworthy example. Some welcomed an end to the twenty-year war. Others warned a withdrawal would pave the way for Taliban control. Some asserted it was time for the Afghan government to stand on its own. Others argued that now wasn’t the right time. There was no end of analysis and opinion about what to do, when to do it, and how to do it. But someone had to make a decision, and President Biden and his administration made it.
Similarly, over the past two years, policymakers at all levels of government were faced with difficult, high-stakes decisions related to COVID-19: locking down, closing schools, mandating vaccines. No matter the decision and no matter the outcome, you could guarantee that a large number of people weren’t happy with it. Policymakers were damned if they did and damned if they didn’t.
Watching this unfold, I became curious about the mechanics of decision-making when there is no immediate “good” outcome. To learn more, I reached out to Baruch Fischhoff, a professor of psychology at Carnegie Mellon University. An elected member of the National Academy of Sciences and the National Academy of Medicine, he has been a leading contributor to research on judgment and decision-making and on risk perception and analysis for the past five decades. He has also served on and led a number of government committees, including those on national security and intelligence.
In our conversation, we cover the psychological factors at play in no-win decisions, high-stakes decision-making in the context of national security, the lesson of “absorptive capacity,” and the “hindsight bias bias.”
Our conversation has been edited for length and clarity.
In international relations, particularly during conflict, there are situations where leaders have to make no-win decisions. There are seemingly no good options on the table. What are the factors that influence how these decisions are made?
People who are experienced are perhaps best at creating better options; that is part of the skill of experienced decision makers. The tradeoffs that you’re going to make are really political, ethical decisions. Somebody who’s wise will attempt to anticipate the repercussions.
One of the things that has been a constant topic in working with people in the intelligence community and others is that experts are often very reluctant to express uncertainty or bad news. They end up hiding it, and it ends up coming back to bite them. Communicating the limits of your decision and your rationale for making it is really important. Experts do it very poorly, and they never test their messages.
They may have a story that they could tell, but they never test whether they can tell it. So, they’re sort of damned if they do and damned if they don’t. Because if they don’t tell it, it’ll blow up on them. If they tell it badly, it’ll blow up on them.
I’ve been waging a campaign for the last year and a half to get the United States government to test any message having to do with the pandemic. There are hundreds of millions of dollars spent on trying to coerce, cajole, and manipulate people into getting vaccinated, and nothing spent on making the critical information available to them and communicating it along with the uncertainties.
Communication is intertwined with decision-making. Leaders, appropriately, need to feel that they can explain what they’re doing in a way that will work for them and show that they are working for their people. And if they’re not confident that they can explain it, or they doubt the maturity of their audience, then they will not do things that they think are hard to explain.
I think that an important, even tragic, misreading of judgment and decision-making research is that people are irrational, incompetent, innumerate, hysterical, and so on, and couldn’t make a rational decision if we gave them the facts, which is, I believe, not what the research shows. I believe that the research shows that if you do a good job on communication, most people can understand most things. And the communication failures lead people not to doubt themselves, but to doubt their audience and to disrespect the public.
You’ve done a lot of work on the hindsight bias—thinking an event was more predictable than it actually was. How does it manifest in high-stakes situations, and what obstacles does it pose to good decision-making?
It’s hard to learn unless you have a good assessment of what you’ve learned. If you think something was predictable when it wasn’t, then you’re not learning, and you’ve lost the ability to learn.
There’s also a kind of hindsight bias bias. There are decision makers who deflect attacks by their critics by claiming that the critics are guilty of hindsight bias. They say, “There’s no way I could have known.” And that may be the case, given the information that they had. But for senior decision makers, it’s often their responsibility to know. Think about the defense after 9/11: “Oh, we couldn’t have known what was going to happen.” But it was your responsibility to know. Or with the pandemic, there are people who say, “We couldn’t have known where it was going to go.” But we could have known. The evidence was there; it was just ignored. It is your responsibility as a leader to know.
So, you have both hindsight bias that leads to unduly harsh criticism of people who’ve done the best they could, and there’s this hindsight bias bias of people who claim that they couldn’t have known when they should have.
How is decision-making in high-stakes situations different from decision-making in more everyday situations?
High-stakes situations, by definition, are novel, so we don’t know what we’re doing, and you’re often interacting with an unfamiliar set of people. In emergency and disaster planning, they do a lot of exercises. You bring a diverse group of people together, and you discover perspectives that you didn’t see otherwise. You do wargaming, whatever, but you also get to know the people, so that you know who to call. Those are psychologically sound things that you can do in preparation. The actual high-stakes situation will not look like anything that you have gamed, but you will have some idea of the limits of your knowledge and where to get more.
How does psychology play a role in high-stakes decision-making in the national security context?
There seem to be two approaches in national security. One is sort of tactical: how do people predict specific events when they want to know how likely they are? There is a long history in intelligence, buttressed by psychological research, showing that vagueness in analysis undermines it. That is, if you’re not precise about what you’re predicting, and the probability of it happening, then you’re not doing your job. And yet there are institutional norms within the intelligence community to speak vaguely.
If you look at National Intelligence Estimates, which are the public summary of our 17 intelligence agencies’ views on something, they have a long epistemological prologue, which is all decision science. So, here’s the quality of the evidence, here’s how confident we are in the evidence, here’s how confident we are in the prediction. But actually attaching probabilities to predictions, that’s been a struggle for them.
If you can get people to give probabilities, there is research going back half a century showing that people tend to think that they can make finer distinctions in their levels of knowledge than they actually can. The result is that people tend to be overconfident when they’re making hard predictions and underconfident when they’re making easy predictions. So, it seems that sometimes they don’t know how little they know, and sometimes they don’t know how much they know. But if you get them to give probabilities, you’re already better off, because then their customers, the decision makers, know what they’re talking about.
The Good Judgment Project, led by Phil Tetlock and Barbara Mellers, found that if you sort of throw the book at people, do everything that psychology knows how to do, then people can produce better-calibrated tactical predictions. So that’s one place where I think psychology has made a difference.
The second place where psychology has made an incremental difference, but one that would be much harder to document, is in structuring the everyday work of intelligence analysis. Going back to the 1970s, there have been interactions with people who do judgment and decision-making research, started initially by Daniel Kahneman, and then I sort of took over for a period of time. These are people doing the nitty-gritty work, and the question is, how can you structure the work so that we make these judgment calls better, organize the evidence better, and scrutinize it better?
You’ve worked on a number of government committees, including with the Department of Homeland Security, the Environmental Protection Agency, and the Office of the Director of National Intelligence. What is something that you learned from this experience?
One thing that I’ve learned is the importance of something economists call “absorptive capacity.” Economists who study R&D have found that there is a positive return on investment in R&D even for firms that never invent anything: they never have a patent, they’re never the first mover. Their explanation is that if you have people with that kind of expertise on the inside, then you’re able to see when you have problems, follow the research, and look for help.
The Food and Drug Administration has made a lot of progress in absorbing decision research and psychology because it has always had psychologists on staff. And the intelligence community, I think, has made progress; through our collective effort, they have brought in behavioral people regularly since 9/11. I would say, for psychology, for our community, it’s really important that people trained with Ph.D.s go to work for federal agencies, whether in intelligence or not.