“No one has ever doubted that truth and politics are on rather bad terms with each other,” observed political theorist Hannah Arendt in 1967. The political events of the last few years haven’t exactly put truth and politics on better terms. From accusations of dishonesty during the Brexit campaign and the U.S. presidential election to ongoing concerns about “fake news” and “alternative facts,” it can be hard for citizens to separate fact from fiction.
Sometimes, though, we are confronted with clear evidence of false claims by or about political leaders. For example, President Trump’s spokespeople said his inauguration drew the biggest crowd in history, and a Time magazine correspondent tweeted that Trump had removed the bust of the Rev. Martin Luther King Jr. from the Oval Office. Photographic evidence clearly contradicted both claims.
How do we judge claims like these when we know they are false? My new research—which I’ve written about in Personality and Social Psychology Bulletin (PSPB) and The New York Times—suggests the answer depends in part on how easy it is to imagine that the falsehood could have been true. Reflecting on counterfactual thoughts—propositions about what would have occurred if circumstances had been different—can soften people’s moral judgments about falsehoods.
Consider again the falsehood that Trump’s inauguration was the largest ever. When questioned about it, Trump’s spokespeople implied that the crowd would have been bigger if the weather had been nicer, or if security hadn’t been so tight. These counterfactuals, of course, don’t make the falsehood true, but they invite people to imagine how it could have been true. Does this strategy reduce how much people condemn falsehoods?
Intuitively, it may depend on how well the falsehood aligns with people’s political views. If you support Trump, you might find it easy to imagine a big inauguration crowd on a sunny day. As a result, reflecting on this counterfactual might make the falsehood about the actual crowd size feel closer to the truth, and thus less unethical to tell (even though you still know it’s false).
If you support Hillary Clinton, by contrast, you might struggle to imagine that the crowd would have been larger in nicer weather. As a result, reflecting on this counterfactual might not make the falsehood about the actual crowd size feel any closer to the truth or any less unethical to tell.
By contrast, the counterfactual that Trump would have removed King’s bust from the Oval Office if he could have gotten away with it should seem more plausible to Clinton supporters than to Trump supporters. As a result, reflecting on this counterfactual might lead Clinton (but not Trump) supporters to think it’s less unethical to claim he actually moved the bust.
Based on examples like these, I hypothesized that reflecting on how a political falsehood could have been true makes it seem less unethical to tell—but only if it aligns with your political views. Three experiments, with 2,783 American participants including both Clinton and Trump supporters, found support for this idea.
In the experiments, participants read statements that they were explicitly told were false. Some falsehoods aligned with Trump supporters’ politics (like the one about the inauguration size) and others aligned with Clinton supporters’ politics (like the one about Trump removing the bust). As I describe in the Times piece,
All the participants were asked to rate how unethical it was to tell the falsehoods. But half the participants were first invited to imagine how the falsehood could have been true if circumstances had been different. For example, they were asked to consider whether the inauguration would have been bigger if the weather had been nicer, or whether Mr. Trump would have removed the bust if he could have gotten away with it.
The results […] show that reflecting on how a falsehood could have been true did cause people to rate it as less unethical to tell — but only when the falsehoods seemed to confirm their political views. Trump supporters and opponents both showed this effect.
Again, the problem wasn’t that people confused fact and fiction; virtually everyone recognized the claims as false. But when a falsehood resonated with people’s politics, asking them to imagine counterfactual situations in which it could have been true softened their moral judgments. A little imagination can apparently make a lie feel “truthy” enough to give the liar a bit of a pass.
Why did the results depend on the falsehoods’ alignment with people’s political views? Additional measures suggested that when a falsehood aligned with their views, people were more inclined to embrace the idea that it could have been true. We find alternative realities we like more plausible than those we don’t.
These findings suggest a troubling flexibility in the way we form moral judgments. As I explain in the PSPB article,
Even when people are motivated to excuse a falsehood, it can be difficult to convince themselves it is literally true, because facts can be checked. It is easier to convince themselves it could have been true, because counterfactuals cannot be falsified; history cannot be rerun to test what would have occurred in alternative circumstances. Thus, counterfactuals provide a degree of freedom people can exploit to make motivated moral judgments.
The results have important implications for understanding the dangers of fake news. With vivid descriptions, doctored photos, or bogus videos, fake news stories invite people to imagine how false claims could have been true. Encouraging counterfactual thinking in this way could reduce how much people condemn leaders and pundits who repeat these claims—even if people recognize and remember them as false.
Is there a way to avoid this counterfactual trap? Research has yet to consider this question, but I offer some speculation in the PSPB article:
Warning about persuasion attempts and presenting weak arguments can “inoculate” against persuasion by subsequent, stronger arguments […]. Perhaps, warning about attempts to use counterfactuals to excuse dishonesty, and presenting implausible counterfactuals (e.g., Trump’s inauguration would have been bigger if Clinton had revealed she had voted for him), could similarly inoculate against the subsequent influence of more potent counterfactuals. Another strategy could be to encourage reflection on how the falsehood would not have been true even if circumstances had been different […]. For example, the proposition Trump’s inauguration would have been bigger if the weather had been better might seem less compelling when one considers whether it would have been the same size even if it had been an hour shorter.
Humans have a powerful ability to imagine what might have been. It allows us to write novels, invent new technologies, and learn from mistakes. But it also has a dark side: “When leaders we support encourage us to consider how their lies could have been true, we may hold them to laxer ethical standards.”