The Cognitive Science of Political Thought: Practical Takeaways for Political Discourse

We recently put together a special issue entitled “The Cognitive Science of Political Thought” for the journal Cognition: The International Journal of Cognitive Science. The contributing authors, mostly psychologists, collectively examined the strange new political world in which citizens of the United States and numerous other countries have suddenly found themselves.

This special issue was more than an academic exercise. The authors’ contributions have some practical implications for politicians, policymakers, and other stakeholders, and we asked our authors to help us identify them. The suggestions we received were mostly about how to craft an effective political message but, in our view, go far beyond that. We share the practical takeaways here, summarizing psychological insights on how to improve political discourse to reduce bias and polarization.

#1: Don’t assume you know the electorate

Countless postmortems have offered accounts of the two big political surprises of 2016: the United Kingdom’s referendum on whether to leave the European Union, and the U.S. presidential election. Three papers in our special issue identified some of the incorrect assumptions made by the pundits.

In their paper, Kate Barasz, Tami Kim, and Ioannis Evangelidis demonstrate that although the following inference is often false, it remains prevalent: when a political option has an extreme feature, people assume the option’s supporters are driven by the extreme feature. For instance, if a voter chooses a candidate with a particularly extreme policy stance, observers infer that the extreme policy stance was the primary motivation behind the choice of candidate. Believing that 63 million Americans voted for Donald Trump because of—rather than in spite of—his extreme immigration policy leads to an entirely different set of conclusions about his voters, conclusions that turn out to have little basis in fact.


This assumption about voters can exacerbate the perception of political polarization. Policymakers and pundits should take care to avoid being lured into a false sense of knowledge about others’ choices and preferences. It is worth noting that polarization is, to some extent, a matter of perception. If you think others are making their choices based on extreme positions, and if you are wrong, you may also overestimate how much people disagree on important positions.

Susan Fiske has previously noted that we tend to stereotype others on two dimensions—warmth and competence. Here, she applies that insight to a sociopolitical context in order to explain polarization of a different sort. She documents that economic elites tend to be perceived as competent but cold, and that lower-income groups are perceived as incompetent but warm. This two-dimensional bias generates political resentment in both directions and turmoil within the social system: elites see lower-income groups as cheats, and lower-income groups see elites as arrogant. Whether your interest is in calming or exacerbating political divisions, acknowledging stereotyping along these dimensions is a first step to taking effective action.

Most of the papers in the special issue align with the major lesson about human behavior from 2016: the human species is tribal! Most people's political opinions are formed by whom they trust, not by the evidence and arguments they encounter, an implication of the important and time-tested elaboration likelihood model. That model suggests that when people lack the knowledge to evaluate the substance of a message, they rely more on cues such as who the messenger is, an insight that Matthew Goldberg and Cheryl Carmichael recently applied to the evaluation of policies. This is also the implication of Steven Sloman and Philip Fernbach's observation that people outsource most of their political judgments.

As a result, effective political messaging will convince people whom they should trust, not (just) try to educate them about policies. And when doing so, it is essential both to respect one's audience and to show that respect, because people will only respect someone who respects them.

If voters respond to in-group messengers, then group norms should also be a powerful influence. But, as Stephanie Chen and Oleg Urminsky investigate, not everybody follows norms. Who is more likely to? Those for whom group identity is central to how they see themselves. This suggests that communicating or changing norms is only part of shaping behavior. Information about norms will have an even greater impact when combined with efforts to change beliefs about how central the relevant group identity is.

One way to strengthen a group identity is to help people understand the causal connections between that group and other important aspects of how they define themselves. Peaceniks should be shown how belonging to the group helps foster world peace; football fans should come to associate their local club with achieving football glory.

#2: What to do about extremism

A hallmark and worrisome feature of recent politics is the rise of extremism. Arie Kruglanski, Jessica Fernandez, Adam Factor, and Ewa Szumowska provide compelling evidence that the cognitive dynamics of violent extremism involve a drive for personal significance, a focal need that suppresses other needs. The resulting motivational imbalance weakens existing constraints on violent behavior.

Their model suggests that to counteract violent extremism and other forms of violence, policy efforts should aim to reduce the power of the drive to achieve personal significance through radical action, relative to the drive to pursue other needs (deradicalization). This can be (and has been) accomplished by fulfilling personal significance needs in other ways—for instance, by connecting people with family and community. In order to encourage choosing nonviolent over violent means of expression, policies should also increase the availability and instrumentality of nonviolent means to significance (disengagement). For instance, such policies might encourage people to engage in democratic grassroots movements that advance social or environmental causes.


Extreme events are also, by definition, rare. Leaf Van Boven and colleagues examine rare events such as terrorism in the United States. Evaluating policies to respond to such events requires clear thinking about conditional probabilities. The authors illustrate this by investigating a common stereotype. Even if it were true that most terrorist immigrants were Muslim (a high “hit rate”), the inverse conditional probability of Muslim immigrants being terrorists is extremely low. Yet this inverse conditional probability is vastly more relevant to evaluating how much restricting Muslim immigration would reduce the risk of terrorism.

The problem arises with motivated cognition, another important and time-tested insight of social cognition: people judge hit rates as more important when those rates support the restrictive policies their politics prescribe. Opponents of such policies, in contrast, favor the genuinely more informative and vastly smaller inverse conditional probabilities. In other words, providing accurate information does not promote clear, nonpartisan thinking, because people use motivated cognition to deploy the fallacious reasoning that supports their beliefs. One way to minimize this bias is to have partisans adopt, or role-play, an unbiased expert's perspective, which reduces partisan differences in evaluating conditional probabilities.
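The asymmetry between a hit rate and its inverse conditional probability follows directly from Bayes' rule. The sketch below uses purely hypothetical numbers (a made-up base rate, group share, and hit rate chosen only for illustration, not the authors' data) to show how an 80 percent hit rate can coexist with an inverse probability of a few in a million:

```python
# Illustrative only: every number below is hypothetical, chosen to show how a
# high "hit rate" P(group | attacker) can coexist with a tiny inverse
# probability P(attacker | group member).

def inverse_probability(p_b_given_a, p_a, p_b):
    """Bayes' rule: P(A | B) = P(B | A) * P(A) / P(B)."""
    return p_b_given_a * p_a / p_b

# Hypothetical inputs:
p_attacker = 1e-7             # base rate of attackers in the population
p_group = 0.01                # share of the population in the stereotyped group
p_group_given_attacker = 0.8  # the high "hit rate" people fixate on

p_attacker_given_group = inverse_probability(
    p_group_given_attacker, p_attacker, p_group
)
print(f"P(attacker | group member) = {p_attacker_given_group:.2e}")
# With these made-up inputs, the inverse probability is about 8 in a million,
# despite the 80% hit rate.
```

The hit rate answers "given an attacker, how likely are they to belong to the group?", while policy evaluation needs the reverse question, and the rarer the event, the further apart the two numbers fall.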

#3: How to get people to listen to each other

Modern-day tribalism has made conversation across partisan lines difficult. Poor affective forecasting is at least partially to blame. Charles Dorison, Julia Minson, and Todd Rogers provide evidence that individuals systematically overestimate the magnitude of negative feelings that result from exposure to aversive information, i.e., to information coming from the other side. Who doesn’t think, for example, that listening to a speech by the politician one detests would put one in a bad mood?

It turns out that such exposure has much less of an effect than people expect. If we collectively knew this and were thus more willing to listen to the other side, we would be more aware of the values we share, we would have a larger pool of shared information, and we might harbor less resentment.

It turns out that it is not hard to inform people about their poor affective forecasting. Short debiasing messages informing individuals of their error change their willingness to consume aversive political content.

One contributing factor to the difficulty of discourse across the political divide is that we live in information bubbles, made worse by the explosion of misinformation that technology, and social media in particular, has brought in recent years. Gordon Pennycook and David Rand argue that the ability to distinguish real from fake news is governed by one's ability to reason. People fall for fake news when they fail to engage in sufficient critical thinking. Hence, to help people recognize and reject misinformation, we should teach them (or nudge them) to slow down and think critically about what they see on social media.


Encouraging people to slow down and think cannot be bad, especially in this age of hypersummarization and speed reading. But we don't believe the response to the misinformation deluge should rest entirely on attempts to improve critical thinking skills. Steven Sloman (with Nathaniel Rabb) argues that we should be aware of, and make use of, norms when considering how people process information and assimilate evidence. Thus, we should establish and foster a norm of examination, encouraging a culture that treats doubt and questioning as acceptable, even desirable. Unfortunately, metanorms about information processing in our current political and cultural environment are moving in the opposite direction.

Stephan Lewandowsky and colleagues focus on social influences on belief. They use an agent-based model of scientific consensus formation around climate change to show that a small group of evidence-resistant agents can prevent the public from adopting the scientific consensus position, especially when contrarians are given disproportionate media attention. This occurs even when scientists inspect the data directly and without bias.
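The dynamic can be illustrated with a toy agent-based model. The sketch below is our own minimal illustration, not the authors' actual model; all of its ingredients (population sizes, update rates, and the media weights) are made-up assumptions chosen only to show how false balance in coverage can hold public belief away from the consensus:

```python
import random

# A minimal, illustrative agent-based sketch (not Lewandowsky et al.'s model):
# scientists update toward the evidence, a contrarian position never moves,
# and the public learns from a media signal that may weight the contrarians
# far beyond their actual share.

def simulate(media_weight_on_contrarians, steps=200, seed=0):
    rng = random.Random(seed)
    scientists = [rng.random() for _ in range(50)]
    public = [rng.random() for _ in range(500)]
    contrarian_belief = 0.0   # evidence-resistant agents never update
    evidence = 1.0            # the position supported by the data

    for _ in range(steps):
        # Scientists inspect the data directly and drift toward the evidence.
        scientists = [b + 0.1 * (evidence - b) for b in scientists]
        consensus = sum(scientists) / len(scientists)
        # The public hears a media mix of consensus and contrarian views.
        w = media_weight_on_contrarians
        media_signal = (1 - w) * consensus + w * contrarian_belief
        public = [b + 0.1 * (media_signal - b) for b in public]

    return sum(public) / len(public)

# Hypothetical coverage levels: proportionate vs. "false balance".
balanced = simulate(media_weight_on_contrarians=0.02)
false_balance = simulate(media_weight_on_contrarians=0.5)
print(f"public belief, proportionate coverage: {balanced:.2f}")
print(f"public belief, false balance:          {false_balance:.2f}")
```

In this toy version the scientists converge on the evidence regardless, but the public's final belief tracks the media mix: give contrarians half the airtime and public belief settles halfway between the consensus and the contrarian position.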

The findings suggest that, to prevent the growth of a cadre of evidence-resistant agents, it is valuable to provide a solid education to junior scientists who are about to enter a controversial arena. They also suggest that policy measures ought to prevent the false balance often observed in media coverage of scientific issues: broadcasters should act on the basis of the balance of evidence, rather than the balance of opinion, when reporting scientific issues.

#4: How to understand leaders and hold them accountable

National leaders are given places of prominence in our political attention. Indeed, some contributions to our special issue suggest that we give them too much prominence, while failing to understand what drives them.

In yet another demonstration of motivated cognition, Sharon Arieli, Adi Amit, and Sari Mentser find that constituents are likely to attribute the actions of ingroup leaders to an intention to benefit the country (national interests), but the actions of outgroup leaders to an intention to benefit the leaders themselves (egoistic interests). This difference in attribution holds even when the actions are identical. Attributing a leader's actions to national rather than egoistic interests is associated, in turn, with trusting and supporting the leader. The more individuals attribute the actions to a broader beneficiary, the higher the trust and support.

To get around these biases, perceivers should be encouraged to imagine how they would interpret the same action if performed by others. Perceivers may benefit from reconsidering political events while mentally swapping the identities of the leaders involved, outgroup for ingroup and vice versa. Another suggestion is that messaging to the other side should focus on the leader's group identity: the perceived importance of a specific leader's egoistic motives can be reduced by reminders of the goals of the party or institution the leader represents.


Sloman and Rabb discuss an intrinsic limitation of leaders: they are human, and thus relatively ignorant. But they can learn, think critically, and surround themselves with a thoughtful and qualified team. Leaders must be held accountable for understanding the consequences of their actions and of the policies they propose. It is not enough to loudly proclaim one's values or goals; leaders owe it to their constituents to get into the weeds of policy. Citizens and the press need to ask questions that force politicians to justify their claims.

Political and cognitive scientists have largely reached consensus that citizens' social and political preferences are often not well informed and are malleable. One of the current authors, Elke Weber, provides evidence of how public opinion on policies that improve public welfare (from carbon taxes to smoking bans in public places) can shift from strong initial opposition to moderately positive acceptance within months of their implementation.

This leads to some simple advice for politicians and policy makers: don’t govern by plebiscites or public opinion polls. The reason we elect public officials is to have people in place who have the time and will to consult experts, learn, deliberate, and think critically about issues. Public referenda are a bad idea.

This, of course, is not an argument against democracy. Leaders may be as cognitively limited as any individual, but nothing prevents them from assembling competent teams of experts. Electing our leaders remains the only fair and responsible way to choose those whose decisions will determine our futures.