How Misinformation Can Spread Among Scientists

In 1846 Ignaz Semmelweis, a Hungarian physician, took a post in the first obstetrical clinic of the Vienna General Hospital. He soon noticed a troubling pattern. The hospital’s two clinics provided free care for poor women if they were willing to be treated by students—doctors in the first clinic, where Semmelweis was stationed, and midwives in the second. But things were not going well in the first clinic.

Puerperal fever, or “childbed fever,” was rampant, killing 10 percent of patients on average. Meanwhile, in the second clinic, the presumably less knowledgeable midwives were losing only 3–4 percent of their patients. Even more surprising, the death rate for women who had so-called street births on the way to the hospital was much lower than for women who received the dubious help of the doctors-in-training. During Semmelweis’s first year, the first clinic’s reputation was so bad that his patients literally begged on their knees to be transferred to the second.

Dismayed by their record, and horrified by the terrible deaths his patients were enduring, Semmelweis set out to find the cause of the clinic’s high fever rates. In March 1847, he had a breakthrough. A colleague died of symptoms very similar to childbed fever after receiving a small accidental cut during an autopsy. Semmelweis connected this incident with the fact that obstetricians in the first clinic regularly attended patients immediately after conducting autopsies on diseased corpses. Childbed fever, he concluded, was a result of “cadaverous particles” transferred via the student doctors’ hands. After he started requiring regular hand-washing with a chlorinated solution, the clinic’s death rate plummeted.

Despite good evidence, Ignaz Semmelweis was unable to convince “gentlemen” doctors their hands could carry disease. Source: Wikipedia

Toward the end of 1847, Semmelweis and his students published their findings in several prominent medical journals. He believed his innovation would revolutionize medical practice and save the lives of countless women. But instead, his fellow physicians—principally upper-class gentlemen—were offended by the implication that their hands were unclean, and they questioned the scientific basis of his “cadaverous particles,” which did not accord with their theories of disease. As a group, they rejected Semmelweis’s new, strange theory.  Shortly thereafter Semmelweis was replaced at the Vienna General Hospital. In his new position, at a small hospital in Budapest, his methods brought the death rate from childbed fever down to less than one percent.

Over the remaining 18 years of his life, Semmelweis’s revolutionary techniques languished. He grew increasingly frustrated with the medical establishment and eventually suffered a nervous breakdown. He was beaten by guards at a Viennese mental hospital and died of blood poisoning two weeks later, at the age of forty-seven. Semmelweis was right about the connection between autopsies and puerperal fever, and the decisions he made on this basis had meaningful consequences. He saved the lives of thousands of infants and women. But his ideas could have saved many more lives if he had been able to convince others of what he knew. In this case, although he communicated his beliefs to other scientists and provided as much evidence as they could possibly desire, his ideas were still rejected, at great cost. The unwarranted belief persisted that gentlemen could not communicate disease via contact.

The puzzling thing about the Semmelweis case is that the evidence was very strong. The message from the world was loud and clear: hand-washing dramatically reduces death by puerperal fever. What went wrong?

A model for seeking knowledge

In 1998 economists Venkatesh Bala and Sanjeev Goyal introduced a mathematical model to explain how individuals learn about their world, both by observing it and by listening to their neighbors. About a decade after Bala and Goyal introduced their model, the philosopher of science Kevin Zollman, now at Carnegie Mellon University, used it to represent scientists and their networks of interaction. We use the model, and variations based on it, much as Zollman did.

The basic setup of Bala and Goyal’s model is that there is a group of simple agents—highly idealized representations of scientists, or knowledge seekers—who are trying to choose between two actions and who use information gathered by themselves and by others to make this choice. The two actions are assumed to differ in how likely they are to yield a desired outcome. For a very simple example, imagine someone faced with two slot machines, trying to figure out which one pays out more often.

Over a series of rounds, each scientist in the model chooses one action or the other. They make their choices on the basis of what they currently believe about the problem, and they record the results of their actions. To begin with, the scientists are not sure about which action is more likely to yield the desired outcome. But as they make their choices, they gradually see what sorts of outcomes each action yields. These outcomes are the evidence they use to update their beliefs. Importantly, each scientist develops beliefs based not only on the outcomes of their own actions, but also on those of their colleagues and friends.
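
To make this setup concrete, here is a minimal sketch, in Python, of the kind of model we have been describing. It is only an illustration, not Bala and Goyal’s or Zollman’s actual formulation: the payoff probabilities, the simple success-rate estimates standing in for beliefs, and the assumption that every scientist sees every other scientist’s results are simplifications of our own.

```python
import random

# Two actions: "A" pays off with probability 0.5, "B" with probability 0.55.
# These numbers are illustrative stand-ins for "worse" and "better" treatments.
P_SUCCESS = {"A": 0.5, "B": 0.55}


class Agent:
    """A highly idealized scientist choosing between two uncertain actions."""

    def __init__(self):
        # Start from a weak, randomized hunch about each action.
        self.successes = {a: random.uniform(0, 1) for a in P_SUCCESS}
        self.trials = {a: 1.0 for a in P_SUCCESS}

    def expected_payoff(self, action):
        return self.successes[action] / self.trials[action]

    def choose(self):
        # Take whichever action currently looks better.
        return max(P_SUCCESS, key=self.expected_payoff)

    def observe(self, action, outcome):
        # Update beliefs with an observed result, one's own or a colleague's.
        self.successes[action] += outcome
        self.trials[action] += 1


def run_round(agents):
    # Everyone acts, then everyone updates on everyone's evidence
    # (a fully connected network of colleagues).
    results = []
    for agent in agents:
        action = agent.choose()
        outcome = 1 if random.random() < P_SUCCESS[action] else 0
        results.append((action, outcome))
    for agent in agents:
        for action, outcome in results:
            agent.observe(action, outcome)


agents = [Agent() for _ in range(10)]
for _ in range(200):
    run_round(agents)
print([agent.choose() for agent in agents])  # usually everyone settles on "B"
```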

In the model developed by Bala and Goyal, social ties can have a remarkable influence on how communities of scientists come to believe things. But the original model assumed that what each scientist cares about is truth. In some cases, though, like the Semmelweis case, people are prepared to ignore good evidence in order to fit in or conform with those around them. As we saw, the gentlemen physicians of Vienna were unwilling to step outside the norm and adopt Semmelweis’s practice.

How might a feature like a bias towards conformity influence communities of physicians or scientists?

What happens when scientists conform?

The scientists in the basic Bala-Goyal model update their beliefs in light of the results of their own actions and those of others, just as before. But now suppose that when the scientists in our models choose how to act, they do so in part on the basis of what those around them do. We might suppose that they derive some payoff from agreeing with others and that this influences their decisions about which action to take, but that they also update their beliefs about the world on the basis of what they and their neighbors observe. We can imagine different scenarios—in some cases, or for some scientists, conformity might be very important, so that it heavily influences their choices. In other cases, they care more about the benefits of the better action, or prescribing a better drug, and so pay less attention to what their colleagues are doing.
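
One simple way to build this into the sketch above is to score each action by its expected payoff from the evidence plus a bonus for matching what one’s neighbors did in the previous round, weighted by a conformity parameter. This particular scoring rule and the class below are our illustrative choices, not the precise formulation used in these models.

```python
class ConformistAgent(Agent):
    """Extends Agent from the sketch above with a taste for agreeing with neighbors."""

    def __init__(self, conformity):
        super().__init__()
        self.conformity = conformity  # weight placed on matching others
        self.last_choices = []        # actions neighbors took in the previous round

    def choose(self):
        def score(action):
            evidence_term = self.expected_payoff(action)
            if not self.last_choices:
                return evidence_term
            # Fraction of neighbors who took this action last round.
            social_term = self.last_choices.count(action) / len(self.last_choices)
            return evidence_term + self.conformity * social_term

        return max(P_SUCCESS, key=score)


def run_conformist_round(agents):
    # As before, but agents also remember what their colleagues just did.
    results = []
    for agent in agents:
        action = agent.choose()
        outcome = 1 if random.random() < P_SUCCESS[action] else 0
        results.append((action, outcome))
    for agent in agents:
        for action, outcome in results:
            agent.observe(action, outcome)
        agent.last_choices = [action for action, _ in results]
```

Setting the conformity weight to zero recovers the original behavior; making it much larger than the gap between the two payoff probabilities gives the extreme case discussed next, in which matching one’s colleagues is nearly all that matters.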

In the extreme case, we can consider what happens when the only thing that scientists care about is conforming their actions to those of others—or at least, when the payoff from conforming is much larger than that from performing the better action. Under these conditions, the models predict that groups of scientists are just as likely to end up at a bad consensus as a good one. A group investigating puerperal fever is just as likely to settle on hand-washing as not. After all, if they only care about matching each other, the feedback they get from the world makes no difference at all. In this extreme case, social connections have a severe dampening effect on scientists’ ability to reach true beliefs.

Worse, once they find an action they all agree on, they will keep performing that action regardless of any new evidence. They will do this even if all the scientists come to believe something else is actually better, because no one is willing to buck the consensus. Those without peers to worry about, on the other hand, are unhampered by a desire to conform and are willing to try out a new, promising theory.

Of course, the assumption that scientists care only about conforming is too strong. People care about conformity but also about truth. What about models in which we combine the two elements? Even for these partially truth-seeking scientists, conformity makes groups of scientists worse at figuring out what is true.

First, the greater scientists’ desire to conform, the more cases there are in which some of them hold correct beliefs but do not act on them. In other words, it is entirely possible that some of Semmelweis’s peers believed that hand-washing worked but decided not to adopt it for fear of censure. In a network in which scientists share knowledge, this is especially bad. Each doctor who decided not to try hand-washing himself deprived all of his friends and colleagues of evidence about its efficacy. Conformity nips the spread of good new ideas in the bud.

Of course, conformity can also nip the spread of bad ideas in the bud, but we find that, on average, the greater their tendencies to conform, the more often a group of scientists will take the worse action. When they care only about performing the best action, they converge to the truth most of the time. In other words, they are pretty good at figuring out that, yes, hand-washing is better. But the more they conform, the closer we get to the case in which scientists end up at either theory completely randomly, because they do not care about the payoff differences between them. Pressures from their social realm swamp any pressures from the world.

Adding conformity to the model also creates the possibility of stable, persistent disagreement about which theory to adopt. In the Bala-Goyal base model, scientists always reach consensus, either correct or not. But now imagine a scenario in which scientists are clustered in small, tight-knit groups that are weakly connected to each other.

Tight-knit groups weakly connected

This is not such an unusual arrangement. Philosopher of science Mike Schneider points out that in many cases, scientists are closely connected to those in their own country, or who are part of their racial or ethnic group. He shows that when scientists care about conformity, these kinds of groups can be a barrier to the spread of new ideas.

Our results support this. When we have cliques of scientists, we see scenarios in which one group takes the worse action (no hand-washing) and the other takes the better action (hand-washing), but since the groups are weakly connected, conformity within each group keeps them from ever reaching a common consensus. In such a case, there are members of the non-hand-washing group who know the truth—the ones who get information from the other group—but they never act on it and so never spread it to their compatriots.
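
Purely as an illustration, continuing the sketches above (the network layout and parameter values are ours), we can place conformist agents on two tight-knit cliques joined by a single bridge and watch this kind of stable disagreement emerge.

```python
def make_two_cliques(size, conformity):
    # Two fully connected groups of ConformistAgents; only agents 0 and `size`
    # know each other across the divide.
    agents = [ConformistAgent(conformity) for _ in range(2 * size)]
    neighbors = {}
    for i in range(2 * size):
        group = range(size) if i < size else range(size, 2 * size)
        neighbors[i] = [j for j in group if j != i]
    neighbors[0].append(size)  # the single weak bridge
    neighbors[size].append(0)
    return agents, neighbors


def run_networked_round(agents, neighbors):
    results = []
    for agent in agents:
        action = agent.choose()
        outcome = 1 if random.random() < P_SUCCESS[action] else 0
        results.append((action, outcome))
    for i, agent in enumerate(agents):
        for j in [i] + neighbors[i]:  # own result plus neighbors' results
            agent.observe(*results[j])
        agent.last_choices = [results[j][0] for j in neighbors[i]]


agents, neighbors = make_two_cliques(size=6, conformity=0.5)
for _ in range(200):
    run_networked_round(agents, neighbors)
print([agent.choose() for agent in agents])
# Fairly often the two cliques lock into different actions and stay there,
# even though some evidence crosses the bridge.
```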

We can even find networks in which everyone holds the true belief—there is a real consensus in belief—but nonetheless a large portion of scientists perform the worse action (not washing their hands) as a result of conformist tendencies. This can happen when people in a tight-knit cluster all get good evidence about the benefits of hand-washing from those around them but, because they are closely connected to each other and prefer to conform, are unwilling to change their practice.

So, we see that the desire to conform can seriously affect the ability of scientists, or other people gathering knowledge, to arrive at good beliefs. Worse, as philosopher Aydin Mohseni and economist Cole Williams argue, knowing about conformity can also hurt scientists’ ability to trust each other’s statements. If physicians say they are completely certain their hands do not carry cadaverous particles, it is hard to know whether they are convinced of this because of good evidence or because they are simply following the crowd.

Good evidence vs. the desire to conform

We have been thinking of “desire to conform” as the main variable in these models, but really, we should be looking at the trade-off between the desire to conform and the benefits of successful actions. In some situations the world pushes back so hard that it is nearly impossible to ignore, even when conformity is tempting. Suppose that, in our models, action B is much better than A. It pays off almost all the time, while A does so only rarely. In this sort of case, we find that agents in the models are more likely to disregard their desire to conform and instead make choices based on the best evidence available to them.
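
In the toy sketches above, this corresponds to widening the gap between the two payoff probabilities (again, the specific numbers are ours, chosen only for illustration). Once the better action pays off far more often, the evidence term swamps the same conformity bonus that previously sustained disagreement.

```python
# Same cliquish network and conformity weight as before, but now the world
# pushes back hard: "B" pays off far more often than "A".
P_SUCCESS = {"A": 0.1, "B": 0.9}

agents, neighbors = make_two_cliques(size=6, conformity=0.5)
for _ in range(200):
    run_networked_round(agents, neighbors)
print([agent.choose() for agent in agents])  # now almost always everyone on "B"
```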

In medieval times, naturalists believed in the existence of the Vegetable Lamb of Tartary—an Indian tree bearing gourdlike fruit, within which could be found tiny lambs, complete with flesh and blood. For these naturalists, there was almost no cost to believing the wrong thing. This means that any desire to conform could swamp the costs of holding a false belief. The wise medieval thinkers who waxed poetic about how delicious the Vegetable Lamb was derived social benefits from agreeing with their fellow literati. And the world never punished them for it.

There was almost no cost to medieval scholars who believed in the mythological Vegetable Lamb. Not so for Semmelweis’s contemporaries. Source: Wikipedia

The Semmelweis case is different. There is no doubt that Semmelweis’s hand-washing practice had dramatic real-world consequences—so it might seem surprising that physicians nonetheless conformed rather than try the promising new practice. But notice that the physicians themselves were not the ones at risk of death. Neither were their friends, relatives, or members of their social circles, as the patients in their clinics were generally poor. If the consequences of their choices were more personal, they might have ignored the reputational risks of admitting that their hands were unclean and listened to Semmelweis.

The difference between cases in which beliefs really matter and cases in which they are more abstract can help us understand some modern instances of false belief as well. When beliefs are not very important to action, they can come to take the role of a kind of social signal. They tell people what group you belong to—and help you get whatever benefits might accrue from membership in that group.

For example, an enormous body of evidence supports the idea that the biological species in our world today evolved via natural selection. This is the cornerstone of modern biology, and yet whether or not we accept evolution—irrespective of the evidence available—has essentially no practical consequences for most of us. On the other hand, espousing one view or the other can have significant social benefits, depending on whom we wish to conform with.

When conformity and polarization look the same

Thinking of false beliefs as social signals makes the most sense when we have cliquish networks. When two cliques settle on two different beliefs, those beliefs come to signal group membership. A man who says he does not believe in evolution tells you something not just about his beliefs but about where he comes from and whom he identifies with.

Notice that the resulting arrangement can look an awful lot like polarization: there are two (or more) groups performing different actions (and perhaps with different beliefs), neither of which listens to the other. In both cases, there is no social influence between the groups. But perhaps surprisingly, the reasons are very different.

In polarization models, social influence fails because individuals stop trusting each other when their beliefs are too different. In the conformity models, we see an outcome that, practically speaking, looks the same as polarization because everyone tries to conform with everyone else, but some people just do not interact very often.

The fact that polarization-like behavior can arise for very different reasons makes it especially hard to evaluate possible interventions. In the conformity case, disturbing people’s social networks and connecting them with different groups should help rehabilitate those with false beliefs. But when people polarize because of mistrust, such an intervention would generally fail—and it might make polarization worse. In the real world, both effects seem to be at work, in which case interventions will need to be sensitive to both explanations for false belief.

Our social networks are our best sources of new evidence and beliefs. But they also open us up to negative social effects. Semmelweis understood why women were dying, but he failed to understand how the social networks of his contemporaries were thwarting his attempts at reform.

Adapted from The Misinformation Age: How False Beliefs Spread by Cailin O’Connor and James Owen Weatherall, Yale University Press. Copyright 2019, Cailin O’Connor and James Owen Weatherall. All rights reserved.