When the Nerves of Knowledge Send False Signals: A Conversation on Our Age of Misinformation

“The social body to which we belong is at this moment passing through one of the greatest crises of its history, a colossal process which may be likened to a birth. We have each of us a share in this process, we are to a greater or less extent responsible for its course. To make our judgments, we must have reports from other parts of the social body; we must know what our fellow-men, in all classes of society, in all parts of the world, are suffering, planning, doing. There arise emergencies which require swift decisions, under penalty of frightful waste and suffering. What if the nerves upon which we depend for knowledge of this social body should give us false reports of its condition?”

Aside from a few anachronistic expressions, the first lines of Upton Sinclair’s The Brass Check, published 100 years ago, might as well have been written today. Sinclair was lamenting the state of journalism—specifically, money and power’s undue influence and its perverse consequences for public knowledge and democracy—but in the context of today’s willful distortion of issues like climate change by industry and politicians, I draw two lessons from his assessment. First, it’s a reminder that truthful, reliable information has always been critical to the functioning of democratic society. Second, society has faced this challenge before.

A century after Sinclair, Cailin O’Connor and James Owen Weatherall, in their new book, The Misinformation Age: How False Beliefs Spread, examine society’s other bastion of information, science, and its relation to democracy. O’Connor and Weatherall answer Sinclair’s question of what happens when the “nerves upon which we depend for knowledge” are dishonest or unreliable. They also explain the how—how those nerves come to be corrupted or send false signals.

I recently spoke with O’Connor and Weatherall about misinformation’s role throughout history, its relationship to new technology, and the ways our social networks, whether we know it or not, can perpetuate misinformation, even for scientists. (They describe this phenomenon in depth in an excerpt published alongside this conversation: “How Misinformation Can Spread Among Scientists.”)

Evan Nesterak: I’ll admit that I was a little bit skeptical of the title, the idea that we are living in the misinformation age. When you think about all of the examples from the past—religion suppressing science, the sun revolving around the Earth, etc.—why is this the misinformation age?

Cailin O’Connor: One of the things that we point out in the book is that misinformation and propaganda have been around for a really long time. Governments have interests in controlling what people believe. The church has had interests like that. So misinformation isn’t new.

There are a few things that are relatively new that have resulted from changes in media structures and especially in the advent of social media. So, for example, one really big change is that now propagandists or people who would try to misinform the public can get in touch with them directly. They can communicate with them person to person on social media, and they can use social media to pose as a friend or a confidant or an acquaintance, someone who people might trust, which was much more difficult to do before we had the internet and internet communication.

James Owen Weatherall: The idea isn’t that somehow we’re in an age where misinformation exists where it didn’t exist before. Rather, it seems striking that some recent political events in the U.S., the U.K., and Europe have arguably been influenced by misinformation in a way that seems to distinguish them from anything in recent memory, at least. So it’s something about the particular way in which misinformation is spreading, persisting, and influencing political decision-making that we think is distinctive of this age.

I’d like to discuss some of the historical examples that come to mind when I think of misinformation—propaganda during World War II, for instance. In your minds, how would social media have affected the kind of misinformation being spread then?

JOW: The way that I like to think about it is that the history of misinformation is like the history of new media over the last roughly 500 years—you can think about it from Gutenberg to Zuckerberg. What we’ve seen is new ways of propagating ideas—the printing press, daily newspapers, radio, television, 24-hour cable, the internet—each corresponding to new ways of spreading information, but also new ways of influencing people’s beliefs.

“The history of misinformation is like the history of new media over the last roughly 500 years—you can think about it from Gutenberg to Zuckerberg.”

I think that the rise of fascism in Europe in the 1930s is an example of a new medium being used in novel ways to disseminate misinformation and propaganda. And in that case it was radio. Political leaders like Hitler and Mussolini were able to reach a much larger audience directly than was possible even a few decades before that.

Would they have been even more successful if they’d had access to social media? Perhaps. But I think the right way of thinking about it is that there was a period when radio was dangerous in the same way that social media is dangerous now. And the hope is that, in the same way that it’s hard to imagine someone using radio now to launch a successful populist uprising, 30 or 50 years from now it will be hard to imagine people using the equivalent of social media to do the same thing. But presumably there will be even newer media that become new sources for this sort of thing.

Let’s discuss a misinformation strategy you describe in the book, the tobacco strategy. Can you explain what the strategy was and what it did?

COC: We have this naive view that if industry is influencing science, it must involve buying off a scientist—funding them so that they just start to get results that say whatever industry wants. But that’s really not what happens a lot of the time.

In the case of big tobacco, they went out and looked at studies by lots and lots of independent scientists, people who were not funded by tobacco and who were doing the best studies they could, and found the ones that happened not to find a link between cigarette smoking and cancer. And of course, given the way scientific evidence works, there are going to be those studies out there, because the link between tobacco smoking and cancer is probabilistic. Not everyone who smokes gets cancer.

They would find these studies, and then they would publicize them widely. They’d send them to journalists, policymakers, and doctors and say things like, Look, there are seven studies here showing that tobacco smoking doesn’t cause cancer. Of course these are real studies done by real scientists. But it’s a tremendously misleading strategy, because they don’t say there were 400 other studies that did show this link.
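
To make the arithmetic behind this strategy concrete, here is a minimal Python sketch. The sample sizes and cancer rates are invented for illustration; they come neither from the book nor from the historical record. It simulates 400 honest, independent studies of a real probabilistic link and counts how many, by chance alone, fail to find it. Those few null results are exactly what the strategy cherry-picks.

```python
import random

# Illustrative parameters, invented for this sketch (not from the book):
# each study compares 100 smokers with 100 non-smokers and reports a
# "link" if the smokers' observed cancer rate is higher.
N_STUDIES = 400
SAMPLE = 100
P_CANCER_SMOKER = 0.15      # hypothetical true rate among smokers
P_CANCER_NONSMOKER = 0.05   # hypothetical true rate among non-smokers

def run_study() -> bool:
    """One honest, independent study: does it observe a link?"""
    smokers = sum(random.random() < P_CANCER_SMOKER for _ in range(SAMPLE))
    nonsmokers = sum(random.random() < P_CANCER_NONSMOKER for _ in range(SAMPLE))
    return smokers > nonsmokers

random.seed(1)
found_link = sum(run_study() for _ in range(N_STUDIES))
no_link = N_STUDIES - found_link

print(f"{found_link} studies found a link; {no_link} found none")
# An honest summary weighs all 400 studies. The tobacco strategy instead
# publicizes only the handful of no-link studies, which chance guarantees
# will exist even though the underlying link is real.
```

Because the effect is probabilistic, a few null results are statistically inevitable, and that inevitability is what makes publicizing only those studies so misleading.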

One thing that struck me in the book was how misinformation can spread among scientists by the nature of their social network, even if they don’t know they’re spreading misinformation. Can you explain how misinformation might spread unknowingly?

COC: This is a really big question. And, of course, it doesn’t just apply to scientists. Part of the thing that’s most distinctive about humans is that we get most of our beliefs from other people. We’re able to learn something about the world and then tell that to someone else and then they know it. This is a really powerful ability, because it allows us to have culture, to build technologies, to have medicine. But whenever you have that ability, if you can share ideas, sometimes people are going to be sharing ideas and beliefs that aren’t true. So when you open the door to information, you open a door to misinformation.

“Part of the thing that’s most distinctive about humans is that we get most of our beliefs from other people.”

When we learn something from someone else, especially a scientific belief, it’s often really difficult for us to go verify it for ourselves. If someone tells you that the Earth goes around the sun, you presumably don’t have the expertise to look at the stars and figure out whether what they said is true. You just have to trust them. So in scientific communities, as in all communities, false beliefs can spread, because we do trust other people, and we share the things we believe with them.
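
As a toy illustration of that dynamic, here is a short Python sketch of a belief spreading by testimony alone. It is a deliberate simplification, not the network-epistemology models O’Connor and Weatherall actually use in the book; the ring-shaped network and the adoption probability are assumptions chosen for brevity.

```python
import random

# Toy contagion model of testimony: no one can verify the claim directly,
# so hearing it from a trusted colleague is enough to adopt it with some
# probability. (The ring network and adoption rate are assumptions made
# purely for this sketch.)
random.seed(2)
N = 20
P_ADOPT = 0.3  # chance of adopting a claim a colleague passes along
neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring of colleagues
believes = [False] * N
believes[0] = True  # one scientist starts out with a false belief

for _ in range(50):  # rounds of conversation
    for i in range(N):
        if believes[i]:
            for j in neighbors[i]:
                if not believes[j] and random.random() < P_ADOPT:
                    believes[j] = True

print(f"{sum(believes)} of {N} scientists now hold the false belief")
# No one acted in bad faith: each scientist simply trusted what a
# colleague reported, and the belief propagated through the network.
```

Swapping the ring for a denser network, or discounting any single source, changes how far and how fast the belief travels; that structural sensitivity is the kind of question the book’s models explore.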

Can you unpack the problems with the argument around certainty—that since we don’t know something for certain, we can’t do anything about it? I can see how this argument is intuitively appealing, yet, as you point out in the book, it’s essentially impossible to operate with 100 percent certainty.

COC: What we see are cases where industry interests try to prevent legislation and action by saying, Look, this is only a theory, or, We don’t know for sure that something’s happened or is happening. We need to gather more evidence because we’re not certain yet. So this is used as a kind of trick to delay legislation, for example in cases where the government wanted to regulate CFCs.

Part of the reason this is so tricky is that when it comes to scientific matters of fact, you’re never 100 percent certain about practically anything. This is an old observation from philosophy, sometimes called the problem of induction. Even something as basic as the sun coming up tomorrow—are you 100 percent certain about that? You think it’s extremely likely the sun is going to come up tomorrow, but it’s possible that between now and tomorrow the sun explodes, or the Earth is hit by an asteroid and knocked off course. So there’s always some possibility you could be wrong. The possibility of being wrong shouldn’t prevent us from acting, though.

David Hume, the philosopher, discussed this problem a lot. What he came to in the end is, Well, even if I don’t know things for sure, I just have to go about my daily life. So when it comes to matters where we need to regulate because of some danger—say, climate change or acid rain—we shouldn’t be looking for certainty, we should be looking for enough evidence to help us make a good decision about how we can best protect ourselves from a possible threat.

I want to close by asking about something you write in the conclusion: “We do think that the political situation among Western democracies suggests that the institutions that have served us well, institutions such as free and independent press, publicly funded education, scientific research, the selection of leaders and legislators via free elections, individual civil rights and liberties, may no longer be adequate to the goal of realizing democratic ideals.” Why did you come to that conclusion? And, as you admit in the book, it’s a controversial one.

JOW: [Laughter] That could be an understatement. The way we think about it is that what we really want in a democracy is a system of governance in which our values are all represented in political decision-making—a government that is appropriately related to the people being governed. Now, that idea can be implemented in many different ways. You even see this in the U.S.: over its history, senators have been elected in different ways. It used to be that state legislatures elected senators; now we have direct election of senators. In some states you have direct democracy on ballot measures; in other states you don’t. All of these are different institutions designed, in some way or another, to realize the basic democratic ideals. But they’re different from one another. Some of them are more or less effective. And some of them are more or less vulnerable to particular efforts to co-opt the process.

“Right now we often vote as if we’re deciding matters of fact. So we vote for someone who doesn’t believe in climate change…But whether or not we voted this way doesn’t change whether it’s real.”

We think of the long list of institutions that you just described as part of a broad democratic society. But I think what we’re seeing is that it’s too easy to manipulate what people believe, in such a way that although people have certain values and want certain things, their false beliefs lead them to support policies that won’t actually bring about the outcomes they care about. We see that as a failure of institutions. The idea isn’t that we want to somehow get rid of democracy. But we think that it’s time to rethink the institutions we use to realize democracy, in order to protect against threats having to do with public misinformation and the manipulation of public belief.

COC: One brief way to put the problem is that right now we often vote as if we’re deciding matters of fact. So we vote for someone who doesn’t believe in climate change and say, They’re not going to act on climate change. We’ve now voted for this person, they don’t act on climate change. But whether or not we voted this way doesn’t change whether it’s real. And it doesn’t change the fact that we have all the evidence we need to conclude that it is in fact real. So when people can be misinformed by all sorts of different agents, including industry, including the Russian government, there’s a problem when people then directly vote about whether we’re going to act on threats to our democracy.

This interview has been edited for brevity and clarity.