The Volunteer’s Dilemma

On March 27, 1964, a New York Times headline proclaimed, “37 Who Saw Murder Didn’t Call the Police,” with the subheading, “Apathy at Stabbing of Queens Woman Shocks Inspector.” “For more than half an hour,” the article began, “38 respectable, law-abiding citizens in Queens watched a killer stalk and stab a woman in three separate attacks in Kew Gardens.”

The story about the New Yorkers who “didn’t want to get involved” as they heard the blood-curdling screams of Kitty Genovese became a parable of the callousness and alienation of modern urban life. It was soon amplified by an article in Life magazine titled “The Dying Girl That No One Helped” and a book by the New York Times editor A. M. Rosenthal called Thirty-Eight Witnesses.

Some journalists never let the facts get in the way of a good morality tale, and the thirty-eighth witness who materialized between the headline and the body of the original story was just one of many humbugs in the report. As the Times, to its credit, admitted in a series of follow-ups over the decades, only six people, not thirty-seven or thirty-eight, witnessed parts of the incident, which consisted of two attacks, not three, and some thought the screams were from a quarrel between lovers or drunks. Two people did in fact call the police, and a neighbor ran out and cradled the dying Genovese in her arms until an ambulance arrived.

Still, no one can deny that people often fail to act when they know they should. Many of us have walked by a homeless person, or been awakened by a scream and turned over to fall back asleep, or shirked refilling a coffee pot in a communal kitchen.

The phenomenon of bystander apathy has been well-documented in a set of classic studies from the golden age of social psychology, when experiments were a kind of performance art designed to raise awareness of the dangers of mindless conformity (this was before committees for the protection of research subjects put the kibosh on the genre). The psychologists John Darley and Bibb Latané suspected that people in the presence of other people might fail to respond to an obvious need not because of apathy but because of a diffusion of responsibility. Everyone assumes that someone else will step in, and that if no one does, the situation mustn’t be all that dire.

In experiments that have become a staple of the undergraduate psychology curriculum, Darley and Latané brought people into the lab to fill out questionnaires and then staged an emergency, such as a loud crash in an adjoining room followed by agonized moans, or smoke pouring out of a ventilator. If the participant was sitting with a confederate of the experimenter who continued to fill out the questionnaire as if nothing was happening, 80 percent of the time the participant did nothing too. When the participants were alone, only 30 percent failed to respond.

But pondering the game-theoretic payoffs faced by the Good Samaritans and the Don’t-Get-Involvers provides a deeper explanation than the nebulous metaphor of diffusion. The sociologist Andreas Diekmann analyzed the bystander effect as a game he called the Volunteer’s Dilemma. If someone intervenes, then each bystander enjoys a benefit, namely the reduction of distress at the thought of a person in danger. But the intervener incurs a personal cost in risk, time, and forgone opportunities to do something else. The best outcome for each bystander, then, is for someone else to intervene, and the worst is for no one to intervene, with oneself intervening falling in between. Each volunteer would step in if he was certain that no one else intended to, so he tries to discern their intentions while hiding his own. The result is an outguessing standoff, like poker players bluffing and calling, generals attacking and defending, or hockey players shooting and goaltending, each hoping to exploit a longstanding habit or momentary tell in the other.
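
To make the payoff structure concrete, here is a minimal sketch in Python. The benefit and cost figures (10 and 4) are illustrative numbers, not values from the analysis; only their ordering matters.

```python
def bystander_payoff(you_volunteer: bool, someone_else_does: bool,
                     benefit: float = 10.0, cost: float = 4.0) -> float:
    """Payoff to a single bystander in the Volunteer's Dilemma.

    benefit: the relief every bystander feels once someone intervenes.
    cost:    what the intervener personally pays in risk, time, and
             forgone opportunities.
    """
    if you_volunteer:
        return benefit - cost      # someone helped, but you paid the cost
    if someone_else_does:
        return benefit             # best case: someone else stepped in
    return 0.0                     # worst case: no one helped

# The ranking that defines the dilemma:
#   someone else helps (10) > you help (6) > no one helps (0)
```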

Since predictability is fatal in an outguessing standoff, the best strategy is to roll the mental dice and act randomly, with whatever probability makes your opponent indifferent between his two choices (in game theory, choosing one of several moves with certain probabilities is called a mixed strategy). The result is a Nash equilibrium, a nervous deadlock in which neither side can do better with any other strategy. In the Volunteer’s Dilemma, the strategy is to volunteer with a probability that depends on the relative costs of no one helping, someone helping, and oneself helping, and on the number of potential volunteers. The more volunteers, the lower the odds that you have to spring into action, since it becomes likelier that someone else will spring first. The classic bystander effect, in which a greater number of bystanders reduces the chance that any one of them will step in, is simply what happens when rational actors in a Volunteer’s Dilemma play their equilibrium strategy.
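
The equilibrium has a closed form, and a few lines of Python make the logic explicit. If each of N bystanders gains a benefit U when anyone volunteers and the volunteer pays a cost K (with U > K > 0), then volunteering yields U − K for certain, while shirking yields U only if at least one of the other N − 1 bystanders steps in. Indifference between the two requires the chance that no one else volunteers to equal K/U, which gives p = 1 − (K/U)^(1/(N−1)). A sketch, reusing the illustrative numbers from above:

```python
def equilibrium_volunteer_prob(n: int, benefit: float = 10.0,
                               cost: float = 4.0) -> float:
    """Symmetric mixed-strategy Nash equilibrium of the Volunteer's Dilemma.

    Indifference condition:
        benefit - cost == benefit * (1 - (1 - p) ** (n - 1))
    which solves to (1 - p) ** (n - 1) == cost / benefit.
    """
    if n < 2:
        return 1.0  # a lone bystander always acts, since benefit > cost
    return 1.0 - (cost / benefit) ** (1.0 / (n - 1))

for n in (2, 5, 10):
    p = equilibrium_volunteer_prob(n)
    nobody_helps = (1.0 - p) ** n
    print(f"{n:2d} bystanders: each volunteers with p = {p:.2f}; "
          f"P(no one helps) = {nobody_helps:.2f}")
```

As the pool grows, each individual’s probability of volunteering falls, and the chance that no one helps at all actually rises: the classic bystander effect, derived from equilibrium play rather than apathy.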

Notice that the entire scenario presumes common knowledge: knowledge of others’ knowledge, knowledge of their knowledge of one’s knowledge, ad infinitum. Common knowledge is the subject of my latest book, When Everyone Knows That Everyone Knows…, whose title spells out the state itself: I know that you know that I know that you know, and so on. Common knowledge is a distinctive cognitive state corresponding to the sense that something is public, unignorable, or “out there.” It is logically different from private knowledge: learning about something in public, even if everyone already knows it, can change everything.

In the scenario above, the volunteers all know about the need to help, and know the others know. When the knowledge is asymmetrical, everything changes. To see this, imagine renting an apartment from an absentee landlord who needs one of the tenants to change the oil filter on the building’s furnace, or else the filter will clog and the whole building will be out of heat and hot water. If you were the only tenant on the premises, you would have little choice. But if other tenants were there and knew of the chore, you might hope that one of them would volunteer, and the more of them there are, the likelier one will step in.

But it’s not just the existence of other tenants that should affect your decision but your knowledge about their knowledge, and theirs about yours. Suppose the landlord has not generated common knowledge by posting a sign about the need for the filter change, or sending out a notice to everyone by email, or announcing it at an annual meeting, but instead informs the tenants one by one. As you walk down the hallway you overhear him telling another tenant about the need for someone to change the filter. If you do a quick about-face and tiptoe away before you’re noticed, you can leave your neighbor with the burden. You have second-order knowledge (you know that he knows), but he has only private knowledge, so he’s on the hook.

Better still, your freedom doesn’t depend on how many other tenants know about the need, as long as none of them think you know. With private knowledge, unlike common knowledge, responsibility needn’t diffuse as the number of potential volunteers increases.

Now consider a third level. Suppose the landlord tells you about the chore, and tells you that he’s telling the other tenants about the chore, while also mentioning to them that he told you. But he doesn’t tell the other tenants that he told you he’d be seeing them; as far as they’re concerned, you may think you’re the only one who knows. You have third-order knowledge (you know that they know that you know about the chore), but they have only second-order knowledge (they know that you know, but they don’t know you know they know). So now you’re on the hook—you have reason to believe they’ll shirk, so you have to act.

Think you can handle a fourth level? The landlord has sent just you and another tenant a notice about the chore, and you see the other tenant opening his mailbox and pulling out the same notice you got. He spots you looking at his envelope, but just before he tries to get your attention, your cell phone rings and you get absorbed in a conversation. Now you have fourth-order knowledge: you know that he knows that you know that he knows about the chore, but he’s stuck at three orders: he doesn’t know you know he knows you know. You’re back off the hook—as far as he’s concerned, you have a reason to shirk, so he has to act.
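
The alternating pattern across these scenarios can be captured in a toy recursion. This is a minimal sketch, not a formal analysis: it assumes only that each tenant best-responds to a neighbor who is reasoning exactly one knowledge level below.

```python
def should_volunteer(order: int) -> bool:
    """Toy model of the hallway scenarios.

    order = 1: you have only private knowledge and believe you are the
               only one who knows, so you must act.
    order > 1: you know the other tenant is reasoning at order - 1, so
               you predict their move and act only if they will shirk.
    """
    if order == 1:
        return True
    other_acts = should_volunteer(order - 1)
    return not other_acts

for k in range(1, 5):
    print(f"order {k}: {'volunteer' if should_volunteer(k) else 'shirk'}")
# order 1: volunteer   order 2: shirk   order 3: volunteer   order 4: shirk
```

The zigzag this recursion produces (act, shirk, act, shirk) is exactly the pattern traced through the second-, third-, and fourth-order scenarios above.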

Our team wanted to see whether real people, when placed in a Volunteer’s Dilemma, go through the recursive thinking about thinking that would allow them to make the shrewdest decision about whether to volunteer or shirk. We invited internet users to pretend they were merchants in a marketplace and could earn a certain profit every day. But on some days the marketplace owner might need help from one of the merchants, who must sacrifice half his earnings to carry out the chore; if the owner failed to find a volunteer, he’d fine everyone their entire earnings.

As in our earlier experiments with the butcher and the baker, the crucial information (in this case, whether the owner needed help that day) might be public knowledge broadcast over a loudspeaker, or private knowledge conveyed by a messenger to the merchant alone, or various orders of embedded knowledge about knowledge, depending on what the messenger told them he was telling the others. The participants had to decide whether to volunteer and sacrifice half their earnings or shirk and take a chance at earning nothing.

And they behaved, more or less, like recursive mentalizers in a Volunteer’s Dilemma should, zigzagging in their volunteerism with the level of embedded knowledge. They volunteered when the messenger gave them private knowledge, shirked when they had second-order knowledge about the other merchants’ private knowledge, sprang back into action when they got third-order knowledge, and shirked when they got fourth-order knowledge. When the loudspeaker granted them common knowledge, their rate of volunteering fell in between, as if they mentally rolled the dice.

And in a twist that brings us back to the classic bystander effect, we compared what happened when the participants thought they were one of just two merchants and when they thought they were one of five. With common knowledge, more merchants always led to less helping, just as in the textbook experiments. But with private knowledge, and with intermediate levels of embedded knowledge, the size of the volunteer pool made little or no difference. All this is what you’d expect from people trying to read the minds of other mind readers.

So as preposterous as it may seem that I could know that you know that I know that you know something, we appear to be equipped with cognitive processes that strive to do just that. We think about thoughts about thoughts, at least to some number of turtles. Most commonly, we recognize that if something is self-evident, or even salient to us, it’s likely to seem so to others. And we jump from one kind of thinking to the other, sometimes when we shouldn’t, but often when we should. As I show in the book, the fruits of this thinking drive a vast range of human affairs, including elections, game shows, economic bubbles, and, as we’ve seen, when and how we help.


Excerpted from When Everyone Knows That Everyone Knows… by Steven Pinker. Copyright © 2025 by Steven Pinker. Reprinted by permission of Scribner, an Imprint of Simon & Schuster, LLC.