Social Science, Ideology, Culture, & History

We are, in sum, incomplete or unfinished animals who complete or finish ourselves through culture—and not through culture in general but through highly particular forms of it.

— Clifford Geertz

In 1973, almost a half century ago, distinguished psychologist Kenneth Gergen published an extremely significant and highly controversial article in the prestigious Journal of Personality and Social Psychology. The paper was called “Social Psychology as History,” and in it, Gergen suggested that the aim of modeling psychological science after the natural sciences was deeply mistaken. It was mistaken because the goal of developing timeless generalizations about your subject was the wrong goal when your subject was human nature. Human beings lived in history and in culture. As history moved and culture changed, human nature changed with it.

While it is true that all the phenomena that occupy scientists occur in historical contexts (what are the big bang and the theory of evolution if not theories of natural history, after all), the timescale of natural historical events is vast; in contrast, the human timescale is minuscule. One of the questions Gergen raised was whether it made sense to aspire to the kinds of generalizations that physicists sought, or to use the methodological tools that chemists or molecular biologists used, to seek truths about the essentially cultural and historical phenomena that made up human nature. This idea was challenging to his colleagues, to say the least.

But Gergen added another piece to his argument that was even more challenging. If people were essentially historical and cultural creatures, psychologists had to consider one potentially significant part of that culture and history—what people like us, psychological scientists, said about them. We do our research, we develop our theories, we share those theories with colleagues by publishing them, and then what? Do they sit inertly in journals on dusty library shelves, or do they become a part of our culture and history, potentially causing people to behave differently than they did before the theories entered the world? In other words, Gergen was wondering whether our claims about human nature might actually change human nature.

Gergen was wondering whether our claims about human nature might actually change human nature.

Suppose, for example, that Stanley Milgram’s startling studies of the willingness of human beings to obey authority became widely known, as indeed they did. Would that change the way people responded to demands made on them by those in authority? This certainly seems plausible. Some people, having been alerted to their tendency to do what they were told, might stiffen their backs and resist. Others might just resign themselves to going along to get along. It could go either way, but whichever way it went, Gergen suggested, the very fact that the Milgram study existed would change the phenomenon that Milgram was trying to understand. And Gergen voiced this concern 50 years ago, when psychologists were not writing national bestsellers and psychological findings leaked from the laboratory into the real world only rarely.

Finding things out in physics is hard. Finding things out in genetics and virology is hard. But at least physicists don’t have to worry that their theories of planetary motion will affect how planets move. And at least geneticists and virologists don’t have to worry that their theories about genetics and viruses will change how genes and viruses behave in the world. Their theories about planets, genes, and viruses are causally inert. If Gergen is right, this is not true about the theories of psychologists and other behavioral scientists. If Gergen is right, what should psychology (and the other social sciences) look like?

* * *

Science, in general, tries to answer “why” questions. It is a systematic attempt to understand and explain. Usually, to understand a domain is to be able to provide some sort of causal explanation of the phenomena in that domain. And often, a test of that causal explanation is science’s ability to predict whether similar phenomena will occur in the future, and if so, under what circumstances.

So, scientists might ask, Why do the planets move in their precise orbits around the sun? Why do boats float? Why is the earth getting warmer? Why are viruses killing people? Why do vaccines protect against viruses? Why are prices rising? Why isn’t unemployment falling? Why do young kids learn their second languages without an accent? Why do losses have a bigger impact on well-being than do comparably sized gains? Why do police react to nonwhite citizens with hostility and aggression? Why do people misremember what they have experienced? Why do people believe “facts” about the world that are patently untrue?

Science is not unique in its efforts to understand and explain. Many traditional worldviews also seek to understand, to explain, and to predict. But science is different. What makes it different? Many years ago, anthropologist Robin Horton sought to answer this question in a careful comparison of scientific thinking and traditional African thinking. He concluded that traditional worldviews were complex, comprehensive, sensitive to evidence, and aimed at finding causes in much the same way that science was. But unlike science, traditional worldviews, influential though they were, had not made much “progress” over the centuries, whereas science had transformed everything it touched. Why? Horton suggested that the “secret sauce” that science alone possessed was a tool of inquiry one did not find in other modes of thought. And that tool was the experiment.

The experiment, typically in the laboratory, allowed you to take a phenomenon from nature and transform the conditions under which it occurred. It allowed you to strip away real life, detail by detail, and observe whether the phenomenon continued to occur unchanged. And if it was changed, changed in what ways? Does fire need oxygen? Imagine a world without oxygen. Would there still be fire? Better yet, create a world without oxygen (in the laboratory) and watch the fire go out.

To do an experiment is to imagine a world that is different from the actual world and then build that world and examine what the phenomenon does in that new world. It is to be able to think about the world as it isn’t—to conjure counterfactuals and then bring them into existence. How creative that is! Imagining counterfactuals is hard enough, but imagining the right counterfactuals is harder still. After all, there are infinitely many ways to imagine the world as different from the actual one. Many of these imagined worlds would be silly to contemplate. But some will enable you to find just the causal structure you are looking for.

If experimentation is creative, then experiments are creations. Experiments invent facts, or stylize facts, rather than discovering them. What I mean is that having found that something is true in the world that you created, you need to show that it is also true in the actual world in which you live.

* * *

It took me quite some time to come to this view. I was trained in the psychology of B. F. Skinner, perhaps the major figure in mid-twentieth-century psychology. Skinner was deeply committed to the view that we could understand almost everything about human behavior by understanding how behavior was affected by rewards and punishments and then analyzing the environment to see which rewards and punishments were operating (see his Science and Human Behavior).

To develop this view, Skinner invented—created—tightly controlled environments in which simple repetitive behaviors of deprived animals (typically rats or pigeons) could produce outcomes they needed (typically food or water). In these simple environments, manipulation of contingencies between behavior and outcome allowed an extraordinary degree of prediction and control of the animal’s behavior. From results obtained in settings like these, Skinner argued that he had created a perfect microcosm for understanding human behavior in the real, complex world. We could understand factory workers pressing slacks for a wage in a clothing factory by studying rats pressing levers for food in a Skinner box. This Skinnerian logic seemed compelling to me. It presented a picture of human nature that I did not find appealing, but there was no arguing with the data.

Or was there? Thanks to the patient and painstaking efforts of Richard Schuldenfrei and Hugh Lacey, two philosopher colleagues of mine at Swarthmore College, I slowly came to believe that the reason Skinner drew the conclusion that rewards and punishments were all that mattered was that he had created a world in which rewards and punishments were all that could matter. What was true inside the Skinner box might not be remotely true outside it—unless you engineered the actual world so that it became an extension of the Skinner box. What Schuldenfrei and Lacey taught me is that that is exactly what the industrial revolution, ushered in by market capitalism, had done. Consider what Adam Smith, the founding father of the economic system most of us call home, had to say in 1776:

“It is in the inherent interest of every man to live as much at his ease as he can; and if his emoluments are to be precisely the same whether he does or does not perform some very laborious duty, to perform it in as careless and slovenly a manner as authority will permit.”

In other words, people work for pay—nothing more and nothing less. Smith’s belief in the power of rewards led him to argue for organizing work by dividing labor into simple, easily repeated, essentially meaningless units, much like the lever presses of rats or the key pecks of pigeons. As long as people were getting paid for what they did, it didn’t matter very much what their jobs entailed. And by dividing labor into little bits, society would gain enormous productive efficiency. In extolling the virtues of the division of labor, Smith offered a description of a pin factory that has become famous:

“One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head . . . I have seen a small manufactory of this kind where ten men only were employed . . . They could make among them upwards of forty-eight thousand pins a day . . . But if they had all wrought separately and independently . . . they certainly could not, each of them, make twenty.”

You might ask why anyone would choose to work in Smith’s pin factory, putting heads on pins, minute after minute, hour after hour, day after day. Smith’s answer was that of course, people wouldn’t enjoy working in the pin factory. But they wouldn’t enjoy working anywhere. What Smith was telling us is that the only reason people do any kind of work is for the payoffs it produces. And as long as it produces adequate payoffs, what the work itself consists of doesn’t matter.

If Gergen is right, then at least in the social sciences, theories, rather than being beholden to facts, can shape facts in a way that strengthens the theories.

More than a century later, the same view guided Frederick Winslow Taylor, the father of what came to be called the “scientific management” movement. Taylor used meticulous time and motion studies to refine the factory, as envisioned by Smith, so that human laborers were essentially part of a well-oiled machine. And he designed compensation schemes that pushed employees to work hard, work fast, and work accurately.

Then, a generation later, along came Skinner. And Skinner created in the laboratory a world that was modeled on the world that had been created in the factory. But the factory was an invention—a creation—just as was the Skinner box. The question that was left unanswered, or with the answer presumed, was how much this invented world captured human nature in the actual world. It was at least possible that the created world had rather little to tell us about the actual world, except perhaps that the actual world could be transformed into a version of the created world.

And this brings us back to Gergen’s argument about people as cultural and historical creatures. When done right, science is an ongoing conversation between theory and data. The point of theories in science is to organize and explain the facts. Facts without organizing theories are close to useless. But theories must ultimately be accountable to the facts. And new facts force us to modify or discard inadequate theories.

That’s the ideal. But if Gergen is right, then at least in the social sciences, theories, rather than being beholden to facts, can shape facts in a way that strengthens the theories.

* * *

“If you build it, they will come.” This is the mantra that the main character in the movie Field of Dreams keeps hearing as he turns his farmland into a baseball park in the middle of nowhere. He builds it, and they do come. I want to suggest, influenced by Gergen, that at least sometimes, when social scientists build theories, the people come. That is, the people may be nudged into behaving in ways that support the theories.

In other contexts, this is familiar territory. Does the market cater to consumer desires or does it create consumer desires? Do the media cater to people’s tastes in news and entertainment or do the media create those tastes? Whenever we encounter markets creating demand, or media creating tastes, we are encountering a version of the process articulated by Gergen.

Adam Smith’s ideas about human laziness helped give shape to the industrial revolution. As economic historian Karl Polanyi pointed out in The Great Transformation, people’s conceptions of economic activity prior to the industrial revolution were very different from what they were after the revolution, as was the form and manner of the work they did. Smith might have been wrong about people when he wrote The Wealth of Nations. But the monumentally influential book ushered in a set of changes in the economic and cultural landscape that made his ideas true. Said another way, Smith’s idea about human nature—his “invention” or “creation”—was a piece of technology every bit as world changing as a microchip, a search engine, or a social network.

But there is something special about this technology of ideas. Search engines and microchips don’t change the world unless they work. Ideas, in contrast, can have a major impact on human life even when they are false. Adam Smith’s conception of human beings as lazy was untrue. People want to work, and they want to work well. They put effort into what they do and take pride in excellent results (I make an extended argument along these lines in my book Why We Work).

Yet if the only work available to people is in places like Smith’s pin factory (or a modern call center or fulfillment center), then his account of human nature starts to look true—starts to be true.

Why else would one work in a pin factory except for the wage? Why would one work carefully and diligently unless one’s wage depended on it? And why would a rat press a lever, or a person press a pair of slacks, except for the payoff it produced? Skinner created a laboratory setting in which nothing but payoffs could matter. His was a psychology of the assembly line. And if all we see around us is one or another version of the assembly line, then our behavior will come to resemble the behavior of Skinner’s rats.

Ideas about human nature that are false when they are made can become true as social institutions, like workplaces, become shaped by them.

Karl Marx used the term “false consciousness” or “ideology” to label such incorrect ideas. The term “ideology” has not been used consistently over time. It was coined in France in the eighteenth century to denote a “science of ideas.” People whom the French emperor Napoleon termed ideologues were so in love with ideas that they ignored empirical evidence, sometimes right in front of their noses, that might contradict those ideas. A more recent manifestation of this view of people as so committed to ideas that they ignore evidence can be found in Jonathan Haidt’s The Righteous Mind, which argues that people’s moral commitments stem not from reason and reflection but from deep-seated intuitions of which they are largely unaware. That is, people use reason to make a case for what they already believe, as a lawyer does—not to tell them what they ought to believe, as a judge does.

This orientation can lead not only to ignoring evidence but to distorting it, a result of what psychologist Lee Ross called “naïve realism.” The naïve realist is someone who thinks, “I see things as they are; people who disagree with me are distorting the truth.” For Marx, ideologues weren’t just ignoring evidence to preserve their pet theories; they were distorting evidence to conform to what they already believed, or were conditioned by their circumstances to believe, or wanted to believe. And beyond distorting evidence, in line with Gergen’s suggestion that ideas, in the form of scientific theories, can change reality, ideas about human nature that are false when they are made can become true as social institutions, like workplaces, become shaped by them. Adam Smith seemed to know this. He writes, in The Wealth of Nations:

“The man whose life is spent in a few simple operations . . . has no occasion to exert his understanding, or to exercise his invention in finding out expedients for difficulties which never occur. He naturally loses, therefore, the habit of such exertion and generally becomes as stupid and ignorant as it is possible for a human creature to be.”

The key things to notice about this statement are the words “loses” and “becomes.” Here is Smith, the father of the assumption that people are basically lazy and work only for pay, saying that work in a factory will cause people to “lose” something, and “become” something. So what is it that they had before entering the factory that they “lost”? And what is it that they were before entering the factory that was different from what they “became”? Right here in this quote we see evidence that Smith believed that what people were like as workers depended on the conditions of their work. And yet, over the years, this nuanced understanding of human nature as the product of the human environment got lost, or forgotten.

* * *

So, ideas change people. The pressing question is how. How can a technology of ideas take root, even when the ideas are false—even when they are ideology?

I think there are three basic dynamics through which false ideas can become true. The first way ideology becomes true is by reconstrual—by changing how people think about and understand their own actions. For example, someone who volunteered every week in a homeless shelter might one day read a book that tells him it is human nature to be selfish. He might then say to himself, “I thought I was acting altruistically. Now social scientists are telling me that I work in a homeless shelter for ego gratification.”

The second mechanism by which ideology becomes true is via what is called the self-fulfilling prophecy. Here, ideology changes how other people respond to the actor, which, in turn, changes what the actor does in the future. The phrase “self-fulfilling prophecy” was coined by sociologist Robert Merton in 1948. He discussed examples of how theories that initially do not describe the world accurately can become descriptive if they are acted upon. In essence, a self-fulfilling prophecy is, in the beginning, “a false definition of the situation evoking a new behavior that makes the originally false conception come true.”

These two mechanisms might be expected to have only minor effects; they are changing people’s conceptions of themselves “retail,” i.e., one person at a time. Can ideology also work “wholesale,” at scale? This brings me to the final mechanism by which ideology can have an influence. When institutional structures are changed in a way that is consistent with the ideology, they can change everyone and everything. The industrialist believes that workers are motivated only by wages and then constructs an assembly line that reduces work to such meaningless bits that there is no reason to work aside from the wages. When social structures are shaped by ideology, ideology can change the world.

* * *

I have argued here that the social sciences are delivering to us something less than they hope. Rather than giving us timeless truths, they give us evidence of what may be true in a particular time and place, but not in all times and places. Indeed, one of the factors that may affect the generality of what the social sciences tell us is the very fact that they are telling us. (That indeed was Gergen’s point.)

One response to my argument is, “Duh! Who doesn’t know this?” Fair enough. Everyone working in the social sciences knows that what they find has boundary conditions. This limitation is so obvious that it doesn’t need to be stated. Even Newtonian mechanics has boundary conditions. But in the case of human nature and human behavior (unlike the motions of planets and falling apples), those boundary conditions may be so important and so dynamic that simply leaving them unstated and largely unexplored is leaving most of what needs to be understood invisible.

The social sciences need to be sciences where boundary conditions are at the center of the action. This is one of the things that anthropologist Clifford Geertz had in mind when he called human beings “unfinished animals.” Culture changes us, and as culture changes, our natures change as well. So the attempt to get to the bottom of human nature must include an attempt to understand the way in which people are embedded in culture and history.

You might think that, living as we are in what some have called a “post-truth” culture, in which everything (e.g., global warming, vaccine efficacy, electoral results, the size of crowds at presidential inaugurations) is up for grabs, the last thing we need is another assault on the credibility of science. So let me be clear. Even if we are living in a post-truth world, I am not suggesting that we should be. I am not suggesting that truth is socially constructed—that you have your truths and I have mine.

What I am suggesting is that human nature is socially constructed. This is what it means to say, as Geertz does, that “culture finishes us.” The tools of science are certainly fallible (some suggest that the history of science is just one damn mistake after another), but science can find out, like no other activity in human history, what the truth is at a particular moment in a particular place. We should respect empirical evidence while at the same time asking ourselves about the limits of its applicability. And we should be ruthless in our criticism of those who blithely ignore empirical evidence, or flat-out contradict it.

Rather than giving us timeless truths, the social sciences give us evidence of what may be true in a particular time and place, but not in all times and places. Indeed, one of the factors that may affect the generality of what the social sciences tell us is the very fact that they are telling us.

We can also build into our methods an explicit appreciation of culture and history. The discipline of cultural psychology already does this, welcoming the serious study of culture into the psychological universe. The same can be done for history. Empirical evidence from history should inform the kinds of theories psychologists construct. Is there controversy and uncertainty among historians? Of course there is. But there is controversy and uncertainty among those who get their truths from the laboratory. Fallibility, not certainty. Fallibility, not nihilism.

Nobel Prize–winning economist Robert Shiller recently suggested, in his book Narrative Economics, that the best way to understand people and the institutions with which they live is historical. People make sense of the world by telling themselves stories—stories about where they were, where they are, and where they are going. These stories tend to be headed somewhere, not falling back on themselves. There are always new inputs that change the shape of existing institutions and thus change the direction of people’s stories. Society is what might be called an “open” system, in which phenomena created by social structures and practices at one point in time become causal agents that shape new practices going forward. In the present moment, for example, the explosion of social media has introduced influences on us that no previous efforts to understand human nature had to cope with. My bet is that the human world will never again be what it was even twenty years ago.

Our narrative—our history—helps us make sense of the present, even if it does not allow us to predict the future. (As some critics of economics like to point out, economists have predicted nine of the last three recessions!) Our narrative helps us to understand. Science has a prominent role to play in shaping that narrative and informing that understanding, but only if we properly understand what science can do—and what it can’t. Science at its best helps us develop the most perspicuous narratives, the most compelling understanding. But it does not offer proof. Experiments are not “facts” that must be incorporated into any account so much as demonstrations—creations—that may guide our forming narratives in ways that mere observation can’t match. But they will only play this role when they are aided by the creative and imaginative interpretations of researchers who are trying to develop the best narrative they can of the current state of human nature and human society.


This essay first appeared in Behavioral Scientist’s award-winning print edition, Brain Meets World.

Disclosure: Barry Schwartz is a member of the Behavioral Scientist’s advisory board.