Defining Research Integrity for All of Science is the Moral Responsibility of Social Psychology

This article was originally published on The Psych Report before it became part of the Behavioral Scientist in 2017.

Harvard Psychology Professor Mahzarin R. Banaji penned the following speech on research integrity only a few hours before its delivery at the 2014 meeting of the Society for Personality and Social Psychology, during its Symposium on Research Integrity. After hearing an earlier talk on the same topic, she decided to forgo her planned presentation and write the speech below. In her speech, Banaji explains why it is the moral responsibility of social psychology to help define research integrity for all of science. Below is her speech as she prepared it for delivery.

Defining Research Integrity for All of Science is the Moral Responsibility of Social Psychology

More than any other science, it is our science that has shown that we, as a species, are not the beings we think we are. More than any other science, it is experiments in social psychology that have shown that in spite of genuine striving to be good and actual, conscious acts of goodness, we are also easily and capriciously capable of harm; that our objectivity is compromised; that we are easily partial; that we do not act in ways that are consistent with our own cherished values. From research on social perception, attention, and memory to research on social attributions, judgments, and decisions, we have passed peer review not once, not a hundred times, not five thousand times, but many thousands of times to demonstrate that we are not nearly as rational as the 19th-century view of humans suggested. This fact, however much it is denied or ignored by other disciplines, is deeply woven into the fabric of social psychology’s tapestry of knowledge.

We have been talking a great deal about the specific practices that will help us ensure higher research integrity, which is indeed where the action lies. But I’d like to dwell at a level of abstraction for the next 15 minutes, to get us to recognize a responsibility we have as social psychologists: to provide leadership on this issue for all of science. If we recognize this duty as a unique one, we will come forward with greater enthusiasm and confidence than if we believe we must undertake a dreary bunch of list checking because we are being scrutinized by some research police. And much of the hard work of changing behavior will be undertaken as a matter of a general moral calling rather than as a response to a few bad apples.

After 50 years, the single most famous experiment in social psychology remains the demonstration that good people, ordinary people like ourselves, are capable of harm of a sort that we would not only not condone in others, but would find horrifying if we were to see such behavior in ourselves.

More than any other science, it is our science that has shown that we, as a species, are not the beings we think we are.

Why did social psychologists even study such behavior as obedience to authority? Why has this particular aspect of human nature – the disconnect between good intentions on the one hand and far less good behavior on the other – been so consistently a theme in our science these past 50 years? It seems to me that from the very beginning, the pioneers believed in an idea that the Russian playwright Anton Chekhov captured when he said: “Man will become better when you show him who he is.” When revealed in a humble and non-patronizing way, our discoveries do have the possibility of creating an understanding of the deepest aspects of our nature. We believe that such understanding, even of surprising facts about human nature, can and will change us, because that’s what our big fat prefrontal cortex is for.

But it is also incumbent on us to ask how long we are willing to wait for each Chekhovian person to come to see herself accurately and become a better person. In our classrooms, the hardest time we have is trying to persuade newcomers to the field that they – not somebody in another time or place, not the person sitting at the other end of the classroom, but they themselves – are likely to be prone to the errors and shortcomings that our giant amounts of data have shown and newly reveal every day. We try out our best skills at pedagogy and persuasion to get audiences of all sorts to achieve the same glimmer of understanding that we ourselves once did (meager as it is): that they and we must assume that we could be that average Joe at My Lai or Abu Ghraib, the bystander who misses seeing a crisis, the worker and manager who fail to believe they can be bought. We know how hard it is to persuade people on these issues and to find ways to use the classroom to create moments of awareness about the role of situations in determining outcomes.

It is in this context, with the central pillar of social psychological insight as the backdrop, that I would like to speak about the issue of research integrity, which, like many of you, I’ve been following with interest and have viewed as an opportunity to think anew about my own responsibilities as the principal director of a laboratory. I should say upfront that I don’t agree with all the suggestions that are being offered. I worry that, even though well-intended, some of what we might put into place could have the opposite effect of stifling creativity, or that a concern with badges and checking off boxes of ethical thinking will replace and even reduce the kind of reflection about research integrity that is needed every single day in all aspects of research. Imposing rules and regulations sounds worthy as a response to wrongs, but many ills can also result from it. We can turn high-minded people into respondents to questions that suggest they are not trustworthy, and into people whose aim becomes getting past bureaucratic hurdles. This is harmful anywhere, but perhaps especially for those whose life’s work is to think freely, without encumbrances and fear.

“Man will become better when you show him who he is.” -Anton Chekhov

In general, though, I am enthusiastic about many of the discussions that I’ve been hearing about ways to improve research integrity at a quite basic level. I do believe that it is our task to be leaders on this issue, in word and deed, and to assume that role for one reason and one reason only: because our science has given us the unique gift of understanding the constraints on our own minds; of knowing the extent to which we self-deceive, and the mental gymnastics that we undertake in order to behave with bias, from anchoring to xenophobia. It is for this reason that we should speak about research integrity as no other group of intellectuals must: because we have the theories and the methods to demonstrate the ways in which we can be corrupted and are corrupted, and not because we are particularly weak-willed.

Let me, in the remaining few minutes, emphasize just one issue about how we might effect change.

The non-trivial issue we face is what we should do. What should we do, given that we will all sit in this room, nod away, and genuinely agree with the wonderful suggestions of the previous speakers here and earlier this morning – and that this will be the extent of our good behavior? Last night at dinner, my friends and I discussed the many failed attempts to change research practices that have been tried in the past. So far there has been no clear sense of who is responsible for conducting the dialogue and implementing changes (should it be the scientific societies? The granting agencies? The journals? Individual editors? Individual labs? Individuals?). Well-motivated but haphazard attempts have been made over the years. Voices in the wilderness have suggested that we stop reporting p values and report confidence intervals or effect sizes; that we should report experiments that failed and describe exploratory work as such; that every journal should at minimum have supplementary material sections that can sit on a website; and on and on and on.

But human behavior, as anybody who studies human behavior will tell you, is hard to change in many situations that resemble the ones we face here. Let’s take two examples of behavioral change that were sought using similar methods, one that succeeded and one that failed. On the question of donating organs, you and I know what separates the evil countries (where 5-25% of citizens donate their organs) from the Mother Teresa-like countries where nearly 100% donate. We know that it is not what your mother taught you or the political structure of your society that determines your generosity. Rather, as two psychologists so brilliantly showed, the manner in which the question about organ donation is posed determines whether a paltry few donate or nearly all do.

To produce low levels of donation, you say something like this: “If you wish to donate your organs, please mail this postcard in.” To produce high levels of organ donation, you say the same thing with one additional word: “If you do not wish to donate your organs, please mail this postcard in.”

The smart choice that results in the second condition emerges from a clear-eyed observation of a simple fact: human beings don’t mail postcards! The majority of people are not against organ donation; they just don’t get around to doing it. And I think this type of default setting could be created for those aspects of research where there is broad consensus about the clearly right thing to do that we don’t get around to doing. Here, it is our scientific society that should lead, by encouraging journals and their editors to make good practices the default. Providing detailed supplementary materials is an example of a default that could easily accompany all submissions and can be of any length the work deserves.

But I worry that the situation will still not change unless we think at least a bit further; that we use the existing evidence from our own field on changing behavior to guide how we might do this. It’s easy to say, “just set the right default,” but we have another example of default setting that failed. I refer here to Mayor Bloomberg’s hope that, by capping the default soda cup at 16 oz., NYC would consume less sugar water, making his great city even greater. I’m with Mayor Bloomberg on this, and so may be many of you who understand the power of default setting. Yet as you know, the measure failed, because in this case the act of setting a default was viewed as paternalistic and patronizing, an affront to the basic American right to a large soda. “Take your stingy hands off my pitcher of Coca-Cola,” the people and the judge said.

Social Psychology must be at the forefront, because it is the science that has taught us the most about the frailty of moral and ethical decisions.

The difference between the many experiments that show success and failure in changing behavior in such seemingly similar situations may be important to attend to as we discuss how to improve the integrity of research procedures, research reporting, and research archiving. To what extent should improvements be required? (Already, there are many requirements for publishing, so this would not be precedent setting.) To what extent should the options be freely chosen, with built-in reputational rewards rather than a one-size-fits-all requirement?

On the one hand, thinking about research integrity may be more akin to the organ donation case. At base, who doesn’t think organ donation is a good idea? And likewise, who doesn’t want the quality of their research to adhere to the highest standards of ethics? So we should, by setting defaults, be able to shape our behavior to be in line with what we ourselves want and even believe we demand of ourselves. Not so fast!

The organ donor example requires doing nothing for the prosocial outcome to emerge. Changing the standards of research integrity requires doing things, and doing them differently than before. It requires constant vigilance over research procedures and taking actions such as reporting, replicating, and uploading data to sites for public use. That’s the ‘mail the postcard’ condition, and it’s likely to fail.

Is there an example from our past of our behavior changing in the matter of improving research practice? Indeed there is: the creation of IRBs, which did change the nature of research and the steps required to make research possible. I have no doubt that even those of you who are frustrated with your local IRB understand that we must have some form of oversight over our work. We now understand and agree that scientists, like mere mortals, are ego-involved in their own research and, as such, compromised when it comes to judging the merits of their research procedures; that to ask a scientist to accurately determine the cost-benefit ratio of their research for society is tantamount to asking a scientist to vote on which journal their paper should be published in.

The fact about IRBs is that they didn’t come about because scientists called for them. They were imposed on us by lawmakers who came to be suspicious of the potential for harm of social and biological research on human subjects. Five decades after the first IRBs were set up, it should be our hope that questions of research integrity will be debated and adjudicated by us, by members of our own community, rather than waiting for an anti-science member of Congress to do it for us.

The physicist Max Planck, who is credited with originating quantum theory, said something about scientific truths that I believe applies also to truths about the process of doing science and scientific integrity. He said: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

Of course the responsibility for research integrity must always rest with the director of a lab (and if I had more time today, I would have wished to focus on that responsibility). But I think that younger generations may see more clearly that we are the descendants of great people who, for the first time in history, participated in experiments to empirically study the mind, its limits and capacities. They will recognize that social psychology must be at the forefront here, because it is the science that has taught us the most about the frailty of moral and ethical decisions.

Change will also happen because of specific acts of boldness that create places like the Center for Open Science, led by Brian Nosek. I have never removed articles of clothing in public speaking venues. But I will do so now, to feature this t-shirt in support of the Center for Open Science, which exists for us, and to leave you with a simple request: Use it!