My (Rohan’s) dad claims that “you guys can find anything you want in your research studies.”
This claim echoes a steadily increasing cynicism regarding scientific research. Having read enough social science articles that fit his description—articles in which anyone can reaffirm their political biases by manipulating a few variables to “prove” that the country is (statistically) significantly better off when their party is in charge—we found it hard to quibble with him.
Recently, the soaring trajectory of science skepticism seems to be rivaled only by global temperatures. Empirically established facts—around vaccines, elections, climate science, and the like—face potent headwinds. Despite the scientific consensus on these issues, much of the public remains unconvinced. In turn, science skepticism threatens our health, the health of our democracy, and the health of our planet.
The research community is no stranger to skepticism. Its own members have been questioning the integrity of many scientific findings with particular intensity of late. In response, we have seen a swell of open science norms and practices, which provide greater transparency about key procedural details of the research process, mitigating many research skeptics’ misgivings. These open practices greatly facilitate how science is communicated—but only between scientists.
Given the present historical moment’s critical need for science, we wondered: What if scientists allowed skeptics in the general public to look under the hood at how their studies were conducted? Could opening up the basic ideas of open science beyond scholars help combat the epidemic of science skepticism?
Intrigued by this possibility, we sought a qualified skeptic and returned to Rohan’s father. If we could chaperone someone through a scientific journey—a person who could vicariously experience the key steps along the way—could our openness assuage their skepticism?
Over the next couple of weeks, we walked him through each step of our lab’s next study. After explaining preregistration, we described how publicly posting our hypotheses and planned analyses before looking at our data served as a pact to share all of those results in any final articles we wrote, whether or not our hypotheses were borne out. We shared the study materials we would later show our participants. We discussed the experimental design. We talked through his questions. We even laughed at his dad jokes (“Qual-trics? Is that what you use to trick your participants?”).
To our chagrin, we then proceeded to conduct one of the messiest, ugliest data collections in lab history. The anticipated 120 student participants atrophied to 20. After random assignment, we learned that control group students saw the treatment materials in class. Exigencies of the school’s schedule wreaked havoc on the timing of the procedures and data collection.
We also explained these events to our skeptic: because field experiments require certain conditions on the ground in order to work, some studies fail to adequately test their stated hypotheses and thus produce no usable knowledge. This study was headed straight to the circular file. Our skeptic nodded his understanding.
Thinking that a comparison might be helpful, we returned to our skeptic to walk him through a more successful study from a few months earlier (the same study that had sparked the “you can find anything you want” comment). Although the process and open science practices were the same (preregistration, open materials, open methods, etc.), we noted a key contrast between the two studies.
Because this study’s procedures held up and we had faith in the results, we would try to publish the work in an academic journal. If it passed vetting by other academics, it would enter the scientific record (rather than the circular file). A small subset of published studies might be interesting enough to get picked up by a reporter, at which point they might land on his radar screen.
So what did he think? Did being open about the process of open science catalyze any hints of attitude change?
First and foremost, he upheld his skeptic bona fides. He still found the topics obvious—he “knew” what the results were going to be. Our study materials and survey questions clearly exposed our pro-environmental bias, which remained objectionable to him.
On the other hand, his observations about the scientific process struck a different chord. He valued seeing our preregistration, survey materials, and data because they allowed him to see what really goes into a study. As a skeptic, these tools helped him adjudicate which studies built from a strong base and which seemed baseless. He didn’t have to rely on our word that our materials passed the sniff test; he could sniff them himself. Being able to dive deeper into our studies’ processes allowed him to acknowledge that perhaps we couldn’t say anything we wanted.
In short, he did not love our choice of topics, our study materials, or the phrasing of our survey items. Fair enough—reasonable people can disagree about which questions warrant empirical investigation, which research methodologies are preferable, and which goals interventions should pursue. However, he seemed less willing to endorse his prior assertion that, as scientists, we have full freedom to write up whatever we want.
So what might open science advocates learn from this? Though far from scientific, our case study does generate a few reflections:
- Maybe the scientific community has overlooked a big market for open science practices—the lay public, especially the skeptical lay public. For scholars eager to make the world a better place through vaccines, truthful political dialogue, and empirically grounded climate policies, skeptics may be an audience at least as important as fellow scientists. Perhaps open science practices can help start the conversation with that skeptical audience. When members of our lab give public-facing talks, collaborate with practitioners, or speak with policymakers, we now differentiate which findings were preregistered, which are exploratory, and what the difference is. Journals and reporters can make these distinctions too. Over time, if enough researchers and media take this small extra explanatory step, maybe a growing audience of open science connoisseurs will emerge.
- While we did our share of explaining throughout this conversation, we began with a concerted effort to listen. By hearing more about our skeptic’s doubts and concerns, we learned which types of information he was hungry for. Providing that information in a neutral, empathic manner helped him see that we weren’t hiding anything and helped us better understand the concerns others might have. Perhaps large funded projects could complement their advisory boards of experts with focus groups of nonexperts and even skeptics—groups that could help identify skeptics’ potential pain points and the transparency practices that might alleviate them.
- Finally, scientists may need patience and humility about what they can actually control. Like their lay counterparts, many researchers have played the role of the skeptic. Ultimately, nobody can fully control whether a skeptic will be convinced. However, through transparent practices such as removing paywalls, sharing full study materials or research designs, publicly posting data analytic procedures before viewing the data, and so forth, we wonder if researchers might build more trust and boost the odds that skeptical readers make up their own minds and convince themselves.