Impossible data in a paper about dishonesty
A group of anonymous researchers has exposed fraudulent data in a widely cited 2012 study. That study claimed that signing at the top of a document decreases dishonesty compared with signing at the end. Those findings were called into question in 2020, when the results failed to replicate. A failure to replicate is not, in itself, massively concerning—good science sometimes fails to replicate, and we can learn from that. The failed replication also prompted the original authors to share the data from the 2012 study for the first time. But recently, the anonymous researchers found that the data from study three of the original 2012 paper—odometer readings purportedly from an auto-insurance company—were impossible. Impossible how? The distribution of miles driven didn’t make sense (picture below), data were duplicated, and the rounding patterns were inconsistent with what you’d expect when people report the mileage on their car. One of the things that tipped off these “data sleuths” was the use of both Calibri and Cambria fonts in the dataset.
All of the study’s original authors agreed with the anonymous researchers’ assessment: the data were fabricated. What remains unclear is why and how they were fabricated, and how the data made it through the authors’ analyses and peer review. The anonymous researchers, in partnership with the data and methods blog Data Colada, provide a step-by-step explanation of how they determined the data were made up. In addition, four of the original authors provided their own perspectives, linked at the end of the article. [Data Colada]
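To get a feel for the kinds of checks the sleuths describe (duplicate records, missing rounding, an implausible distribution of miles driven), here is a minimal illustrative sketch in Python using pandas. The file and column names are hypothetical, and this is not the Data Colada analysis itself, just the general shape of such checks.

```python
# Illustrative forensic checks on a hypothetical odometer dataset.
# Column names ("baseline_mileage", "updated_mileage") are assumptions.
import pandas as pd

df = pd.read_csv("odometer_readings.csv")  # hypothetical file

# 1. Duplicated records: identical rows appearing more than once are suspicious.
print("exact duplicate rows:", df.duplicated().sum())

# 2. Rounding: self-reported mileage tends to cluster on round numbers,
#    so the last digits should NOT be uniformly distributed.
last_digits = df["baseline_mileage"].astype(int) % 10
print(last_digits.value_counts(normalize=True).sort_index())

# 3. Distribution of miles driven: values that look uniform up to a hard cap,
#    rather than bell-shaped with a long right tail, are a red flag.
miles_driven = df["updated_mileage"] - df["baseline_mileage"]
print(miles_driven.describe())
```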
“It’s time to reimagine sample diversity and retire the WEIRD dichotomy”
In Nature Human Behaviour, Sakshi Ghai reflects on the utility of continuing to categorize psychology research as WEIRD or non-WEIRD (Western, educated, industrialized, rich, and democratic). The acronym, introduced in 2010, helped usher in an awareness of a key limitation of existing psychological research: that it drew on a very specific set of participants. How much do we really know about human behavior if we only study a very narrow set of the world’s humans?
In her own research, Ghai has found the WEIRD dichotomy wanting. “The main challenge with the non-WEIRD categorization is that it obfuscates the richness of human diversity within it and masks the extent of underrepresentation,” she writes. “Are all non-WEIRD countries uniformly poor and uneducated? Do we assume that non-WEIRD nations such as Nigeria, Mexico and India are small-scale societies? Do rich and industrialized economies like South Korea and Japan count as non-WEIRD?”
In terms of how to move beyond the WEIRD dichotomy, Ghai admits she has “more questions than answers” at this point. But, for her, the push to more accurately represent the world’s population in psychology research is “about understanding whether the WEIRD lumping might be inadvertently restricting psychology’s potential to produce robust insights into human nature.” [Nature Human Behaviour]
Philadelphia’s vaccine lottery did not boost vaccination rates
A team of researchers reports the results of a preregistered effort to increase vaccine uptake in Philadelphia via a vaccine lottery (working paper). The “Philly Vax Sweepstakes,” conducted in the spring and summer of 2021, did not significantly increase vaccination rates in the city.
The researchers, in collaboration with the City of Philadelphia, attempted to incentivize eligible adults to get vaccinated through a lottery with cash prizes ranging from $1,000 to $50,000. They employed a “regret lottery,” meaning only adults who had received their first vaccine dose before a drawing were eligible to win. They also tried to address low vaccination rates in certain zip codes by boosting the odds of winning for residents of those zip codes. The lottery had no discernible impact on vaccination rates, the authors conclude. In their discussion section, they explore why and offer advice to policymakers. [Social Science Research Network]
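To make the mechanics concrete, here is a toy sketch of a regret-style drawing with boosted odds for targeted zip codes, written in Python. The residents, zip codes, odds multiplier, and prize value are all hypothetical; they are not the parameters used in the Philly Vax Sweepstakes.

```python
# Toy regret lottery: everyone is entered, residents of targeted low-vaccination
# zip codes get boosted odds, and a drawn winner only collects if vaccinated.
import random

residents = [
    {"id": 1, "zip": "19132", "vaccinated": False},
    {"id": 2, "zip": "19103", "vaccinated": True},
    {"id": 3, "zip": "19132", "vaccinated": True},
    {"id": 4, "zip": "19104", "vaccinated": False},
]
TARGET_ZIPS = {"19132"}  # hypothetical low-vaccination zip codes
BOOST = 50               # hypothetical odds multiplier for targeted zips
PRIZE = 1_000            # hypothetical prize value

weights = [BOOST if r["zip"] in TARGET_ZIPS else 1 for r in residents]
winner = random.choices(residents, weights=weights, k=1)[0]

# The "regret" element: an unvaccinated winner forfeits the prize.
if winner["vaccinated"]:
    print(f"Resident {winner['id']} wins ${PRIZE}")
else:
    print(f"Resident {winner['id']} was drawn but forfeits the prize")
```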
A quantitative history of behavioral economics: 1970s–2010s
In a new working paper, Alexandre Truc attempts a quantitative history—applying statistical tools to history—of behavioral economics over the past 40 years. Who were the field’s central figures, and how did their influence change over time? How did different disciplines contribute to the evolution of behavioral economics? What were researchers focusing on, and when—e.g., heuristics, social preferences, behavioral finance, prospect theory?
Truc explores these questions and more by compiling a corpus of behavioral economics studies and then analyzing features like citation networks, author location, and author discipline. Through this analysis, he’s able to map how psychology and economics each contributed to the field, the evolving influence of Kahneman and Tversky’s foundational work, and the changing demographics of researchers in behavioral economics (see figure below). [Social Science Research Network]
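As a rough illustration of the kind of citation-network analysis this involves, here is a toy example using Python’s networkx. The papers, citation links, and choice of centrality measure are invented for illustration; they are not drawn from Truc’s corpus or methods.

```python
# Toy citation network: an edge A -> B means "paper A cites paper B".
# All entries are invented for illustration.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("Thaler 1980", "Kahneman & Tversky 1979"),
    ("Camerer 1999", "Kahneman & Tversky 1979"),
    ("Camerer 1999", "Thaler 1980"),
    ("Laibson 1997", "Thaler 1980"),
])

# In-degree centrality is one simple proxy for a work's influence within the
# corpus: how often it is cited by the other papers in the network.
centrality = nx.in_degree_centrality(G)
for paper, score in sorted(centrality.items(), key=lambda item: -item[1]):
    print(f"{score:.2f}  {paper}")
```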
Introducing “experimental jurisprudence”
In a recent issue of Science, Roseanna Sommers introduces an emerging field of study, “experimental jurisprudence”—the movement to examine core features of the law through an empirical lens. “Experimental jurisprudence examines how core legal concepts are understood by laypeople who know little about the law,” she explains. “Researchers then compare laypeople’s ordinary concepts against their legal counterparts.” Concepts on these researchers’ radar include causation, consent, reasonableness, ownership, and punishment. For instance, Sommers’s own research has revealed a mismatch between the legal definition of consent and how people understand and apply it in practice. Sommers is hopeful that experimental jurisprudence “has the potential to take the field of law and psychology beyond its limited historical role and to establish it as a more central player in contemporary jurisprudential debates.” [Science; open-access]
“Early multidisciplinary practice, not early specialization, predicts world-class performance”
“Does a focus on intensive specialized practice facilitate excellence, or is a multidisciplinary practice background better?” ask the authors of a new meta-analysis on the origins of expertise in sports. Analyzing 51 studies, they found that “(a) adult world-class athletes engaged in more childhood/adolescent multisport practice, started their main sport later, accumulated less main-sport practice, and initially progressed more slowly than did national-class athletes; (b) higher performing youth athletes started playing their main sport earlier, engaged in more main-sport practice but less other-sports practice, and had faster initial progress than did lower performing youth athletes.” [Perspectives on Psychological Science]
Special issue on computational social science
“The availability of big data has greatly expanded opportunities to study society and human behaviour through the prism of computational analyses. The resulting field is known as computational social science and is defined by its interdisciplinary approaches. However, this type of cross-discipline work is intrinsically challenging, calling for the development of new collaborations and toolkits. In this Nature special collection of articles, we explore some of the fundamental questions and opportunities in computational social science.”
Topics include how to meaningfully measure human society in the digital age, and what it means to conduct research in an era when our behavior is shaped by different, continuously evolving algorithms in domains like finance, health care, and law enforcement. [Nature]