The 21st Century Skinner Box

This article is part of our special issue “Connected State of Mind,” which explores the impact of tech use on our behavior and relationships.

The research question seemed like a cross between science fiction and a conspiracy theory. I could hardly believe that we were asking it. But the world of our psychological forefathers had changed, and we had to change our thinking to keep up. In 2013, I worked under the supervision of Robert Epstein, B.F. Skinner’s last doctoral student, to begin a series of behavioral experiments to address an unsettling question: Could a search engine be used to sway the minds of voters?

The idea was simple. Order effects, one of the oldest and most robust effects ever discovered in the psychological sciences, can exert a powerful influence over attitudes, beliefs, and behaviors through the strategic arrangement and presentation of information. This means that you can influence someone’s wine selection, their preference for one type of medical treatment over another, and which candidate they select on a ballot based on the order in which you present them. In the digital world, controlling the presentation of information becomes even easier. By arranging information, web-based platforms can influence which hotel you book, the music you rate, and, on Facebook, the emotional language you use.

Given these findings, and a multitude of similar ones on search engines, what would happen if people were exposed to biased search rankings—search results arranged to favor one political candidate or another?

To answer this question, Dr. Epstein and I created a mock search engine, selected a close election that subjects were likely to be unfamiliar with—the 2010 election for Prime Minister of Australia—and gathered real search results and webpages related to the two leading candidates, Julia Gillard and Tony Abbott. After crowdsourcing ratings of the webpages to determine which candidate they favored, we selected the top 15 webpages favoring each candidate and used these 30 results to craft three sets of search rankings. In one, the results were ranked in descending order by how much they favored Gillard. In the second, the results were ranked in descending order by how much they favored Abbott. And in the third, our control, the results alternated between favoring the two candidates. Subjects were randomly assigned to each group, and the same 30 results were used in all groups; only the rankings varied.
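The three conditions can be sketched in a few lines of code. This is an illustrative reconstruction of the ranking logic described above, not the study's actual materials: the result list, the bias scores, and the function name are all hypothetical.

```python
# Sketch of building the three ranking conditions from crowd-rated results.
# Each result is a (title, bias) pair: bias > 0 favors candidate A,
# bias < 0 favors candidate B, and magnitude = strength of favorability.
# All names and scores here are hypothetical illustrations.

def build_rankings(results):
    # Strongest pro-A first; strongest pro-B first (most negative bias).
    pro_a = sorted([r for r in results if r[1] > 0], key=lambda r: -r[1])
    pro_b = sorted([r for r in results if r[1] < 0], key=lambda r: r[1])

    # Condition 1: descending favorability toward A
    # (pro-A pages first, then the least anti-A pages).
    favor_a = pro_a + pro_b[::-1]
    # Condition 2: the mirror image, favoring candidate B.
    favor_b = pro_b + pro_a[::-1]
    # Control: alternate between the two candidates' results.
    control = [r for pair in zip(pro_a, pro_b) for r in pair]
    return favor_a, favor_b, control
```

The key property is that every condition contains exactly the same results; only their order differs, which is what isolates the ranking itself as the experimental manipulation.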


In 2015, after five experiments involving over 4,500 subjects in the United States and India, we discovered that, on average, when subjects were unfamiliar with the candidates and the election (using the 2010 Australian election on U.S. subjects), we could shift the proportion of undecided voters who indicated that they would vote for a given candidate by 37.1 percent, with shifts as high as 80 percent occurring in some demographic groups (e.g., moderate Republicans).

In our fifth experiment, we explored the impact of familiarity by updating our search engine with new results, webpages, and rankings and conducted our experiment on real, undecided Indian voters in the midst of India’s 2014 Lok Sabha elections, the largest democratic election in history. On average, we were able to shift votes by 20 percent or more, with shifts as high as 72.7 percent occurring in some demographic groups (e.g., unemployed males from Kerala). But the biggest shock was that 99.5 percent of subjects in the India experiment showed no awareness of our experimental manipulation—the biased search rankings. We labeled this phenomenon the Search Engine Manipulation Effect (SEME) and published our findings in the journal Proceedings of the National Academy of Sciences.

The SEME experiments illustrate how simple it can be to influence users without their awareness, even when the stakes are high. Since 2015, we’ve shown that SEME can be somewhat suppressed by alerting users to biased rankings, but the reactive suppression of SEME through alerts did not match the proactive prevention of the effect that we saw in our control group. This suggests the possible efficacy of an equal-time rule (e.g., alternating rankings).

We’ve also shown that SEME extends to other topics, like beliefs about fracking, artificial intelligence, and whether homosexuality is a choice, and other researchers have shown that the effect can influence beliefs about biofuels. In light of recent reports documenting how climate-change deniers found a way to appear at the top of Google searches and other reports documenting how both Google’s “Top Stories” and “Answer Box”—search result components that typically appear near or at the top of the search rankings—have promoted misinformation, the influential power of SEME should give everyone pause. Although search engines are one of the first places that people go to find information, it’s important to note that order effects appear on virtually any platform that ranks content, including Facebook and Reddit. But these effects only tell part of the story.

Online systems are a lot like digital Skinner boxes. Also called operant conditioning chambers, Skinner boxes enabled early behavior scientists to study the principles of animal behavior in a completely controlled environment (e.g., reward and punishment mechanisms in rats and pigeons). Similarly, in online environments, web designers have absolute control over not just every stimulus available to you but virtually all your response options. Yet these environments are interactive and provide you with a sense of control. This leaves you open to subtle influences, like the gamification of work that keeps Uber drivers on the road longer than they would be otherwise. These environments are also often customized for each user, and like the subjects in the SEME experiments Epstein and I conducted, most people appear to be unaware of this personalization.

Unlike behavior scientists of the past, engineers and designers working at companies like Google, Amazon, Facebook, Microsoft, and Apple have enormous sample sizes to draw from, and the nature of digital environments allows them to rapidly adjust their experiments on the fly. The shape and color of the buttons you press, the timing of each notification you receive, and the content of every piece of information that reaches you have often been curated through this data-driven process of mass experimentation. And companies don’t just run these experiments once; they run them over and over again, storing each stimulus and response, customizing reinforcers and schedules of reinforcement to maximize their influence. Over time, this enables companies to predict, shape, and condition each user’s habits and triggers on a scale previously unimaginable. In another SEME follow-up study, Epstein and I found that we could enhance the impact of biased rankings by conditioning subjects to expect that the answers to their queries would always appear at the top of their search rankings.
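The "adjust experiments on the fly" pattern is often implemented with bandit algorithms, which blend experimentation and exploitation in a single loop. Below is a minimal epsilon-greedy sketch of the idea; the UI variants, reward values, and parameter choices are hypothetical, and real platforms use far more sophisticated machinery.

```python
import random

# Minimal epsilon-greedy bandit: a simplified stand-in for the continuous
# A/B-style experimentation described above. Variant names and parameters
# are illustrative assumptions, not any company's actual system.

class EpsilonGreedy:
    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}    # times each variant was shown
        self.values = {a: 0.0 for a in arms}  # running mean reward (e.g., CTR)

    def choose(self):
        if random.random() < self.epsilon:            # explore occasionally
            return random.choice(list(self.counts))
        return max(self.values, key=self.values.get)  # otherwise exploit

    def update(self, arm, reward):
        # Incremental mean: shift the estimate toward the observed reward.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]
```

Each page view becomes a trial: `choose()` picks which button variant to show, and `update()` records whether the user clicked. Over thousands of impressions the system drifts toward whichever variant best elicits the behavior being optimized, which is exactly the reinforcement loop the Skinner-box analogy points at.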

There’s a positive side to the online experiments that we often unwittingly participate in, though. They enable us to find information quickly, connect with like-minded people, and discover content that we’ll enjoy—saving us valuable time that we can spend elsewhere. The problem is, at least for social media and entertainment platforms, we might not be spending that time elsewhere. This may be because, in general, that’s not what these systems are designed to do. Many online systems are designed to keep us engaged, to keep the rat pushing the lever, to hold our attention and guide our clicks. That’s why you can scroll down your Facebook and Twitter news feed for hours without finding the bottom, and why the next video on YouTube and Netflix automatically starts playing after you’ve finished the current one. One of the simplest ways to increase the likelihood of a behavior is to make the behavior easier.

Many online systems are designed to keep us engaged, to keep the rat pushing the lever, to hold our attention and guide our clicks.

Every second we spend on a platform—reading a post, watching a video, rating content—becomes a new data point in our online-behavior profiles, and those profiles are often sold to advertisers. Data is now a commodity. The Economist dubbed it the new oil, and this is why books on designing addictive products are trending among those in the tech industry. Whether we like it or not, we’ve become a part of the attention economy.

The point of all of this is to say that social science has come of age. The Internet has given rise to computational social science—the data-driven interdisciplinary field that leverages computational methods and digital trace data to answer social science questions with unprecedented scale and detail. For example, a methodologically brilliant Facebook experiment on peer encouragement was conducted in 2016 on 48.9 million unsuspecting users. In the experiment, Facebook users were prompted to like or comment on their friends’ posts more often, encouraging those friends to post more frequently. Similarly, in 2011 Facebook researchers ran a 61-million-person experiment using social influence to mobilize voters, prompting 340,000 people to go vote who otherwise wouldn’t have. This led Jonathan Zittrain, a professor of law and computer science at Harvard University, to raise concerns about the company’s capacity for digital gerrymandering.

The problem is, except for a handful of papers that are published in academic venues, this work is largely being done behind the closed doors of private corporations that have virtually no public transparency or accountability. Worse, much of this experimentation is increasingly automated through complex algorithms and artificial intelligence, and former insiders have expressed doubt that these companies can fully grasp the societal impact of their systems. With over a billion active users employing their services billions of times a day, how could they? Computational social science is certainly not without its problems, and as new artificial intelligence technologies, like the Amazon Echo and Google Home, enter the homes of millions, these difficulties are sure to grow.

European Union regulators have taken a more aggressive stance on these issues than their U.S. counterparts. This is most notably evidenced by the E.U. fining Google a record-breaking $2.7 billion for biasing search rankings to favor Alphabet (Google’s parent company) products and services; the E.U. also offered a $12 million contract for researchers to develop tools for monitoring search rankings.

In the U.S., New York City introduced the nation’s first algorithmic accountability bill. This bill targets discrimination and other issues in areas in which algorithms are being used to aid decision making. For instance, in criminal sentencing, the perils and benefits of proprietary and black-box algorithms whispering sentencing advice into a judge’s ear have been debated at length, but the evidence appears to be mixed at best. Facebook, Twitter, and Google are also under legislative pressure for their role in disseminating Russian propaganda during the 2016 presidential election.

However, these fines and legislation may fall short of the magnitude and immediacy of the problems they seek to address. Large corporations can easily absorb fines and spend millions lobbying lawmakers, and regulatory legislation is generally a slow process.


In the legislative interim, there are a few things that might help identify solutions, or at least alleviate the issues we’re facing. Namely, people can show support for organizations like Time Well Spent that are fighting the technology race to monetize and monopolize our attention. People can also voluntarily share their data with algorithm-auditing research efforts, such as Volunteer Science and CivilServant, which seek to understand the dynamics and impact of technology on society through external investigations. For example, you can install a browser extension that detects and alerts you to potential price personalization—when websites like Amazon, Google Flights, or Priceline present you with prices that differ from the prices other people receive for the same product.
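The core of price-personalization detection is a simple comparison: does the price shown in your session deviate meaningfully from a baseline observed by others? The sketch below illustrates that idea under stated assumptions; the function name, the crowdsourced baseline, and the 5 percent threshold are all hypothetical, and real auditing tools are considerably more careful about confounds like currency, taxes, and timing.

```python
# Illustrative sketch of the comparison behind price-personalization
# audits: flag a price that deviates from the median baseline price by
# more than a chosen threshold. All names and numbers are hypothetical.

def flag_personalization(my_price, baseline_prices, threshold=0.05):
    """Return (flagged, deviation) comparing my_price to the baseline median."""
    prices = sorted(baseline_prices)
    mid = len(prices) // 2
    # Median is robust to a few outlier observations in the baseline.
    median = (prices[mid] if len(prices) % 2
              else (prices[mid - 1] + prices[mid]) / 2)
    deviation = abs(my_price - median) / median
    return deviation > threshold, deviation
```

In practice the baseline comes from many volunteers' browsers reporting what they were quoted for the same product, which is exactly why such projects depend on people opting in to share their data.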

But better yet, and given the immense, albeit implicit, trust that the public has placed in these companies with their data—comprehensive records of their thoughts, desires, attitudes, and behavior—it doesn’t seem like too much to ask that their data be put to work to improve and advance our understanding of society and the human condition, that a wide and inclusive bridge be built between industry and academia.

Obviously, this won’t be as easy as simply releasing data. As we learned from AOL’s 2006 faux pas in releasing users’ search logs, privacy and anonymity issues abound. Even when industry only releases the findings of internally run experiments, there’s also the risk of public backlash, as Facebook learned from their voter and emotional contagion studies. It’s also possible that such studies would not pass Institutional Review Board review, the ethics approval that academics must obtain before conducting research involving human subjects.

However, if these technologies are legitimately threatening our democratic process—as suggested by the SEME experiments and the recent propagation of polarizing and fake information during the 2016 U.S. election—then it may be worth the time, effort, and cost to build a framework for widespread, transparent, and accountable collaboration between industry and academia. This could be big tech’s route to redemption.

Ultimately, the spark for this framework may need to come from the social scientists already inside these companies—those who know the magnitude and immediacy of the problems that their platforms could help solve. How might the domain expertise and independent research of their academic counterparts contribute to finding solutions? With such a framework, the many academic communities working on issues like polarization and media bias could spend less time gathering minute fractions of the data that industry possesses to theorize about tech’s impact and instead spend more time working toward real-world solutions. Not only could empowering academics with a solution-oriented approach be pivotal to the advancement of social science but, with global tensions rising and elections approaching in the U.S. and around the world, we’re going to need these solutions as fast as possible.

Further Reading & Resources

  • Epstein, R., & Robertson, R. E. (2015). The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. Proceedings of the National Academy of Sciences, 112(33), 4512-4521.
  • Epstein, R., Robertson, R. E., Lazer, D., & Wilson, C. (2017). Suppressing the Search Engine Manipulation Effect (SEME). Proceedings of the ACM on Human-Computer Interaction, 1(2), Article 42.
  • Harris, T. (2017). TED: “How a handful of tech companies control billions of minds every day.”
  • Stewart, M. G. (2014). TED: “How giant websites design for you and a billion others too.”
  • Diakopoulos, N. (2016). Accountability in algorithmic decision making. Communications of the ACM, 59(2), 56-62.