Research Lead: Immoral Machines, How to Frame Advice, Selfish Side Effects of Gift Giving, and More

You’re reading the Research Lead, a monthly digest connecting you to noteworthy academic and applied research from around the behavioral sciences.

Come together, right now—a call to protect our collective behavior

Messenger, WhatsApp, FaceTime, Zoom—you’ve almost certainly used technology to keep in touch with faraway friends and family, especially over the past year. But even as technology connects us, it also creates new challenges for our “collective behavior,” which refers to how we influence and are influenced by each other. Consider social media and the speed at which political misinformation and antivaccine propaganda spread throughout the world.

As a recent perspective article points out, many of us now get our information from websites like Twitter and Facebook and, by extension, the for-profit algorithms that run them. The authors argue that to protect ourselves from the impact these algorithms have on our collective behavior, multiple disciplines need to work together. This means, for instance, combining mathematical models with behavioral science to create safeguards to detect and deter radicalization, and to combat misinformation. It won’t be possible with just scientists, either. They will need to work with professionals from fields like law, policy, economics, and international relations to implement protections for our collective behavior. As the authors conclude, “There is no viable hands-off approach. Inaction on the part of scientists and regulators will hand the reins of our collective behavior over to a small number of individuals at for-profit companies.” [Proceedings of the National Academy of Sciences]

“Bad machines corrupt good morals”

Robots won’t be taking over the world anytime soon, but a review article poses a more pressing technological worry: the power of artificial intelligence (AI) to influence our morals. Examining the current literature on how social forces shape our behavior, the researchers find that AI could corrupt our ethics either by nudging us toward unethical behavior or by directly enabling it.

The less worrying news? AI is not currently better than humans at actually changing people’s morals. But by adding a level of anonymity and social distance, AI can act as an enabler for unethical behavior by letting us psychologically dissociate from unethical acts. 

A sales team, for instance, might use AI to plan sales strategies. If the AI uses deception to reach sales goals, well, then the AI is at fault, not the sales team. In other words, we can act in our own self-interest while at the same time believing that we’ve kept our moral standards. The authors finish by outlining a research agenda aimed at improving AI oversight to protect ourselves from its potentially corrupting influence. [Nature]

An illustration of the main roles through which intelligent actors, whether human or AI, can corrupt ethical behaviour, grouped along the left panel for AI in the role of an influencer (role model and advisor) and along the right panel for AI being an enabler (partner and delegate). The main fears and mechanisms attached to each role are summarized. The colour coding indicates the strength of the corrupting force of AI: not reaching human levels yet (green), reaching but not surpassing human levels (yellow), and surpassing human levels (red). Source: Nature.

Advice framing: Encouraging people to do or not to do

Advice, requests, and directives can be presented in two ways: to do something (prescriptive), or to not do something (proscriptive). If you’ve ever cared for a young child, you might have an intuition about which of these framing tactics is more likely to get you to your desired outcome. Researchers recently tested this across five experimental studies, framing advice about excessive internet use, alcohol consumption, general health actions, and red meat intake in terms of what participants should and shouldn’t do. They found that proscription (telling someone not to do something) led to greater reactance—that is, people doing the opposite of what they’re told—than prescription across all five studies. This effect was enhanced when the message came from a more authoritative source. Want someone to do something? Consider framing it in terms of what they should do—and not what they shouldn’t. [Personality and Social Psychology Bulletin]

The selfish aftereffects of gift giving  

Gift giving is often assumed to be a net-positive force in a relationship. A new study adds some intriguing nuance to that assumption: across two pre-registered studies and one field experiment, researchers found that gift giving can have negative side effects, particularly in the subsequent behavior of the giver.

For one, online participants who imagined giving a gift to a romantic partner were more lenient in classifying which of their own behaviors counted as cheating (e.g., sending a flirtatious text to someone other than their partner) than those who didn’t give a gift. Similar results emerged in an online study focused on friendship and in a field study involving real gift giving: gift givers were more likely than non-givers to make selfish decisions at their friends’ expense.

The authors “find that gift giving, a behavior assumed to facilitate stronger relationships—by providing ‘social glue’—also taketh away.” Just like certain medicines, gift giving can be a positive force, but have some less-than-desired side effects. [Journal of Behavioral Decision Making]

The economic ladder is a lot steeper than most white Americans realize

The American Dream: anyone can move up the economic ladder if they just work hard enough. While many Americans may hold this belief, new research reveals that white Americans consistently overestimate the upward economic mobility of Black Americans. Shai Davidai and Jesse Walker argue that this belief may stem from white Americans overestimating how much progress has been made toward racial and economic equality. In an experiment in which the authors made participants specifically aware of the current economic disparities and unique hardships Black Americans face, white participants recalibrated their expectations for Black economic mobility.

The misconception of economic mobility may feed into continued racism. If white Americans are overly optimistic about Black Americans’ ability to move up the economic ladder, they may unfairly blame Black Americans for their economic hardships instead of the economic system we live in. Even more damaging, this economic mobility misconception could lead to further denial about the severity of systemic racism and hinder work aimed at achieving racial equality. [Personality and Social Psychology Bulletin]

Misinformation resilience in India 

Most of the existing misinformation literature focuses on the United States and other developed countries, “where misinformation spreads via public sites such as Facebook and Twitter.” How do you combat misinformation when it’s being spread in encrypted messaging apps like WhatsApp, where no one except the sender and receiver can see or analyze messages? Misinformation campaigns spread across these private platforms have led to mob lynchings and violence in India, making this an urgent question to answer.  

According to evidence from a recent field experiment, media literacy training alone may not be very effective. More than 1,200 people received an hour-long, in-person training, after which there was no significant increase in participants’ ability to identify misinformation. These results point to the resilience of misinformation in India. More research is needed to determine when and in what context these types of interventions work, and what other interventions could be effective against misinformation in India and other cultural contexts. [American Political Science Review; open access working paper]

Let’s get personal, personal—with nudges 

A new review paper suggests a way to make nudges, boosts, and other behavior-change interventions more effective: by taking personality into account. Nudges often focus more on context than on dispositional differences, but there is a strong case for tailoring interventions to personality. Because someone’s personality reflects how they typically react in a situation, interventions targeted to personality should ultimately be more effective. For example, a competitive person may be more likely to perform a behavior if it is central to a game, while the same intervention will be much less effective for someone who is noncompetitive. According to the authors, “Personality should be used not only to understand for whom an intervention is likely to be effective, but also guide how we design interventions in the first place.” [European Journal of Personality]

United Nations embraces behavioral science

The United Nations Secretary-General recently released a guidance note acknowledging behavioral science as a key tool to address current and future challenges, identifying it as critical to achieving the Sustainable Development Goals. The publication encourages U.N. entities “to apply behavioural science throughout the entire process of policy-making and programming to achieve greater effectiveness and efficiency.” The brief note also identifies opportunities, describes capacity-building initiatives to support the application of behavioral science across the organization, and provides implementation resources for U.N. members. It is also accompanied by a broader report about behavioral science application at the U.N. The U.N.’s recent Behavioural Science Week featured both. Read our summary of the event. [Secretary-General’s Guidance on Behavioural Science; U.N. Behavioural Science Report]

Registered reports vs. standard publishing 

It’s well known that papers with null findings—research that didn’t find significant relationships between the variables being tested—can be much harder to publish. Concern among scientists about this publication bias has led to a push for a publication format called the registered report.

In a registered report, peer review and the decision whether or not to publish are based on the methodology and take place before the research is conducted and the results are known. Multiple journals have embraced registered reports as a format, but because they still haven’t become standard practice, a new study examined their effectiveness compared to traditional publishing.

Anne Scheel and colleagues compared results from published registered reports with those from a random sample of traditional hypothesis-testing papers, finding a 96 percent positive-result rate in standard research reports versus 44 percent in registered reports. These results suggest that standard academic incentives in publishing are keeping negative results from high-quality studies out of the scientific record, skewing what we know (and don’t know) about psychological findings. [Advances in Methods and Practices in Psychological Science]