Contraceptives are available in Sub-Saharan Africa, yet maternal deaths from unwanted pregnancies remain widespread. Refugee agencies support those forced to flee their homes, but don’t always know where they’ll go—or what they’ll need when they get there. AI-powered tutors provide crucial support to kids struggling in under-resourced schools, but may not treat their students equally.
These are the sorts of humanitarian challenges that featured at the seventh annual United Nations Behavioural Science Week earlier this month. Each year, the UN Behavioural Science Group brings together researchers and practitioners from inside and outside of the UN to discuss how to use behavioral science for social good. Practitioners are exposed to the latest research that could inform their work; academics glimpse how their ideas play out amid the chaos of the real world. And everyone learns about projects happening beyond their focus area. Experts in healthcare, finance, education, peace and security, and beyond share a common language—and common solutions—in behavioral science.
This year, technology was a central theme. Panelists from organizations like UNICEF and the World Bank joined academic experts in behavioral science, data science, and AI to discuss how thoughtful, behaviorally informed technologies can bolster global development and aid efforts.
I’ve curated three sessions from the week that capture the different ways this is happening. Digital assistants that boost the capacity of health care workers or teachers. Predictive models that help aid agencies send the right resources to the right regions. And evidence that just as AI can exacerbate bias, it can mitigate it too—as long as we understand how it intersects with different cultures as it’s deployed around the world.
Digital assistants add capacity in crucial situations
Sometimes there just aren’t enough teachers, doctors, or aid workers. We might have vaccines, but no one to administer them. We might have the textbooks, but no one to teach from them. How can we stretch our resources further when manpower is finite?
Stanford economist Susan Athey showed how incorporating digital tools into aid programs can help. She presented a project that her team conducted at a hospital in Cameroon, where many women report wanting to avoid pregnancy but few use contraception. Unwanted pregnancies often result in complications that endanger both mother and child. When Athey’s team equipped nurses at the hospital with AI-powered assistants to help structure the conversation about contraception, including personalized recommendations based on each patient’s needs and preferences, uptake of long-acting reversible contraceptive methods like IUDs and implants tripled.
In Sub-Saharan Africa more broadly, the WHO estimates that 1 in 40 girls who are 15 years old today will eventually die from a cause related to pregnancy or childbirth. Scaling interventions like Athey’s can help save those lives.
“AI has the potential to augment humans,” Athey explained. “Any place where we have a scarcity of human teachers, coaches, service providers, doctors, nurses, or any place we have a bottleneck from expensive people’s time, AI and digital technology can help make those people more effective.”
Agent-based modeling to tailor support for refugees
When refugees return home, how will aid agencies know where to direct their efforts? And how will they know what specific aid is needed in different locations—water and sanitation in one, setting up schools in another?
Agent-based modeling is a tool to help policymakers make these sorts of decisions by simulating how people are likely to behave in different situations. Innovation officer and data scientist Rebeca Moreno Jimenez explained how one team at the UN Refugee Agency (UNHCR) uses agent-based modeling to understand how to support refugees who have fled conflict-ridden regions like Ukraine, Myanmar, and Somalia, as well as those returning home after weeks, months, or years away. (Moreno Jimenez’s talk begins at 29:30 in the video below.)
The team at UNHCR starts by building a dataset that contains the key variables they think might influence the behavior they’re interested in modeling. The dataset is often a product of their own data-gathering efforts combined with data from other organizational, academic, and local sources. With the dataset in hand, they simulate how people might behave given different levels of those variables.
Finally, they compare the simulated behavior to behavior in real time—if their model doesn’t match what’s happening on the ground, they’ll incorporate new or updated information to get closer to reality. The closer they get, the more effective aid agencies can be in sending the right resources to the right places at the right time.
Currently, a team at UNHCR is working on modeling the behavior of people who are returning to Ukraine after fleeing during the war with Russia. Their dataset contains sociodemographic information about returnees’ ties to Ukraine—details like whether they still have family in the region or whether they own property. These kinds of variables, they reason, are likely to shape both the region to which refugees return and their needs once they arrive. The team has built similar models to help curb the spread of COVID-19 infections in an overcrowded refugee camp in Bangladesh, and to anticipate where internally displaced Somalis are likely to go after climate- and conflict-driven evacuations.
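To make that simulate-compare-update loop concrete, here is a minimal sketch of how such a model could be structured in code. The region names, weights, agent attributes, and “observed” counts are all placeholders invented for illustration; this is not the UNHCR team’s actual model.

```python
import random

# Minimal agent-based modeling sketch of refugee returns, assuming hypothetical
# regions, variables, and weights. Illustrates the loop described above:
# build agents from key variables, simulate their choices, compare to observed data.

REGIONS = ["Region A", "Region B", "Region C"]

def simulate_return(person, weights):
    """Pick a return region for one simulated returnee based on their ties."""
    scores = []
    for region in REGIONS:
        score = weights["base"][region]
        if person["family_in_region"] == region:
            score += weights["family"]    # family ties pull people toward a region
        if person["owns_property_in"] == region:
            score += weights["property"]  # so does owning property there
        scores.append(score)
    return random.choices(REGIONS, weights=scores)[0]

def run_simulation(population, weights):
    """Simulate where every agent returns and count arrivals per region."""
    counts = {region: 0 for region in REGIONS}
    for person in population:
        counts[simulate_return(person, weights)] += 1
    return counts

# Synthetic population: each agent may have family or property in some region.
population = [
    {"family_in_region": random.choice(REGIONS + [None]),
     "owns_property_in": random.choice(REGIONS + [None])}
    for _ in range(10_000)
]

weights = {"base": {r: 1.0 for r in REGIONS}, "family": 2.0, "property": 1.0}

simulated = run_simulation(population, weights)
observed = {"Region A": 4200, "Region B": 3300, "Region C": 2500}  # placeholder field data

# Compare simulation to what's happening on the ground; a poor match would
# prompt the modelers to revise the weights or add new variables.
for region in REGIONS:
    print(region, "| simulated:", simulated[region], "| observed:", observed[region])
```

In practice, the calibration step is the heart of the method: the gap between simulated and observed arrivals tells the modelers which assumptions to revisit before the model is used to guide aid decisions.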
The limits of technology that doesn’t take culture into account
What happens when new technologies are built with a limited understanding of the humans who will use them?
“If AI is to truly serve humanity, it has to be informed by behavioral sciences,” says linguist Anna Korhonen, who co-directs the Centre for Human-Inspired Artificial Intelligence at the University of Cambridge. “We have all seen the limits of current AI systems . . . They struggle to understand our cultures, motivations, emotions, and social norms.” This, she argues, is why we are left with “systems that may seem technically very strong, but are actually socially misaligned.”
This misalignment is particularly glaring when technologies are applied in settings that are culturally distinct from where they were developed, like chatbots in the Global South powered by large language models trained on data from the Global North.
The Mind, Behavior, and Development Unit (eMBeD) at the World Bank is working to understand what biases might be baked into these technologies. Michelle Dugas, a behavioral scientist in the unit, shared results from a project in which ChatGPT was used as a tutor. When ChatGPT was prompted in English to assign practice problems to a hypothetical student, it assigned more difficult problems to female students than to male students. When it was prompted in Hindi, the effect reversed: male students received more difficult practice problems than female students.
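Below is a minimal sketch of how an audit along these lines could be set up: vary the student’s stated gender and the prompt language, and compare the difficulty of the problems the model assigns. The prompt wording, difficulty scale, and the stubbed-out ask_tutor_model function are assumptions made for illustration; this is not eMBeD’s actual protocol.

```python
from itertools import product
from statistics import mean

# Hypothetical bias-audit sketch: ask a tutoring model to assign a practice
# problem under each language x gender condition, then compare mean difficulty.

PROMPTS = {
    "English": "Assign one math practice problem to a {gender} student. "
               "Rate its difficulty from 1 (easy) to 5 (hard).",
    # A real audit would phrase the same request in Hindi; left as a
    # placeholder here rather than guessing at the exact wording.
    "Hindi": "<the same request, written in Hindi, for a {gender} student>",
}

GENDERS = ["female", "male"]

def ask_tutor_model(prompt: str) -> int:
    """Placeholder for a real LLM call; should return the assigned difficulty."""
    return 3  # replace with an API call plus parsing of the model's answer

def run_audit(trials: int = 50) -> dict:
    """Average assigned difficulty for each language x gender condition."""
    results = {}
    for language, gender in product(PROMPTS, GENDERS):
        prompt = PROMPTS[language].format(gender=gender)
        results[(language, gender)] = mean(ask_tutor_model(prompt) for _ in range(trials))
    return results

if __name__ == "__main__":
    for (language, gender), avg in run_audit().items():
        print(f"{language:>7} / {gender:<6} mean difficulty: {avg:.2f}")
```

In a real audit, the stub would be replaced with a call to the model being tested, and the gap in mean difficulty between conditions would be the quantity of interest.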
Imagine how these kinds of issues might play out at scale: A fleet of AI tutors deployed in India, where girls are already far less likely than boys to receive an education, might systematically favor their male students. Those same AI tutors might favor their female students when deployed in the U.S., where more women than men earn college degrees.
“Without human grounding, AI can still work in many simple and low-risk settings,” Korhonen allows. “But it will often fall short in high-stakes areas like decision-making, policy, and justice, especially when we bring it into socially complex contexts.”