The Challenges of Regulating AI and the Role of Behavioral Science

Laws move slowly; technology moves fast. That’s one challenge of regulating AI—sometimes the damage is already done by the time lawmakers pass legislation. Consider the European ban on using images scraped from the web to build facial recognition databases: it was approved years after a company had already collected billions of pictures of our unsuspecting, unconsenting faces to do exactly that.

The stakes also vary wildly depending on the context where AI is deployed. For instance, how do you regulate a dual-use technology that some might use to sort handbags on an e-commerce site and others might use to direct military drones in a war zone? And regulation requires anticipating and countering risks—but it’s difficult to anticipate risk when we can’t possibly predict every novel situation an AI might encounter, nor how it will behave when it does. During a grisly incident in San Francisco, a self-driving car struck a pedestrian and proceeded to pull over—a reasonable response to a collision, but not when the victim is still in harm’s way.

The versatile, unpredictable, and rapidly evolving nature of AI presents a challenge for regulators tasked with keeping us safe as the technology becomes both more sophisticated and more entrenched in our day-to-day lives. 


Earlier this month, the Behavioral Science & Policy Association convened a panel at its 2024 annual conference to discuss the role behavioral science can play in regulating AI. Ronnie Chatterji, professor of business and public policy at Duke University, moderated the conversation, which featured perspectives from the worlds of business, academia, and government.

The panel included Paula Goldman, chief ethical and humane use officer at Salesforce; Kristian Hammond, professor of computer science at Northwestern University and chief scientist at Narrative Science; and Elizabeth Kelly, director of the U.S. Artificial Intelligence Safety Institute.

The core question that Goldman, Hammond, and Kelly are grappling with in their respective domains is how to mitigate harm without hampering innovation. All three agree that behavioral science is essential to these efforts. “This is not a problem with just the machine,” Hammond said. “It’s a problem with how the machine interacts with us.” 

We’ve curated a portion of their conversation below. You can watch the full discussion here.

The transcript has been edited for clarity and brevity. 


What about safety and AI keeps you up at night?

Paula Goldman, Salesforce: I spend a lot of time thinking about how to apply safety to the world of AI agents. We’re moving from generative AI to agents—going from generating content to taking action on our behalf, to interfaces where you can ask a million different things and a million different actions result. How do you know in advance, when you ask for an action, what it’s going to look like? If one prompt is going to launch a million emails, for example, how do you check the quality of that in advance? And then how do you check it ex post?

Kristian Hammond, Northwestern: I worry that we’re going to end up trying to solve the wrong problems. There are some really flashy AI fears, but the thing I worry about is that if we ignore the genuine reality of how this impacts individuals and groups in society, we’ll end up with, “Oh, we have regulations around transparency,” “We have regulations around explanation,” “We have a focus on being responsible,” but without actually getting into the concrete places where there are genuine harms. The fact is that there is a rise in depression among young women ages 13 to 23. There’s a rise in online addiction. We allow the production of false pornography that’s humiliating women across the country, and we’re like, “Well, we don’t know what to do.”

Let’s focus on the places where there are real harms because they are rampant. The thing I genuinely worry about is that we’ll focus on, you know, evil drones blowing people up instead of the fact that we are creating a nation of people who are being humiliated, addicted, and pushed into depression, and we’re not doing anything about it.

Elizabeth Kelly, U.S. AI Safety Institute: I totally agree with what Kris said, but I push back a little on the “not doing anything about it,” because federal agencies are pretty hard at work trying to make sure they’re addressing a lot of these harms. And there’s honestly more that Congress needs to do, and we were very clear about that. 


The thing that keeps me up at night is just how quickly this is moving. If the technology is evolving exponentially, we have no idea what 2025 or 2026 will look like. It’s hard to say these are the harms we should anticipate. And I think it’s even harder to say that we as policymakers, we as government, will be able to stay on top of it.

We’ve seen the global community move pretty quickly for government. [The executive order] came together in a couple of months, as did the G7 Code of Conduct. This is light speed for government, but it’s still slower than the technology. And for all the reasons that we’ve talked about, we’ve got to stay ahead of it.

What else is on your mind? 

Paula Goldman, Salesforce: I spend a lot of time in this AI bubble. When I step out of it and talk to someone, like a friend I haven’t talked to in a long time, I hear a lot of fear, and honestly, a lot of mysticism about AI. I think it’s incumbent on all of us to break that down and to give people a mental model for how to interact with AI. How do we build that into these systems? Accounting for not only the strengths and weaknesses of where AI is right now and where it’s going, but also human strengths and human cognitive biases. And that’s, I think, where the magic is. That’s where we unlock not only avoiding harm with AI but actually using AI for good.

Kristian Hammond, Northwestern: We have to embrace the notion that this is sociotechnical, that this is not a problem with just the machine. It’s a problem with how the machine interacts with us. And that means we have to understand and admit who we are and how we’re hurt, and realize that you’re not going to solve the problem by telling people to act differently. You’re going to solve the problem by making sure the machine is built so that when people do what people do, they don’t hurt themselves.

Elizabeth Kelly, U.S. AI Safety Institute: Agreed, and that’s why my leadership team includes both an anthropologist and an ethicist. For me, the question is: How do we shift away from AI that is easily monetized, that produces a lot of the harms that Kris has talked about, to AI that is actually able to tackle a lot of our most pressing societal problems? Drug discovery and development, carbon capture and storage, education. How can we work together to shift the narrative?


Disclosure: BSPA is an organizational partner of Behavioral Scientist. Organizational partners do not play a role in the editorial decisions of the magazine.