Once a field that rarely strayed from the lab, behavioral economics has become a means of answering all sorts of real-world questions. When do investors take financial risks? How does discrimination evolve over time? Do AI agents inherit the biases of their creators?
These particular questions are part of the wide-ranging research program of Alex Imas, a behavioral economist and professor at the University of Chicago who studies how people understand the choices they face. There, he has an office down the hall from Richard Thaler—once a distant role model who helped found Imas’ chosen field, now a friend and collaborator.
Behavioral economics was built on what Thaler called “anomalies,” defined as “empirical observations that are inconsistent with standard economic theory.” For instance, standard economic theory predicts that people will sell a bottle of wine for the same price they’d buy it for (the value shouldn’t change based on whether you already own it). It also predicts that the more money that’s inside a lost wallet, the less likely people will be to return it (higher incentives to keep it).
But in both cases, people often do the opposite. They refuse to sell a cherished bottle of wine for $200 but wouldn’t dream of buying a bottle for the same price. And the more money inside a lost wallet, the more likely people are to return it.
In 1987, Thaler began documenting anomalies in a controversial and widely read column in the Journal of Economic Perspectives. That column evolved into a 1992 book titled The Winner’s Curse, and, despite the relentless skepticism of many veteran economists, the momentum continued—resulting in a Nobel Prize for Thaler and a new field called behavioral economics. Now, 33 years later, Thaler and Imas have teamed up to write a new edition of The Winner’s Curse. Their goal was to review the original claims about anomalies and update the record with what we’ve learned since.
I recently sat down with Imas to talk about the book. How do biases discovered in the lab using pens and mugs play out in housing markets and stock exchanges? Has behavioral economics been used more often to exploit people than to empower them? And a big question that seems simple but isn’t: What makes a choice difficult?
Our conversation has been edited for length and clarity.
Heather Graci: This book is a then-and-now look at behavioral economics. After examining the last 30 years of research in the field, what change stands out to you the most?
Alex Imas: The biggest change in behavioral economics has been the move [from the lab] to the field, I think. A lot of the original anomalies that were documented in the “Anomalies” columns and then put together in the original Winner’s Curse were lab experiments. There are a couple of examples of using stock market data and things like that, but primarily, things like prospect theory and the endowment effect came from students making decisions with relatively low stakes, sometimes with no stakes at all. A lot of the pushback against these sorts of studies by economists was, “We actually do not care at all about these students. We care about market participants making consequential decisions.”
The reason that behavioral economics, in our view, has become so successful in economics more broadly is this move to the field. In a lot of the updates, we show, “Look, here was the original result—here’s the endowment effect with cups and pens. Now here’s the endowment effect in the housing market, here’s the endowment effect for traders trading options.” These are situations where we know that people know what they’re doing, they have a lot of money on the line, they have plenty of opportunities to learn, and yet these anomalies are still relevant.
Over the last few decades, some of the ideas in the “Anomalies” column have begun to influence products and policies. Research on inertia and mental accounting led to programs like Save More Tomorrow, for example. I’m curious where you think that hasn’t happened: where the evidence is robust but policymaking hasn’t caught up.
We have plenty of evidence where behavioral economics has been used to hurt decision-making and exploit behavioral biases, but there’s been very little policy that takes that seriously. In my view, the biggest practitioners of behavioral economics have not been nudge units and policymakers; they have been private firms trying to use behavioral economics to extract more from consumers. And that is something that (a) hasn’t really been addressed by policy, and (b) isn’t really in the academic work either. A lot of the academic work is still focused on, “Here’s a program to improve decision-making because of people’s biases,” rather than, “Look, people are being exploited explicitly through their behavioral biases, and we should try to curb that.”
Why do you think that is?
Companies are hard to regulate—you need a burden of proof. In economics, that burden takes the form of a welfare criterion for government or policy intervention, and the bar is set high. This is a good thing.
At the same time, sometimes the bar is a bit too high. The welfare criterion is often something like the Pareto criterion, under which no one can be made worse off by the policy. Maybe government intervention should be allowed even if the company is worse off, as long as all of the customers are better off, or something like that.
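[In symbols, as an illustrative sketch rather than notation from the book: the Pareto criterion approves a policy only if $u_i^{\text{new}} \ge u_i^{\text{old}}$ for every party $i$, with strict inequality for at least one $i$. The relaxed criterion sketched here would permit intervention whenever the customers’ utilities rise, even though the firm’s falls.]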
So there’s a higher burden for government intervention in markets, which is both a good thing, and in this case, potentially a bad thing. Here’s an example: In 2011, Facebook wanted to expand by getting more and more users on the platform. How do you get more and more users?
It used to be the case that you added people on Facebook. You’d say, “Oh, hey, Heather, come to Facebook, be my friend.” What Facebook did in 2011 was, basically, exploit the default effect. They said, “The default is that all of your contacts will be uploaded when you sign up.” And you had to go through this byzantine process to make sure that didn’t happen, which nobody did, and this exploded the network. And on top of that, people weren’t just invited; they were told that a particular person was inviting them. They were playing with the social element of behavioral economics too. And this is just the tip of the iceberg in terms of what tech companies are doing.
How do you regulate against that? It’s not clear. It’s hard to say, “Don’t use the default effect.” Why? Maybe people want to have all their friends added automatically. It becomes very difficult, but I think this is where the work needs to be.
In writing this book, you spent a lot of time reflecting on the work of the behavioral economists who came before you. In the same way that behavioral economics has changed over the last 30 years, have behavioral economists changed too? Whether in the questions they’re asking, the problems they’re concerned with, or their approach to science—what lessons could researchers today learn from them?
Behavioral economics started as an interdisciplinary field, and then it lost that for a long time. For about 20 years, most people working in behavioral economics were economists by training—there were very few psychologists contributing, with notable exceptions like Eldar Shafir and George Loewenstein (who’s very much both). I think what researchers in behavioral economics today can take from the early work is to be very open-minded and to try to be interdisciplinary.
Was there anything that you and Richard disagreed on while you were working on the book?
Not too much. But I think we disagree on the importance of this new research looking at the cognitive foundations of behavioral anomalies: things like complexity, attention, and memory. I’m not even sure if we disagree, necessarily, but I’m a strong proponent of this research, and I think Richard is a bit less of one.
Why is it important to understand the cognitive foundations of behavioral anomalies?
First, because it connects us back to psychologists. But second, when you don’t know the cognitive foundations of these anomalies, it’s really hard to predict them. It’s really hard to say, “When are people going to be more or less loss-averse? When are people going to be myopic, and when are they going to be super forward-looking? When are they going to be narrow-bracketing, and when are they going to be broad-bracketing?”
You have all of this variation in the anomalies—some anomalies even reverse depending on the context. Thinking about how these anomalies are generated by cognitive constraints allows you to set these things up as a maximization problem, where people are trying to maximize their outcomes as they perceive them. But they can’t attend to everything, they can’t remember everything, so they need to do what they can with what they have.
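[One common way to formalize this intuition, sketched in the spirit of rational-inattention models rather than taken from the book: the decision-maker solves $\max_{a} \mathbb{E}[u(a, \omega) \mid s]$, picking the best action $a$ given a signal $s$ about the true state of the world $\omega$, subject to a cognitive budget $I(s; \omega) \le \kappa$ on how much information can be attended to or remembered. Anomalies then show up not as failures of maximization but as optimal behavior under a binding constraint $\kappa$.]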
And what would the counterargument be?
That it potentially introduces a ton of degrees of freedom. When there are a lot of different parameters, the models do become more complicated. But the hope is that this is a journey. With better tools like machine learning and AI, we can have a lot more data on what these constraints are, so we can predict where they bind and eliminate some of these degrees of freedom.
The original edition of this book came out 33 years ago. Imagine another edition 33 years from now. Given the trajectory of the field that you’ve documented in The Winner’s Curse, what do you imagine the updates to the updates will be?
The hope is that there is no edition in 30 years—the edition is the textbooks that we have in economics, because they’ve been updated. Richard’s hope was that behavioral economics would cease to exist as a field; it would just become economics. That’s still the hope—that economics will have cognitive foundations, and we can make very good predictions, and this is all in the textbooks.
Now, do I think that’s going to happen? I honestly don’t know. There’s a paradigm shift happening on the AI, machine learning, and computer science side. With the availability of new data, and the ability to analyze that data, I don’t know what economics is going to look like in 30 years. I don’t know what behavioral economics is going to look like. Maybe economics is not going to exist as a field in the way we think about it today, to say nothing of behavioral economics.
How do people think that AI might affect behavioral economics?
There are two views on this. One view is that behavioral economics is going to be completely irrelevant. Everybody will have their own super-smart AI agent; that agent will make super-rational decisions for them, so they won’t make any mistakes and there won’t be any biases to study. We won’t need behavioral economics anymore.
I think that’s not correct. I have research coming out showing that these AI agents inherit the biases of people who control them. You can show this empirically. So I don’t think we’re going to be in a world where behavioral economics becomes irrelevant.
In my view, behavioral economics is more relevant now than it was 30 years ago. Even though the promise of tech was that it would be less relevant, we have seen the opposite.
It could be true that in 30 years, tech and AI will have led to more inequality and more exploitation of biases: people who have resources will be able to use these tools effectively to get more resources, while people who don’t know how to use them effectively (because they haven’t been trained to do so) will end up in worse circumstances or be exploited more. That’s a situation we could end up in, and it’s really up to us where we go.
As long as these AI agents are being made by for-profit companies, I can’t imagine that there will ever be a guarantee that these super-smart, perfectly rational AI agents are aligned with your best interests.
Is there more money in solving problems or in exploiting them?
Exactly.

At the end of the book, you throw down a gauntlet to other behavioral economists. You write:
“We pose only one problem, and it comes with a warning: It may end up being too difficult to make real progress. Furthermore, there is a self-referential aspect to it. What we’re interested in is this: What makes a choice difficult?”
Why is this an important question for researchers to ask? How might the field, and the world, change if we answer it?
We are trying to pose the question: Why do these anomalies emerge in the first place? A lot of these anomalies—and this goes back to Herb Simon—are a response to complexity. So we’re trying to understand what decisions people find complex.
This is the first-order question for predicting when anomalies are going to be larger, when they’re going to be mitigated, when people are going to look like they’re acting rationally, and when they’re going to look super messed up and super noisy and super anomalous and super biased. This is all about being able to map a situation as complex or not.
This is a first-order question, and we really don’t have a full understanding yet. We’re starting to get one. Ryan Oprea, Ben Enke, and a bunch of other behavioral economists are actively working on this question—I don’t want to say that nobody’s working on this. It’s just the early days.
Do we answer this question by looking at where people get things wrong?
That’s part of it. But it’s also very difficult to translate that into notions of complexity. There are notions of complexity in computer science, such as the number of bits required to process a piece of information. But think about a common problem that people get wrong systematically, the bat-and-ball problem: “A bat and a ball cost $1.10 in total, and the bat costs $1 more than the ball. How much does the ball cost?”
This is a very simple problem in terms of computer science, but people get it wrong. They answer quickly that the ball costs 10 cents, but it doesn’t. It costs 5 cents. And they do this either because there’s something they’re finding complicated, or because they don’t realize that the problem is more complicated than it looks. People are very confident in their answers, and if you give them high incentives, they will give you the same answer; they will not ask for help.
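[To spell out the arithmetic: if the ball costs $b$, the bat costs $b + \$1.00$, so $b + (b + 1.00) = 1.10$, which gives $2b = 0.10$ and $b = \$0.05$. The intuitive answer of 10 cents would make the total $\$0.10 + \$1.10 = \$1.20$.]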
There are a lot of seemingly simple problems that people get wrong, so the notion of complexity needs to capture that.