Staying Smart in a Smart World: A Conversation with Gerd Gigerenzer

In his new book, How to Stay Smart in a Smart World, Gerd Gigerenzer takes a clear-eyed look at how we’re using technology to make decisions—its capabilities and limits.

“Should we simply lean back and relax while software makes our personal decisions?” he asks. “Definitely not.”

Gigerenzer, a psychologist who directs the Harding Center for Risk Literacy at the Max Planck Institute for Human Development in Berlin, has spent his five-decade career studying decision-making. In How to Stay Smart, he takes aim at two opposing camps who share the same assumption—that algorithms know better than humans. There’s the tech-savior camp, who claim we are on an inevitable path toward algorithmic superiority and that’s a good thing; only technology can save us from our feeble-minded selves. And then there are those in the doomsday camp, who claim it’s only a matter of time before an artificial superintelligence wipes us out. 

The task Gigerenzer sets out for himself is to articulate why their shared assumption is flawed—there are times when algorithms will perform better than humans and times when they won’t. And these differences matter for how we think about and behave in our relationship with technology. 

Gigerenzer shows that a deeper understanding of what algorithms do and how they’re being deployed can save us from the whiplash between reverence and resignation. The outcome of this understanding, Gigerenzer hopes, is the realization that we have a choice. That we need not blindly trust or fear the algorithms but can “stay smart” even as technology advances. 

“Staying smart does not mean obliviously trusting technology, nor does it mean anxiously mistrusting it,” he writes. “Instead, it is about understanding what AI can do and what remains the fancy of marketing hype and techno-religious faiths. It is also about one’s personal strength to control a device rather than being remote-controlled by it.”

How to Stay Smart covers a lot of terrain: from dating algorithms that promise love, to mass surveillance that promises protection, to the limits of algorithms in an unstable world and the psychology of social media. Throughout, Gigerenzer connects these topics to discussions about human freedom and dignity and reminds us what’s at stake if we give up our personal control. The book, he writes, is “a passionate call to keep the hard-fought legacies of personal liberty and democracy alive.”

This summer I had a chance to speak with Gigerenzer about his new book over video call. He joined from his summer cottage in Germany, I from my office in Prague. In our conversation, we focused on the consequences of the internet’s original sin, mass surveillance, social credit scores, and why what we’re experiencing is less like 1984 than something B. F. Skinner might have dreamed up. We also touched on why understanding what algorithms can and cannot do might help us find the courage to stay smart and stay in control.

Our conversation has been edited for clarity and length.

Evan Nesterak: I want to start with what you call, borrowing from Ethan Zuckerman, the “original sin” of the internet—the ad-based business model. Could you unpack why you also think this was the original sin of the internet?

Gerd Gigerenzer: The fact that the internet turned commercial is not surprising, and it also has its benefits. But this specific form of the advertising model, personalized advertising, requires that Google, Facebook, now Meta, or anyone else in this business gets as much data, personalized data, from you as possible. That’s the critical point. This is commercial surveillance or a capitalist surveillance model. 

So the original sin, in my view, is that the internet turned from the dream of a democratic instrument, where we could finally stop injustice in the world by allowing everyone access to the same information (unbiased information, hopefully), at least in part into something like the opposite: a commercially driven, personalized advertising business model that is unnecessary. And it only leads to this type of surveillance.

The personalized ads business model did a few things to the behavior of companies and people. First, companies were now incentivized to keep you on the platform as long as possible. Second, they were incentivized to build huge databases of personalized information. Can you tell me more about how you think about these effects of the original sin, the personalized advertising model?

Facebook, now Meta, makes 97 percent of its revenue from advertising. You are not the customer; you are the product being sold. It leads, as you say, to all kinds of attempts to keep people on the site much longer than they want, and to recommend other content, such as on YouTube, that is more and more extreme and nonscientific, because it keeps you there longer.

In my book, How to Stay Smart in a Smart World, I use an analogy that helps many people understand the situation. It’s about a coffeehouse. Imagine there is a coffeehouse in Prague, where you live, and it serves coffee for free. Everyone goes there, and all the other coffeehouses go bankrupt. So you have no choice anymore, but you meet your friends there, and you have nice chats. And the coffee is free, wonderful. The only disadvantage is that on the tables where you sit with your friends there are microphones, and there are video cameras on the walls that record every word you say. And they send off the data. It’s being analyzed. And the coffeehouse is full of salespeople who then interrupt you all the time and offer you personalized products. So that’s the situation, in an image, that we are now in; it’s the business model. And it also gives you an idea of how we could get out and what the alternative would be. Namely, we want a real coffeehouse, and we want to pay for our coffee with money, not with our data.

Recently, there have been a number of discussions about personalized interventions using algorithms. The idea is that if we can tailor interventions to specific individuals, they will be more effective. I’m wondering, for those aiming to use algorithms to personalize interventions, is there a cautionary tale in how personalized advertising and personalized data have played out in the development of the internet?

Algorithms are good for certain problems, but not for others. The problems where they’re excellent are well-defined games like Go and chess, where nothing changes, and other stable situations. I call this the stable world principle. If you have a routine application in industry, a well-defined chess game, or something else that doesn’t change over time in an unexpected way, algorithms will be much better than humans.

But in unstable situations, for instance when human behavior is involved, or when you have to predict the future, then they are not good. For instance, have you heard anything about accuracy in predicting the coronavirus? Google tried to predict the flu, but it failed. The flu is an unstable object. 

If someone wants to predict whether a defendant will commit another crime in the next year, that’s a highly unstable and unpredictable situation. Keep your fingers off these recidivism algorithms. Same with predictive policing: it doesn’t work very well. Rather, train the experts and put a lot of knowledge into them.
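The stable world principle can be made concrete with a small sketch. The toy simulation below is an illustration added here, not from the book, and all of its numbers are made up: a simple regression fit while the relationship between search behavior and flu cases is stable predicts well, and its error grows once search behavior shifts, the kind of change thought to have undermined Google’s flu predictions.

```python
# Toy illustration (not from the book) of the "stable world principle":
# a simple model does well while the data-generating process is stable,
# and degrades when the process shifts, as with flu seasons or human behavior.
import numpy as np

rng = np.random.default_rng(0)

# Stable world: searches for "fever" track flu cases with a fixed relationship.
searches = rng.uniform(0, 100, 500)
flu_cases = 3.0 * searches + rng.normal(0, 5, 500)

# Fit a one-variable linear regression (slope and intercept) on the stable data.
slope, intercept = np.polyfit(searches, flu_cases, 1)

# Stable world: predictions from the fitted model are accurate.
stable_error = np.mean(np.abs(flu_cases - (slope * searches + intercept)))

# Unstable world: media panic changes search behavior, but not actual flu cases.
panic_searches = searches * 2.5
shifted_error = np.mean(np.abs(flu_cases - (slope * panic_searches + intercept)))

print(f"mean error, stable world:  {stable_error:.1f}")
print(f"mean error, shifted world: {shifted_error:.1f}")
```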

In the book, you reference psychologist B. F. Skinner’s idea of freedom a few times. How does some of Skinner’s thinking relate to what we’re seeing now, with technology, algorithms, and behavior?

In the doomsday prediction, people compare AI to 1984, the novel. But I think that’s the wrong comparison. The situation is much more like Skinner envisioned it. 

Skinner was the best-known psychologist of his lifetime, and one of the most controversial. And he had an idea that is very different from how most other people think: “Why do I behave this way? Because I wanted to.” So there’s something inside, an inner wish or desire, and then an arrow from that to behavior.

Skinner said that’s an illusion. Our behavior is controlled from outside. It’s about what others want. And this is why he thought the very ideas of dignity and freedom are illusions. We are under the control of stimuli from outside.

Now, going to social media, which didn’t exist in his time: the reason we constantly check whether there is something (news or a response to our own post) is, according to Skinner, that we have been conditioned. There is a reward, maybe social approval, a like or a nice comment. The reward doesn’t come every time, but it comes sometimes, unexpectedly. That’s intermittent reinforcement, and it makes the behavior much more stable than if the reward came every time. If it came every time, you wouldn’t care anymore.

Skinner’s idea that we come under the control of the algorithms others use to control us, that’s a more realistic picture than 1984. Most people in the coffeehouse we talked about, who don’t have to pay, enjoy their time there. And they don’t care that they have no choice but to go there. That’s very different from 1984, where, at least at the beginning, people have to be coerced into proper behavior.

So Skinner would probably like what we are seeing now. But he might not like so much that the goal of all of it is to increase revenue from advertisers. Because he had real ideals, which went beyond the commercial. He really believed that the only way to get war or famine out of the world is to condition people when they are children, so that they exhibit proper and good behavior toward others. And he thought the idea of freedom produces what we have had all along: war, more war, and more crime.

That brings us to something you wrote about in relation to the social credit score in China and what happens in the West. There may be more similarities than we think. What’s happening with China’s social credit system, and with the way platforms are aggregating our data in Europe?

Let me briefly describe the Chinese social credit system, which is in an experimental stage. It’s been tried in about 44 cities, as far as we know. In the US, you have a FICO score, and every country has its own version, which tries to measure your creditworthiness: whether you will pay back a loan, for instance. Now imagine that this score includes not just your credit record but any data one can get their hands on. So your criminal record, whether you’ve crossed the street at a red light, everything you buy. In China, for instance, if you buy baby products, that’s good. If you buy certain video games, bad, you lose points. It covers all the social behavior one can measure, so it’s mostly digital, and also your political attitudes. In China, that would mean that if you enter “Dalai Lama” as a search term, your score goes down.

The next step is that, depending on your score, you get reinforcements if your score is high. That’s the Skinnerian part. You may, for instance, get treatment in a hospital earlier than people who have a lower score. And those with a low score can get any kind of punishment. That includes the tens of thousands of Chinese who were not allowed to purchase plane tickets in the last few years because their scores were too low, or children who are not allowed to go to the best private schools. It’s total Skinnerian control. Also, they bet more on the positive side, on the goodies; like Skinner said, that is more effective than punishment.

The idea, of course, is that people conform to the rules and you get better citizens. China has, like many countries, problems with crime, antisocial behavior, corruption. The interesting thing is that the social credit score also applies to companies. When a company’s washrooms are in good shape, points up; otherwise, down. That’s basically the system.

Many Chinese, as far as we know, think it’s a good system, because the good ones will be reinforced and the bad ones will be punished. In the West, we do have various credit systems. And most people in the West, if they hear about the Chinese social credit system, are appalled. But at the same time, they’re willing to rate everyone else on the internet, or on Uber. It’s like in Nosedive, the Black Mirror episode on Netflix, where the rating is done between people, not by the government.

There are data brokers like Acxiom, which collect data all over the world. I live in Germany, and Acxiom says they have data on roughly 50 million Germans, up to 3,000 data points per person. We do have that kind of social credit data, and it could easily be integrated into a single score. The question is whether we let this happen.

People in Europe or the United States might be appalled when they hear about the social credit score in China. But you write they often aren’t aware of the data that’s being collected about them.

Many people have no idea about the existing degree of surveillance. For instance, most people do not know that a smart TV may record the personal conversations they have in front of it, whether in the living room or in the bedroom.

Even the child’s room is being surveilled. Commercial products like Hello Barbie, an incarnation of Mattel’s Barbie doll, record everything the little girl confides to her beloved doll. Every worry and every sorrow is recorded and analyzed for the purpose of marketing products to the little girl, and the parents are also offered the chance to buy the recordings to spy on their child. Now imagine that the little girl finds out about that: her beloved Barbie doll, Hello Barbie (which got the Big Brother Award), spied on her, and her parents did too. She may lose trust in the world she’s growing up in. But there’s an even deeper scenario. The little girl may not lose trust in the world, because for her it’s natural to live in a world under surveillance.

This starts to get at perhaps more philosophical questions about what it means to be human, in relation to ideas of freedom and autonomy. Or what it means to be a good citizen. It seems we’re touching on much larger questions about our lives and how we live them. Where do you see this going?

My personal take is not paternalism. I think some kind of paternalism will always be necessary. But my vision of people is that we should educate them as far as we can, so that they can make informed decisions themselves. That sounds trivial. But it is not the case. One field I’ve worked in for a long time is health care. Most people are misinformed about, for instance, vaccination or cancer screening, and there is often a conflict between politics and commercial interests. So instead of being informed, people are nudged in a certain direction.

I envision a world of freedom and dignity. Not freedom of choice per se, because that’s not the idea; it’s freedom of informed choice. Choice just means that doors are open for you. But informed choice means that you understand what’s behind the doors.

So my view is to make people strong. At the Max Planck Institute for Human Development, where Ralph Hertwig and many others work, we call it boosting. Boosting people, making them strong and letting them make their own choices, rather than nudging and steering them in a certain direction desired by some choice architect.

So your question is, how is this vision going? One problem is that digital technology can easily be misused by those who favor paternalism. And paternalism is nothing new. We have always lived with some form of paternalism; there have always been people who thought they were superior and told the supposed inferiors what to do.

It’s a form of technological paternalism. For instance, you hear the former CEO of Google, Eric Schmidt, saying that he envisions people typing in a question, Google giving them the answer, and then they do it. That was about 10 years ago. Later, he had an even better idea: people don’t even ask their own questions, they just get instructions from Google. What should I do today? What job should I take? Google knows better than I. Whom should I marry? Google knows better than I. All of this rests on illusions about a precision and accuracy that algorithms cannot deliver.

But the point is, even if the algorithms are far inferior to the hype, they’re used by people to establish this type of paternalism. And when people just follow the recommendations, the paternalists have won.

Is there anything that we didn’t cover that you want to touch on?

I want to emphasize that it pays to learn a bit of statistical thinking, in order to evaluate what an algorithm really can do.

If you understand that a deep artificial neural network is basically a sophisticated version of a regression technique, a nonlinear multivariate regression, then you understand that this thing will never turn itself into a superintelligence. Nobody would think that statistical techniques, even if you push them with computing power, would ever become something more. It’s like a car: even if it has not hundreds of horsepower but thousands or millions, it’s still a car. It will be faster, yes. But it’s still a car.
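His point can be made concrete with a small sketch. The code below is an illustration added here, not from the book, with made-up weights: a tiny feedforward network written out as plain arithmetic, showing that each layer is just a weighted sum plus an intercept passed through a fixed nonlinearity, that is, regression stacked on regression. More layers and more computing power scale this up, but the ingredients stay the same.

```python
# Hedged sketch (not from the book): a tiny feedforward network written out
# explicitly, to show it is a chain of (non)linear regressions: each layer is
# a weighted sum plus an intercept, passed through a fixed nonlinearity.
import numpy as np

rng = np.random.default_rng(0)

def tiny_network(x, W1, b1, W2, b2):
    """Two-layer network: a nonlinear regression stacked on a linear one."""
    hidden = np.maximum(0.0, W1 @ x + b1)  # weighted sums + ReLU nonlinearity
    return W2 @ hidden + b2                # another weighted sum: the output

# Random weights, just to show the shape of the computation.
x = rng.normal(size=4)                      # 4 input features
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

print(tiny_network(x, W1, b1, W2, b2))      # a single predicted value
```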

Also, what I think is important for readers, as it is for me, is to have the will and motivation to keep the remote control for your own life and your emotions in your own hands. To keep control, not to give it away. About 75 percent of videos on YouTube have not been selected by the people themselves; they follow recommendations. This is a basic attitude. Do you want to be guided by others through your life? Maybe. It’s easy. Or do you want to look behind the scenes? Who is behind it? Why do the algorithms guide you there? Because the personalized advertising system is designed to keep you there longer, to lead you to more exciting and less scientific pages.

Living in this great time means trying to understand what algorithms can do and what they cannot do, and being wary about marketing hype and techno-religious faith. Have the courage to keep your life in your own control, in your own hands.