A Conversation with Simon Johnson about Technology and Prosperity

“A thousand years of history and contemporary evidence make one thing abundantly clear,” write newly minted Nobel laureates Daron Acemoglu and Simon Johnson in their book Power and Progress. “There is nothing automatic about new technologies bringing widespread prosperity. Whether they do or not is an economic, social, and political choice.” 

This summer, I interviewed Johnson about the relationship between technological progress and prosperity, including how societies have made these choices in the past and what our decisions about the current wave of AI could mean for our future.

The occasion for our conversation was Behavioral Scientist’s Summer Book Club, Techno-Visions. We read Kurt Vonnegut’s Player Piano, about a future society run by a special class of engineers and their supercomputer, and the people at the mercy of both. Player Piano is Vonnegut’s warning of what could come to pass if we prioritize the needs of machines over people. It’s a vision of a possible future that hasn’t proven as false as we might have hoped.

“At the heart of Vonnegut’s Player Piano and our own uncertainty about the future lies the same question,” I wrote in the introduction to the book club. “Where exactly is the technology we’re developing taking us?” 

In our conversation, Johnson helped make sense of this question through the lens of economic history. He also shared his perspective on the kinds of work AI will likely transform, how to better evaluate the utopian and dystopian visions of AI, and the tension between building what we can versus what we need.  

Our conversation was originally intended exclusively for the Summer Book Club. But after Johnson was awarded the Nobel, we decided to bring the conversation to our entire audience. It has been edited for length and clarity. 

Evan Nesterak: The level of investment and interest in AI makes it seem as if our society is on the verge of a tremendous change. Depending on who you talk to, that change might be catastrophic or it might be wonderful. Do you think we’re on the precipice of some unprecedented societal transformation? Or do you think change will be less pronounced than it’s being made out to be?

Simon Johnson: My current assessment—and of course in this age of technological development we have to reserve the right to change our opinions tomorrow if we see more new things coming—is that we are not on the precipice or about to experience a sudden acceleration. But if you take the long historical view, the United States has, for most of the past 150 years, generated technology and changed itself in light of that technology faster than any human society has ever done. The United States has reinvented itself, sometimes for better and sometimes for worse, repeatedly. My “business as usual” involves a lot of societal transformation, but not the sudden acceleration, and not the sudden bliss point, or singularity, or other things that people imagine.

What sectors do you think are most likely to transform, and who might find themselves displaced? 

AI is a form of automation. Instead of replacing people doing manual tasks with their hands, we’re replacing people doing tasks with their minds. The key element is, How routine is that task? This is something that my colleague at MIT, David Autor, has built his career around measuring. He, Daron Acemoglu, and I run a center at MIT that attempts to follow these issues.

It seems to us, and to others who look at this, that people who have jobs with a relatively routine component [are at risk of being displaced]. Customer call centers handling routine matters, for example, or people doing paperwork that is essentially filing the same thing over and over again, or people who translate or modify existing materials where the modifications are all quite small. One CEO called these positions—and this is not my term and it’s not a nice term, but he’s a CEO and he employs people, so I think it tells you something—he called them “cut-and-paste jobs.”

When you talk to people at LinkedIn, who have a lot of data on postings, positions, and transitions, they talk about it being not the bottom 20 percent in job compensation, and not people who do a lot of person-to-person interaction. But between the 20th and 30th percentiles, there are a lot of people who have relatively routine white-collar jobs, and those people, I think, are most immediately in the line of fire.

A fundamental point you make in Power and Progress is that prosperity as a result of technological progress, far from being guaranteed, is a choice. Could you tell us more about some of those choices societies have made, and when it’s gone right, and when it’s gone south?

The modern story begins about 250 years ago with the advent of what we now call industrial technology—machines being applied to various tasks, either previously done by humans or not done by humans at all. What’s really interesting about the modern age is we have continued to innovate and create new machines to do things. Productivity has increased, and productivity changes have been pretty steady. 

But shared prosperity, and the key word is shared, has been very different. For the first 80 to 100 years of the Industrial Revolution, people built enormous factories never previously imagined. They put together the silk factories, then the wool factories, and then of course the cotton factories. Most of the people who worked in those factories did not live better than they or their forebears had. In fact, a lot of people in the cotton industry, skilled weavers, for example, did well when spinning was mechanized at the end of the 18th century. They did badly when weaving was mechanized. And that generation and their kids struggled to gain any kind of prosperity or decent life in that modern economy.

Change also came as machinery spread and as industrialization became a broader process—a lot more opportunities and new tasks were created. I would highlight the creation of the railways. Railways provided long-distance transportation, obviously, which had previously been offered by horse and coach, which was pretty uncomfortable and couldn’t move a lot of stuff, or by canal, which was extremely slow.

All of a sudden, you could pick up milk at the farmer’s gate at four o’clock in the morning and get it into London in time for people to buy fresh milk in the market. You could go to the seaside for a day to Blackpool, or Brighton, or one of these other places that belong in British folklore, and see things differently, and meet different people. That had never been possible.

This is obviously a bit metaphorical but also instructive. The number of horses involved in transportation in Britain went up as the railway spread, not down, because horses were used for local connections. If you read a Sherlock Holmes novel, he takes the train out to Sussex, he comes back by train, and he takes a horse-drawn cab back to Baker Street.

Now that obviously didn’t last forever. The horse was replaced by the motorcar. But there was an overall expansion of opportunities, there was an increased demand for labor. A lot of these jobs in the 19th century, good jobs, important jobs, did not require a formal education.

The changing nature of technology did play out in a way that helped a lot of people, primarily because it increased the demand for people who didn’t already have a lot of formal education. It created new opportunities, people got more education, and the trade union movement developed, which was a very important part of sharing prosperity.

So there’s your negative and your positive, all in the first 150 years.

An assumption about work is that its value amounts to its material rewards. Vonnegut critiques that view in Player Piano. One of his main points is that there’s a huge psychological dimension to work that we often neglect. As technology has progressed, how has the psychological aspect of work been considered at different points?

I think Vonnegut was ahead of his time. It’s a point that we’re grappling with today. But he was writing in the 1940s and 1950s, and I think he was romanticizing the nature of work, particularly in large factories.

If you look at those jobs, a lot of the work was extremely repetitive, very boring. There was a high degree of burnout. Charlie Chaplin, of course, famously captured this with his portrayal of the production line.

Not too many people miss the idea of pounding the same piece of metal in the same way for 8, or 10, or 12 hours a day, six days a week. So the transformation of work and making work more interesting, and more demanding, and better compensated, that’s an important part of what happened for much of the 20th century.

What’s interesting about Player Piano is that, of course, it isn’t universal basic income that these people have; they have a job, they have things to do. It’s that those things aren’t valued. There’s no social status. They feel abandoned and rejected. And I have reservations about universal basic income. I think telling people, “Right, you’re in category B,” that sounds very Aldous Huxley to me, if you want to talk science fiction.

But making that work rewarding, making people appreciate and understand it, having it well compensated, reducing the working hours, allowing people to live better lives around that income earning opportunity, which is more interesting and more fulfilling, I think that’s the sweet spot. And Vonnegut was quite brilliantly alerting us to this problem.

Over the past year or two, we’ve been inundated with different visions for the future based on AI. What I appreciated about Power and Progress was that you were looking at what happened in history when similar visions were being proposed to the public. As we read about AI or we hear tech executives say things like, “This is going to completely change your life for the better,” what questions should we be asking to be thoughtful consumers of these visions? 

I think the problem with tech visions, and these exaggerated views on either side, is that they focus on the technology itself. What we should really focus on is, What’s the problem you’re trying to solve? What is it that you’re trying to address? Is it homelessness? Is it inequality? Is it something else about opportunity? There’s a long list.

Just saying, “We have invented this thing. It will have fantastic effects.” Well, many people have said that. Historically, it turns out to be much more complicated.

We should focus on, What is it we want to change about our society or the way we live individually or collectively? And how do we mobilize technology to improve childcare, to improve healthcare, to prevent or anticipate the development of certain kinds of cancers?

Because if you don’t articulate those goals, and share them, and get people to buy into them and mobilize resources, chances are you won’t achieve them.

If the U.S. positioned itself more as the problem-solving hub of the world looking for win-win solutions, that would be quite transformational. I think the potential is there, and that would generate a lot of good jobs in the U.S., and it would help a lot of people around the world. But it’s not the zeitgeist.

You write in Power and Progress that a lot of the early pioneers developing computer technology were rebels or hackers pushing back against large corporate structures. Could you talk about the origins of modern computing technology and what the hopes of those early technologists were versus where we ended up today, with the concentration of power and influence in a handful of executives at a handful of companies?

This is sort of the paradox of the modern computer industry. Remember, initially, the computer business was highly concentrated, in part because of the research and development money needed, and in part because of the winner-take-all characteristics of a government contract. IBM in the early 1970s was the most valuable company on the American stock market. 

The hackers who rose up around semiconductors started to think about ways you could hack your way around or build alternatives to this very Vonnegut type of corporate structure—top down, rigid, hierarchical. Would it be possible to build something that was more anarchic, more decentralized, more empowering? 

And of course that is what they built. You and I are using technology descended from that today. People with a lot of education have done well over the past three to four decades. It’s not just a few tech people who’ve done well. But there is a big winner-take-all dimension to this.

Neal Stephenson’s Snow Crash is one of the most prescient science fiction novels, because he anticipated the way technology would erode a lot of existing social norms and compacts, and also put a tremendous amount of power in the hands of a few, in his case, telecoms executives. 

It’s a paradox: they believed they were creating something that would spread power much more broadly. They put a lot of opportunity in the hands of highly educated people, and people with some specialized skills, who could build global brands, but they contributed to the disempowerment of many Americans. As we’ve discussed already, it’s a winner-take-all digital world.

One of the things that I’ve heard predicted is that it will be possible for a single person to build a billion-dollar company alone by leveraging the capabilities of AI. Is this possible? And even if it is possible, wouldn’t it rebalance over time if multiple people have access to the same powerful technology?

I think it is entirely plausible that the so-called winner-take-all characteristics of the digital transformation we’ve seen over the past three decades play out in a way that continues and exacerbates inequality across multiple dimensions. Sectors will change, technologies will change. It’s not that there’s a lock-in forever. There’s competitive pressure in an economy like the United States, but there is this irresistible tendency toward concentration. That does mean, at the individual level, like you said, that one person can leverage their skills and their insights through algorithms and have an enormous impact, an enormous reach, and a big market cap.

I could imagine the next science fiction novel, perhaps building on Player Piano, where there is a single person running a company that has immense control and leads to a two-tiered society. Somebody from this era will have to write that one. 

Or the AI will run itself: autonomous AI that receives some initial human instruction. This is of course a fascination in the science fiction literature, among the techno-optimists but the techno-pessimists too, that AI will create further AI, and be better at creating further AI than we will. And that further AI will receive instructions, some of which could be intentionally malicious, and some of which could be not intentionally malicious but will just turn out to be bad for humans. I think it’s a serious possibility that we should worry about.

I want to ask for your take on a conversation I had with a friend who runs a technology company. He was arguing that AI is going to make productivity higher across the board. My sense is that we only have so many hours in the day to consume something, whether it’s a book, or Netflix, or time with our family, whatever it might be. So if we’re able to create all these new things much, much faster, there’s still a limit on what one individual can consume. The question is, Where is all that productivity going to go? There’s a fundamental limit on how much more I could add to my life. Even if we could produce at 10x, I don’t feel I could add 10x to my life. 

One important dimension of productivity, and one difficulty of measuring productivity in the modern world, is that people change what they consume over time. We try to measure that by the value, by how much they pay for it, but that’s pretty imperfect. So yes, watching 10 times as much Netflix probably is not appealing, and probably not good for your health either. But other things may change; your capabilities, or something about how you spend your working time or your leisure time, will almost certainly change.

One thing that economists have expected for a long time—but economists are really bad at predicting the future—is that we’d have a lot more leisure. One way you might see high productivity is that everybody only works three days a week, which is also not so far from the Vonnegut vision in Player Piano. Except there, some people are working full-time and feel very fulfilled, and some people are doing make-work and feel pretty frustrated.

But what if the gains from higher productivity then and now were more evenly spread? And what if we all agreed to dial back on the intensity of the working life? Wouldn’t that be better? More leisure, more time to yourself, more creative time, more time for your hobbies.

Productivity can and will continue to increase. I don’t think it’s going to accelerate massively, as I said already. And you’re right, there are limits on what we consume and of course limits on what the planet can produce. If everybody lived at the income level of the United States, you’d need several more planets to produce that amount of stuff. But one way we might take high productivity gains is by using resources more effectively, all kinds of resources, by respecting the planet, and by limiting climate change.

There seems to be a difference between articulating a goal and then marshaling the tech to that goal, versus creating the technology simply because we can create it. There’s a line that Jeff Goldblum’s character says in Jurassic Park: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.” I think that’s a theme Vonnegut is exploring in Player Piano, an idea about technology that he was partially inspired to explore by Norbert Wiener, whom you quote in the opening of Power and Progress.

I’m very glad you picked up on the Norbert Wiener point, because my interpretation is exactly that Vonnegut was reading Wiener’s work and then using it in the right way. The epigraph for Power and Progress is a quote from Wiener, from 1949:

“If we combine our machine potentials of a factory with the valuation of human beings on which our present factory system is based, we are in for an Industrial Revolution of unmitigated cruelty. We must be willing to deal in facts rather than fashionable ideologies if we wish to get through this period unharmed.” 

I think Wiener and Vonnegut should be read together. Wiener, of course, was a brilliant mathematician and the founder of cybernetics, a forerunner of AI, if you like.

So I think that’s right, and the point of writing the book was to say that all of this is within our control. This is not the 1750s. We are not struggling for political representation. We are not oppressed by the remnants of an aristocratic class or whatever merchant industrialists are rising. This is a deeply democratic, articulate, educated society in the United States and also other parts of the world. We can and should have these debates, and we should make the choices, and we should decide what we want to pursue.

But you’re right that the default remains that we will be drawn repeatedly back to what people can invent, the path of least resistance. Sometimes that delivers good things. Sometimes you can tweak it to deliver good things. And sometimes it delivers really bad outcomes for most people. So let’s be aware of that, and let’s think about technology on that basis.


This article builds on an earlier version published for Behavioral Scientist’s Summer Book Club, Techno-Visions.