It’s taken for granted now that behavioral science is the future. Applied behavioral science has become part of the “innovation agenda,” and, with behavioral units proliferating, its methods seem destined to make a lasting mark on how governments conduct themselves. (At the risk of obviousness, we at the Behavioural Insights Team have gone as far as to adopt Futura as our new house font.) Eventually we’ll look back at (and down on) the old way of doing policy—sans evidence, sans testing—as modern doctors look back on their leech-wielding forebears.
But, for all this talk of innovation, less has been said about what the future of behavioral science will actually look like. Now, it’s not that we are entirely in the dark. There is no shortage of suggested next steps, as a quick search for “going beyond nudge” reveals (the most recent edition of the Behavioural Public Policy journal collates some good ideas which avoid that cliché). Over the longer term, it is relatively easy to imagine our favorite indicators—GDP, life expectancy, well-being, and so on—trending gently upwards, one RCT at a time. But while we have some idea of the trajectory for the next 5 or 10 years, substitute in a 50- or 100-year horizon and the supply of coherent visions starts to dry up.
Does this lack of long-term vision matter? If it does, can we do anything about it?
It’s true that any prediction made a century out will almost certainly be wrong. But thinking carefully and creatively about the distant future can sharpen our thinking about the present, even if what we imagine never comes to pass. And if this feels like we’re getting into the realms of (behavioral) science fiction, then that’s a feeling we should lean into. Whether we like it or not, futuristic visions often become shorthand for talking about technical concepts. Public discussions about A.I. safety, or automation in general, rarely manage to avoid at least a passing reference to the Terminator films (to the dismay of leading A.I. researchers). In the behavioral science sphere, plodding Orwell comparisons are now de rigueur whenever “government” and “psychology” appear in the same sentence. If we want to enrich the debate beyond an argument about whether any given intervention is or isn’t like something out of 1984, expanding our repertoire of sci-fi touch points can help.
As the Industrial Revolution picked up steam, accelerating technological progress raised the possibility that even the near future might look very different to the present. In the nineteenth century, writers such as Jules Verne, Mary Shelley, and H. G. Wells started to write about the new worlds that might result. Their books were not dry lists of predictions. Instead, they explored the knock-on effects of new technologies, and how ordinary people might react. Invariably, the most interesting bits of these stories were not the technologies themselves but the social and dramatic possibilities they opened up. In Shelley’s Frankenstein, there is the horror of creating something you do not understand and cannot control; in Wells’s War of the Worlds, peripeteia as humans get dislodged from the top of the civilizational food chain.
If these sci-fi writers borrowed ideas from the scientists and engineers of their day, they paid them back with interest. We can now point to many real inventions that first appeared on the pages of science fiction novels. Examples include modern submarines, lunar modules, the iPad, and the World Wide Web. Indeed, this trend is so pronounced that we have started to use science fiction as the benchmark against which actual advances are judged. See, for example, Peter Thiel’s pithy diagnosis of the recent malaise in the tech sector: “we wanted flying cars, instead we got 140 characters.”
For a neat example of how this back-and-forth plays out, we can look to the life of Konstantin Tsiolkovsky. The “Russian father of rocketry” cited Jules Verne’s fiction as one of his great influences. (“I do not remember how it got into my head to make the first calculations related to rockets. It seems to me the first seeds were planted by famous fantaseour, J. Verne.”) Among his many achievements, Tsiolkovsky might be most famous for being the first person to conceive of the space elevator. Although his own efforts at science fiction never came to anything, this idea—that a cable tethering the Earth to a distant counterweight could get us to space without the need for rockets—captured the imaginations of many of Verne’s successors, Arthur C. Clarke, Robert A. Heinlein, and Iain M. Banks among them (note to aspiring sci-fi writers: make sure you’ve got a good middle initial). As today’s technologists draw inspiration from these authors, we can look forward to the cycle repeating itself.
This mutual inspiration has been a boon to the physical sciences. What benefits could behavioral scientists reap?
As a starting point, Isaac Asimov’s Foundation might be the urtext for social science inspiration. Paul Krugman, for example, has written eloquently on how Asimov’s “psychohistorians” influenced his decision to become an economist. Led by the enigmatic Hari Seldon, the psychohistorians perfect a mathematical social science so potent that it can predict the sweep of history thousands of years into the future. However deep we’ve buried our physics envy, each of us might feel a twinge of it as Seldon’s formulae stave off the worst of a millennia-long dark age.
Of course, this is not a realistic vision—the best evidence we have at the moment suggests that forecasting events with any accuracy gets extremely hard 12 months out, let alone thirty millennia. But, as Krugman’s Nobel Prize (and founding contribution to the nascent theory of interstellar trade) attests, it is an inspiring one. With Philip Tetlock’s research into the existence and cultivation of superforecasters making big strides, Asimov’s work can be a reminder of just how socially important good forecasts can be. After all, we can be sure that Thomas Friedman’s “vague verbiage” predictions will not be averting a dark age any time soon.
However overdue, the replication crisis has been a bracing experience, and it’s hard to say where it will go next, which makes it a natural subject for the long view afforded by science fiction. The work of Chinese author Cixin Liu is particularly instructive. In his Three-Body Problem series, humanity’s first contact with aliens sours quickly. Seeking to escape a home planet precariously poised between three wildly oscillating suns, the “Trisolarans” promptly dispatch an armada to conquer Earth. To slow down human scientific progress in the four centuries it will take their fleet to reach us, they send “sophons” to Earth—essentially subatomic particles with a brain and a sense of mischief. The sophons set to work sabotaging the results of every particle accelerator experiment the humans run, stopping fundamental physical theory in its tracks. The result is a sort of mega-replication crisis.
If you’re wondering how our terrestrial replication crisis might play out, what happens next is good fodder for the imagination—or nightmare fuel, depending on your disposition. As scientific progress comes to an unscheduled stop, it’s not just the careers of researchers that are affected. Over the decades, the whole of human society lurches between black depression, live-for-the-present egoism, and blithe optimism. Meanwhile Liu’s heroes, insofar as there are any, tend toward a grim realism. They share a willingness to take the unpalatable options remaining to humanity. The results are mixed, to put it charitably.
We don’t need to belabor the comparison to our situation here: humanity’s coping strategies for a hostile galaxy won’t change your views on the merits of p-values or preregistration. But Liu’s exploration of how science is bound up with wider social conditions gives us plenty to reflect on. Behavioral scientists have, with good reason, made big promises about the good our approach can do for the world. If a crisis, replication or otherwise, makes steady progress harder than expected, cultivating an active imagination about what comes next will have been time well spent.
The benefits of fiction to any practicing behavioral scientist are widely appreciated. Tyler Cowen, for example, recommends a steady diet of fiction to fellow economists who spend their days deep in mathematical models: too-simplistic stories about human behavior generally can’t withstand contact with Eliot or Dostoyevsky. In these pages, Elspeth Kirkman and Michael Hallsworth have recently shown how the works of Dickens, Auden, and Kafka can be mined for nuggets of behavioral insight. But as “genre fiction,” sci-fi often gets ignored by serious readers. This is a missed opportunity. A big part of applied social science is imagining how things could be different. We need to think harder about the future and ask: What if our policies, institutions, and societies didn’t have to be organized as they are now? Good science fiction taps into a rich seam of radical answers to this question. It can help us spot when our current way of doing things is not the only, inevitable, and unimprovable option, but one of uncounted alternatives. As the example of the physical sciences shows, science fiction has long had a way of becoming science fact. In the behavioral sciences, we would do well to embrace the same possibility.