Throughout history, technological advances have revolutionized the way we work. In many cases, these innovations extended our natural abilities, enabling our bodies to do more, more easily. “Most of the greatest advances of modern technology have been instruments which extended the scope of our sense organs, our brains, or our limbs,” the philosopher and psychologist K.J.W. Craik said in 1943. “Such are telescopes and microscopes, wireless, calculating machines, typewriters, motor-cars, ships and airplanes.”
These old inventions are what Tim Brown, the CEO of IDEO, calls artifacts. Artifacts aren’t themselves “smart”—the people who use them are. Today, as the rapid rise of artificial intelligence brings us to the brink of the Fourth Industrial Revolution, we face a different dynamic: Both people and technology are smart, albeit in different ways.
Though some sectors will see large-scale replacement of human employees with intelligent automation, more domains will require businesses to figure out how to integrate the capabilities of these new tools with their workforce, augmenting workers with A.I. rather than automating business functions entirely. The prospect of A.I.-augmented workers is both promising and unsettling: How can employees and firms ensure that they get the benefits of A.I. without erasing uniquely human strengths?
One of the foremost challenges will be getting workers to embrace augmentation. During the 19th century, groups of English workers called the Luddites resisted new performance-enhancing technologies, and overcoming that resistance was critical to industrial advancement. Now, new technology raises this same problem, but it does so at unprecedented scale: Augmenting human abilities with A.I. is more complex than using artifacts, and it’s potentially even less welcome.
Since these new technologies play a role even in thinking for us, they pose a direct and existential threat to what it means to be human. Augmentation requires employees to rethink what their role can and should be in the workplace, and how their humanity can be redefined and extended. And because these machines can quickly process vast amounts of data, they can yield novel, counterintuitive insights with which humans may disagree. It’s one thing to extend one’s abilities with machines; it’s another to use a machine that produces a different judgment or reaches a different conclusion.
A.I. will also require workers to confront another assumption about human advantage: People tend to think that certain areas of judgment, such as moral decision-making, are best left to human agents, which makes many reluctant to trust machines. Even among those who believe A.I. may aid decisions in general, trusting its output in specific instances is a potential obstacle.
Firms also have a role to play. They will need to leverage behavioral science and psychological research methods to investigate empirically the circumstances under which employees accept or reject A.I. advice or decisions. In areas where A.I. looms but has yet to take hold, they could use a contemporary “Mechanical Turk” as a testing framework. When the inventor of the original Mechanical Turk unveiled it in 1770, he presented it as a chess-playing machine, intending to fool human players into believing it was a powerful, even supernatural, piece of technology. In reality, a gifted human chess player was hidden within the apparatus.
So too could people be “fooled” for science. We have imagined an experiment to test how and when employees trust A.I. in their decision-making: A staff member would play the role of “hidden expert,” creating the illusion that advice is coming from the A.I. Employees could choose whether to incorporate the feedback into their own decision-making, allowing a company to measure what share of the staff was willing to use an A.I. device, with what frequency, and in what situations. Another important area for measurement is the characteristics of employees who are more, or less, trusting of A.I. For instance, how do we know for sure that millennials will always adopt new technology more readily than older generations? (Answer: We don’t.)
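To make that measurement concrete, here is a minimal sketch, in Python, of how the logs from such a hidden-expert study might be summarized. Every name in it (the cohorts, the situations, fields like “accepted”) is a hypothetical illustration of the idea, not a prescribed study design.

```python
# Hypothetical log of a "hidden expert" study: each record is one decision in
# which an employee received advice presented as coming from an A.I. (but
# actually authored by a concealed human expert). Field names are invented.
from collections import defaultdict

trials = [
    {"employee": "e01", "cohort": "millennial", "situation": "routine",     "accepted": True},
    {"employee": "e01", "cohort": "millennial", "situation": "high_stakes", "accepted": False},
    {"employee": "e02", "cohort": "gen_x",      "situation": "routine",     "accepted": True},
    {"employee": "e02", "cohort": "gen_x",      "situation": "high_stakes", "accepted": True},
]

def acceptance_rate(records, key):
    """Share of trials in which the 'A.I.' advice was incorporated, grouped by key."""
    taken, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r[key]] += 1
        taken[r[key]] += r["accepted"]  # True counts as 1, False as 0
    return {k: taken[k] / total[k] for k in total}

print(acceptance_rate(trials, "situation"))  # {'routine': 1.0, 'high_stakes': 0.5}
print(acceptance_rate(trials, "cohort"))     # {'millennial': 0.5, 'gen_x': 1.0}
```

The same grouping can be run on any attribute logged per trial, which is how a firm could test, rather than assume, claims like “millennials adopt new technology more readily.”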
When A.I. decision aids are adopted, measurements analogous to those described above can be made, enabling real-time monitoring of when, how, and by whom the aids are used. Firms should also evaluate whether decisions made with A.I. are empirically better than those made without it. For instance, in financial trading, are workers who demonstrate more openness to A.I. input more likely to realize larger gains? And if so, is it because they are overriding suboptimal human biases?
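A minimal sketch of that outcome comparison, under the assumption that a firm logs each trader’s openness to A.I. advice alongside realized returns; the threshold and all figures are invented for illustration:

```python
# Hypothetical comparison of decision quality with and without A.I. input:
# do traders who accept A.I. advice more often realize larger gains?
from statistics import mean

traders = [
    {"openness": 0.9, "return_pct": 4.2},  # openness = share of A.I. suggestions accepted
    {"openness": 0.8, "return_pct": 3.1},
    {"openness": 0.2, "return_pct": 1.5},
    {"openness": 0.1, "return_pct": 2.0},
]

OPEN_THRESHOLD = 0.5  # arbitrary split between "more open" and "less open"
more_open = [t["return_pct"] for t in traders if t["openness"] >= OPEN_THRESHOLD]
less_open = [t["return_pct"] for t in traders if t["openness"] < OPEN_THRESHOLD]

print(f"more open to A.I.: mean return {mean(more_open):.2f}%")  # 3.65%
print(f"less open to A.I.: mean return {mean(less_open):.2f}%")  # 1.75%
```

A real analysis would, of course, need controls (market conditions, trader experience) and significance testing before attributing any gap to the overriding of human biases.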
Gathering this data and embracing augmentation will probably be most useful in industries in which people have historically relied only on their brains to make decisions, where there is resistance to acknowledging cognitive limits and biases, and where A.I. has already demonstrated the capacity to improve human ability. Take, for instance, fields in which decisions mean the difference between life and death, such as medicine, where A.I. can now read scans, classify patients by various criteria, prioritize patient care, and support complex clinical decisions. Or consider fields in which decisions mean earning or losing millions or billions of dollars, such as financial trading or advising, where A.I. applications can automatically process news and market data, giving humans the insight to make more informed choices and more time to spend with clients.
Though the benefits of A.I. are promising and openness is important, we must proceed thoughtfully. The algorithms and technology that underlie A.I. are imperfect and shouldn’t be accepted with unflinching faith. The optimal, but likely elusive, arrangement is one in which the A.I.’s output is used when the human is wrong, and the A.I. is overridden when the technology is wrong. Navigating this tension between trust and skepticism in augmented decision-making will require what Gordon Pennycook of Yale University and his colleagues have called reflective, rather than reflexive, open-mindedness. Rather than uncritically accepting automated answers, we must deliberate consciously, on a case-by-case basis, about how best to combine the intelligence of computing technology with human wisdom and discernment. Attributes connected to success in such deliberation will be increasingly prized as augmentation expands: The World Economic Forum’s Future of Jobs Report identifies critical thinking, judgment and decision-making, cognitive flexibility, and creativity as top skills for 2020 and for the Fourth Industrial Revolution.
Much as employees will need these characteristics to make the best individual decisions, firms will need to exhibit flexibility and creativity at the organizational level. Building an augmented workforce isn’t a race to the finish but an uncertain journey with neither a roadmap nor a prescribed end point. Since we can’t be sure which systems and approaches for integrating A.I. are most effective, we must continue to experiment, measure, and innovate. Adjusting to a world of accelerating technological advance isn’t a one-time investment but an ongoing battle to optimize extended human ability.
We predict that firms that can most effectively extend human cognitive ability with new technologies and find the ideal alignment between human intelligence and A.I. will have significant economic advantages over competitors unwilling to augment or unable to find the right equilibrium. Though this particular challenge is new, creatively leveraging technology to extend our natural ability is not. It is an integral part of the history, present, and future of how we work—and is truly and deeply human.