To an untrained ear, Andy Ozment’s comments probably sounded like standard conference-panel fare. This was late 2014, and Ozment, the assistant secretary for cybersecurity and communications at the Department of Homeland Security, was answering a question posed by an audience member at a New York University cybersecurity conference. The question was, Would the DHS continue to emphasize “elementary” security advice?
“If you’re in the field of cybersecurity, a lot of what we’re preaching will sound extraordinarily basic to you,” he said. “It is extraordinarily basic. We as a nation are not at a point where we have done the extraordinarily basic things.”
This, in fact, is decidedly not standard fare among cybersecurity experts. To understand why, you need to first appreciate the degree to which the cybersecurity field is dominated not by “basic things” but by the idea that new and ever more complex technology will save us. This mistakes the real problem. An overwhelming percentage of successful cyberattacks stem from basic human error, not coding bugs or chip flaws. Even so, user-centered approaches to cybersecurity are routinely treated as an afterthought.
This focus on tech has meaningful implications. Cybersecurity firms compete for (and invest in) skilled programmers, mathematicians, engineers, and forensic sleuths, whose talents are then monetized as pricey, proprietary “cybersecurity solutions” sold to private-sector companies.
These firms, of course, have a financial incentive to make cybersecurity seem technical and complex. Indeed, in 2017, Ian Levy, technical director of the UK’s National Cyber Security Centre (part of GCHQ), criticized big network-security companies both for overhyping threats in apocalyptic language and for suggesting that only their expensive “witchcraft” could solve the problem.
An overemphasis on technology paired with minimal investment in understanding human behavior isn’t just wrongheaded, however. It poses a national security threat as well. Most malware is delivered by email, to people, often under the guise of fake invoices or password-reset requests, because manipulating human behavior is cheap and effective. Take, for instance, the theft of emails and other data from the Democratic National Committee before the 2016 presidential election. The success of that attack can be pinned on human failures, not technical sophistication. A trained IT contractor at the DNC dawdled after being told by the FBI that Russian hackers were on the organization’s network. That, together with a handful of weak email passwords, made the bad guys’ job all too easy.
How do we make their jobs harder—i.e., how do we become more secure? First, by understanding what people-centered security might look like, and then by identifying some models that have protected us from threats in the past.
There’s little doubt cyber insecurity is wreaking havoc across American society, and many others. Just in the last few years, two different families of malware, the NotPetya ransomware and the Mirai botnet, together likely shaved hundreds of billions of dollars off the global economy through destruction, downtime, and replacement costs. Meanwhile, worldwide spending on cybersecurity equipment and services will soon surpass $200 billion per year, an amount greater than the economies of New Zealand or Portugal.
Yet efforts to get people to do extraordinarily basic things, the things that could prevent (or at least blunt) many of these attacks, are at best second-tier concerns. Most online safety and cybersecurity awareness programs survive on modest budgets, often supported as small tokens of corporate social responsibility by many of the same tech firms responsible for widespread cyber insecurity in the first place.
What’s behind such a big imbalance between the problem and our perception of the problem? There are several causes, but at the core is the fallacy that cyber or information security is primarily a technological problem, not a people problem. That may not sound like an insurmountable misconception, yet a small community of researchers has spent almost two decades asking tech leaders to stop treating users like the “enemy,” with only modest success.
The crucial first step in rebalancing our approach may well be shedding the idea that security itself is a “product” to be perfected and then handed to people, like an expensive technological gift. In fact, perhaps the biggest untapped resource to make ourselves, our families and communities, and our businesses more secure lies in making “extraordinarily basic” things part of our routines.
These routines—in essence, behaviors—are different for IT managers, small business owners, or ordinary users, but only to a degree. At all levels of skill and responsibility, there’s overlap: Using strong passwords, patching and updating software, and avoiding suspicious emails are as important for the security chief of a large bank as for you and me.
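To see how “extraordinarily basic” these overlapping practices really are, consider what screening a password takes. The sketch below is an illustration, not a standard: the 12-character threshold and the five-entry common-password list are assumptions made for the example.

```python
# A minimal sketch of a basic password screen, the kind of check a sign-up
# form or an IT policy might run. The 12-character threshold and the tiny
# common-password list below are illustrative assumptions, not a standard.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein", "111111"}

def password_issues(password):
    """Return a list of human-readable problems with a candidate password."""
    issues = []
    if len(password) < 12:
        issues.append("shorter than 12 characters")
    if password.lower() in COMMON_PASSWORDS:
        issues.append("appears on a common-password list")
    if password.isalpha() or password.isdigit():
        issues.append("uses only letters or only digits")
    return issues  # an empty list means the password cleared these checks
```

A real deployment would test candidates against a large corpus of breached passwords rather than a five-entry set, but the shape of the check, and the habit it enforces, is the same.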
For example, it’s common to hear in Washington discussions that the vast majority—80–90 percent—of cyberattacks or breaches could be prevented if users simply practiced good “cyber hygiene.” The factoid is often used to support the argument that people can stay secure on their own, that no systemic changes are needed. In fact, I think the prevalence of human error suggests otherwise. It should be the starting point for further behavioral research and practice, not the conclusion.
Decades of experience with public health, safety, and nutrition interventions could help. Seen through those lenses, the questions we need to ask and answer look different. If the vast majority of breaches and attacks can be prevented by human action, what are the essential elements of safe behavior? How do we get people to make these practices habitual, starting as children? Does the terminology confuse people or deter them from learning how to protect themselves?
It’s no coincidence that cybersecurity terminology—bugs, viruses, infections, immunity, etc.—borrows substantially from medicine. In both domains, achieving good outcomes requires as many people as possible practicing smart behaviors until those behaviors become habitual. In medicine, these behaviors can take the form of handwashing, healthy diet, seat belt wearing, and regular exercise. In cybersecurity, they may look like educated consumerism, factory defaults set to high security, and two-factor authentication. Both health and education suggest models for how we can think about enhancing cybersecurity: by putting people at the center.
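Two-factor authentication, the last item on that list, shows how basic the underlying machinery can be. The six-digit codes an authenticator app displays come from the time-based one-time-password construction of RFC 6238, which fits in a few lines of standard-library Python (the function name and defaults below are my own):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, t=None, step=30, digits=6):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    # Both sides derive the same counter from the current 30-second window.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = (int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)
```

Against the RFC 6238 test secret (the ASCII string “12345678901234567890”, base32-encoded), the code at time 59 is “287082.” The phone and the server share only the secret and a clock, which is why the scheme survives a stolen password.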
Possible Models for Human-Centered Cybersecurity
Disease Prevention: It’s common to hear the phrase “cyber hygiene” in reference to basic information-security practices, such as using strong passwords and avoiding unsecured public Wi-Fi. The comparison to learned, repeatable, common-sense health measures—hand washing, teeth brushing, coughing with mouth covered—makes good sense.
Disease Management: Managing diseases requires ongoing care, not one-time intervention. So too does cybersecurity require vigilance over a lifetime, not an install-and-forget software patch. We almost certainly have something to learn from comparisons with better-known chronic diseases, especially those that are managed through changes to factors like diet and exercise. Hypertension and other cardiac conditions might fit this category. Likewise, patients diagnosed with diabetes can turn to a community of medical professionals, educators, nutritionists, and others to help maintain blood sugar and nudge them toward a healthier lifestyle.
Driver Safety and Education: To get a driver’s license, we endure, usually as adolescents, a range of tedious hurdles: structured education, testing, and a demonstrated understanding of consequences. This mature set of “soft” and “hard” checks on dangerous driving could serve as a promising model for cybersecurity, especially for young people. Not unlike teens (or preteens) seeking their first smartphones and social-media accounts, a new driver earns additional freedom and privileges only after clearing those hurdles. What if young people were asked to pass similar education and testing before signing up for social media and email?
By approaching cybersecurity through the lens of people, and not only technology, we can make big advances without huge investments or scientific breakthroughs. Progress doesn’t require a digital Marshall Plan or another Human Genome Project. Nor does it mean shaming people, no matter how good or bad they are with technology. Instead, it means recognizing that we’re ignoring a critical unmet need: empirical knowledge about how to train citizens and employees to be safe online.