Intentional and Unintentional Sludge

It’s 2 a.m. a few weeks into COVID-19 lockdown in England. A mum is online trying to claim a set of £15 government school-meals vouchers for low-income families. With schools closed across the country, low-income families who normally rely on the free school lunch program are suddenly facing an additional expense that they might not be able to meet. She is trying to obtain the vouchers to make sure her kids don’t skip a meal. But she has been trying all week with little success. She’s had 98,000 other applicants in the queue ahead of her, the help line costs up to £60 per hour, and the wait online is eating up her data.

She’s stayed up late because there’s less web traffic at night. Finally, she obtains the vouchers, but there’s a catch—despite the pandemic, they can only be redeemed in-store, not online.

Having come this far, she heads to the store, only to be told at the checkout that the voucher isn’t recognized. Humiliated and desperate, she is forced to leave the shop without any food.

It’s March in New York City. Amid the deadly first wave, the city’s residents find themselves under a shelter-in-place order. With reopening weeks, rather than days, away, a man tries to cancel his gym membership. His gym has closed following New York governor Andrew Cuomo’s orders, but has continued to charge him the monthly $50 fee. He tries to cancel online, only to find he can’t—the gym’s website states that members must call or visit the gym in person to request a cancellation. He clearly can’t visit, so he tries calling. But the calls go unanswered since there are no staff working.

Both of these stories are illustrations of what many mums and gymgoers may have experienced across the United Kingdom and United States as they tried to cope with the pandemic. We, along with other behavioral scientists, would label both as sludge—when users face high levels of friction obstructing their efforts to achieve something that is in their best interest, or are misled or encouraged to take action that is not in their best interest.

We can think of what the English mum goes through as unintentional sludge—friction due to factors like rushed design, poor infrastructure, and inadequate oversight. The mother is trying to access a benefit that will help her, that she has a right to claim, and that the government genuinely wants her to access. Yet multiple barriers prevent her from using the vouchers that would help feed her children. Millions of parents found themselves in this situation as schools closed in England earlier this year. All over the country, schools ended up paying for food parcels and gift vouchers out of their own budgets to help families who were going hungry.

What the New York gym-goer faces is different. It is intentional sludge—friction put in place knowingly to benefit an organization at the expense of the user. The gym doesn’t want him to cancel the membership, which would mean lost revenue. Even absent the pandemic, the policy makes the membership unnecessarily difficult to cancel. The gym’s hope is that people forget, give up, or don’t bother canceling in person or over the phone, or that it takes them longer to do so. This translates into revenue for the gym, without any of the costs of providing a service. Stories like this have resulted in class-action lawsuits against companies that make it overly difficult or impossible to cancel gym memberships. One lawsuit alleged that one large gym company was stealing over $30 million per month from customers.


Intentional and unintentional sludge occurs across a wide range of familiar products and services, from accessing benefits to canceling a subscription, from paying off your credit card to booking travel. Where there’s sludge, there’s an end user who’s come off worse.

How do we get rid of sludge? Understanding how to remedy it comes down in part to understanding the motives behind it.

In this article, we’ll explore cases of intentional and unintentional sludge. In particular, we’ll look at examples of people who have taken a stand, and how. Fortunately, sludge isn’t something that we have to live with—it can be overcome, thwarted, and even eliminated. But it won’t be easy, and what counts as sludge isn’t always clear-cut. To start, let’s take a trip to the dark side—intentional sludge.

Intentional sludge: The dark side

If you’ve booked online travel, tried to cancel a subscription, or attempted to obtain a rebate, you’ve likely encountered features or processes deliberately put in place to make it harder for you to do what you want.

Previously, we’ve written about the ways that hotel booking sites employ sludge to dupe users with tactics like misleading rankings (putting the hotels that pay the site the highest commissions first), pressure selling (“10 people looking at this room right now!”), dubious discount claims (“Was £210 per night, now only £149!”), and hidden charges that appear at checkout.

The upside is that the United Kingdom’s Competition and Markets Authority (CMA) has taken regulatory action against many of these practices by hotel and other travel booking websites. The downside is that these sites are just the tip of the iceberg.

In one large study of online retailers, Arvind Narayanan, Arunesh Mathur, and their colleagues developed software to crawl over 11,000 popular online shops for instances of intentional sludge, or “dark patterns” as it is sometimes called. They found that around 11 percent of the retailers (over 1,200) were using sludge in various forms.

Further, they identified fifteen types of dark patterns across seven categories, along with the cognitive biases and heuristics the sludge tapped into, including anchoring, bandwagon/social norms, defaults, framing, scarcity, and the sunk cost fallacy. Most are covert, deceptive, and hide information from consumers (see table below).

The researchers also found that some 180 retailers even presented false information to users. Some retailers employed fake countdown timers to keep up the pressure to buy now (the limited-time “offer” simply starts again once the timer has run down). Others used random number generators to create social proof messages about how many users were “currently viewing” a product. And some even randomly generated customer details. For example, thredup.com peppered users with messages like “Jacqueline from Jacksonville just saved $52 on her order,” based on code that inserts preprogrammed names, cities, and products into the phrase “[name] from [city] just saved [$] on her order.”
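To make the mechanism concrete, here is a minimal Python sketch of how such a fabricated social-proof message can be generated. It illustrates the pattern the researchers describe rather than any retailer’s actual code; the name and city lists and the savings range are invented for the example.

```python
import random

# Invented placeholder lists; a real implementation would ship its own.
NAMES = ["Jacqueline", "Maria", "Sam", "Priya"]
CITIES = ["Jacksonville", "Denver", "Austin", "Leeds"]

def fake_social_proof() -> str:
    """Fill the '[name] from [city] just saved [$] on her order' template
    with randomly chosen values, creating the illusion that a real
    customer just placed an order."""
    name = random.choice(NAMES)
    city = random.choice(CITIES)
    savings = random.randint(10, 250)  # no real transaction behind this number
    return f"{name} from {city} just saved ${savings} on her order"

print(fake_social_proof())  # e.g., "Maria from Denver just saved $87 on her order"
```

The deception lies in the fact that nothing in the message corresponds to a real purchase—every element is drawn at random.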


In a large study of online retailers, researchers identified fifteen types of intentional sludge, or dark patterns. Most are covert, deceptive, and hide information from consumers. Source: Arunesh Mathur et al.

Another insidious example comes from TurboTax, the leading U.S. tax preparation software, whose sludge practices make their services anything but “turbo” for many users, particularly those with lower incomes.

Last year, investigative journalists at ProPublica revealed how TurboTax intentionally drove customers to upgrade from its free product to paid versions using deception, excessive friction, and added hassle for customers who attempted to stick with the free version.

ProPublica found that TurboTax intentionally hid its free version from search engines, despite being part of an industry agreement with the IRS, the U.S. government’s tax agency, to provide Free File services for low-income earners in exchange for the IRS not starting its own service.

What kind of impact did hiding the free service have?

While TurboTax is only one piece of the tax prep industry, ProPublica reported that 70 percent of U.S. taxpayers are eligible to file for free, but only around 3 percent actually do so, estimating that U.S. taxpayers “are spending about $1 billion a year in unnecessary filing fees.”

“Intuit’s tactics to reduce access to the Free File program and confuse taxpayers are outrageous,” Senator Ron Wyden, the ranking Democratic member of the Senate Finance Committee, told ProPublica.

But that’s not all of the sludge up TurboTax’s sleeve. For example, if a customer managed to navigate to the free version—a version that does not allow you to pull in your information from the previous year—TurboTax showed a 10-second animation of your files being deleted, alongside the words “Removing last year’s info and [product name] benefits.” Painful to watch, even if you rationally know you’re better off with the free version.

Critics have also argued that the language used by TurboTax is manipulative, a type of sticky default in which, even though the customer has not yet confirmed which product they will use, TurboTax pushes them toward a paid version with phrases like “Automatically import your info with [product name]” and “Delete your info and start from scratch.”

TurboTax’s user-facing language is an example of intentional sludge. Source: UX Collective

Intentional sludge implemented by the likes of TurboTax, online retailers, and hotel booking sites is insidious and consequential. And these are just a handful of examples. However, there is a bright spot—we know about this sludge because experts from a range of backgrounds—scientists, journalists, lawyers, and regulators—are increasingly investigating it and calling it out when they see it. In the case of TurboTax, it meant that, following ProPublica’s investigation, the company began listing its free version in search results. A step in the right direction in the fight against intentional sludge.

Unintentional sludge: The gray side

Intentional sludge feels nasty and wrong. It’s not too hard to see why. In many cases, there’s a clear perpetrator and victim. Unintentional sludge, however, is in many ways a trickier beast to sort out. It occurs as excessive red tape and poorly designed processes in government benefits programs, or as high hassle factors in financial products, and it can have equally bad (or worse) outcomes for the consumer. But it’s often harder to figure out who or what is to blame. This can make the process of finding a solution more complex.

Despite the challenge, a number of organizations, from regulators to behavioral science firms, are beginning to take action to improve things like financial decision-making, higher education, and food security.

In October 2019, two financial regulators, the Australian Securities and Investments Commission and the Dutch Authority for the Financial Markets, argued that disclosure alone—making all relevant information available to the consumer—is not enough to ensure consumers make good financial choices regarding things like making credit card payments, purchasing home insurance, and investing in the stock market.

Even with disclosure, consumers face the extremely high hassle factor of having to plough through huge amounts of information in order to make a decision. This often means consumers avoid doing so, or skim through it quickly (or not at all) and end up making a choice that may not be in their best interest. Going forward, both of these regulators intend to take a more outcome-focused approach—measuring whether consumers are choosing the best products—rather than assuming disclosure of information is sufficient.


In higher education in the United States, numerous barriers exist for students applying for financial aid, meaning that about 10 to 40 percent of students don’t matriculate despite having gained admission, and many of those who do start their course end up dropping out; only 59 percent of students pursuing four-year degrees manage to graduate within six years. Barriers include red tape, poor provision of information and deadlines, time-consuming and poorly designed application processes, funding restrictions, and social anxieties about fitting in—particularly for those from low-income backgrounds. The behavioral science nonprofit consultancy ideas42 has worked on over 16 projects to reduce unintentional, excessive friction for students applying to and progressing through higher education. They note: “These behavioral barriers are certainly unintentional and often unrecognized, but their impact on student persistence is significant.” Yet, they point out, with tweaks to processes and communication, their pilot studies achieved significant results, such as increased financial aid applications and access, and even higher grades.

Similar bureaucratic barriers also pop up in U.S. government benefits programs that place work requirements on beneficiaries. Rather than encouraging work, as the programs’ proponents claim, work requirements actually make it harder for people in need to access benefits. One way is through clunky tech systems. “The Access Arkansas website where clients must report their hours and earnings in order to keep their Medicaid benefits actually closes between 9 p.m. and 7 a.m.—making it nearly impossible for anybody who works during the day to use,” writes Anthony Barrows of ideas42.

In addition to overly complex processes or misguided policies, there is also growing recognition of another cause of unintentional sludge—when social stigma, shame, or embarrassment creates psychological barriers large enough to stop people from accessing services they need. This might occur when people need government benefits or services for personally sensitive issues like weight loss, testing for sexually transmitted diseases, or support for addiction. People may be put off from accessing these services when their actions would be highly visible to others.

An early example of reducing these types of barriers was the renaming and refocusing of the U.S. Food Stamp Program, which in 2008 became SNAP (the Supplemental Nutrition Assistance Program) and focused on nutrition rather than welfare. The program also replaced the conspicuous paper food stamp coupons with more discreet electronic benefit cards and reduced red tape. The United Kingdom’s equivalent program, Healthy Start, could also make the move to payment cards to allow more discreet, contactless, and therefore less visible use of welfare benefits. (Scotland recently made the switch.) While there is much more work to be done to reduce stigma-related sludge in SNAP and Healthy Start, these changes represent steps in the right direction.

Unintentional sludge can also be found in many other areas, from energy tariff pricing, legal aid, and health care provision to government grants, subsidies, and loans. Products and services with unintentional sludge are fertile ground for improvement, because the provider of the service is likely motivated to reduce the friction for their users. However, finding a solution does frequently mean coordinating across multiple agencies and dealing with rigid systems. This brings us to some of the ideas for how to reduce or even eliminate sludge.

Looking to the future

There is a noticeable societal shift in how much sludge—both intentional and unintentional—we are able to notice and are prepared to tolerate. A number of academics, legal experts, regulators, journalists, policymakers, and consultancies are now dedicated to fighting sludge. Looking to the future, we can see three clear trends, all pointing to increased efforts to reduce and minimize sludge: a growing ability to identify and assess sludge, growing consumer awareness, and the regulation of sludge by mandate and self-regulation.


In terms of our ability to identify and assess sludge, Cass Sunstein has proposed “sludge audits”—structured assessments of the extent of sludge to increase transparency and provide a starting point for reducing it. To some extent, many behavioral science consultancies already conduct some form of sludge auditing in their regular practice. For example, a consultancy might assess quantitatively and qualitatively the behavioral barriers that exist for a certain behavior.

Another idea has come from the University of Toronto’s Dilip Soman, who has spearheaded an initiative to create a dashboard that enables organizations to self-assess the sludge in their processes. The scorecard consists of three components: process, communication, and inclusivity. Process refers to the number of steps or length of tasks and their complexity. Communication refers to whether all of the necessary information is provided to complete the task and how simply it is communicated. And inclusivity refers to whether certain groups are explicitly or implicitly excluded. Inclusivity is a novel component; his team noted that some individuals are deterred from accessing a service due to shame or embarrassment. In the hypothetical example below, the organization scores well on communication, okay on process, but badly on inclusivity.

An example of a dashboard organizations could use to self-assess sludge. In this hypothetical example, the organization scores well on communication, okay on process, but badly on inclusivity. Source: Dilip Soman et al.
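As a rough sketch of how such a self-assessment might be structured, the Python snippet below scores the three components described above. The 1-to-5 scale, the field names, and the “weakest component” summary are our own assumptions for illustration, not the initiative’s actual format.

```python
from dataclasses import dataclass

@dataclass
class SludgeScorecard:
    """Hypothetical self-assessment across the three components:
    process, communication, and inclusivity.
    Assumed scale: 1 (heavy sludge) to 5 (little sludge)."""
    process: int        # number, length, and complexity of steps
    communication: int  # is all necessary information provided, and simply?
    inclusivity: int    # are any groups explicitly or implicitly excluded?

    def weakest_component(self) -> str:
        """Return the component most in need of sludge reduction."""
        scores = {
            "process": self.process,
            "communication": self.communication,
            "inclusivity": self.inclusivity,
        }
        return min(scores, key=scores.get)

# Mirrors the hypothetical example above: good communication,
# okay process, poor inclusivity.
card = SludgeScorecard(process=3, communication=5, inclusivity=1)
print(card.weakest_component())  # -> "inclusivity"
```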

Whether governments and private sector companies will adopt some form of sludge auditing or scorecards is hard to predict. In the private sector, companies may start to undertake self-audits if there is increased regulation around sludge practices, in order to avoid fines or penalties. But as consumers become savvier to sludge, some companies may see sludge reduction as a good business move. For example, in May 2020, Netflix announced that it will start asking inactive users if they want to keep their membership. If they don’t want it, or if they don’t respond within two weeks, the company will automatically cancel their service. Netflix may inspire or drive others to follow suit.

In our work at The Behavioural Architects, financial services providers have come to us with the goal of better understanding ethical application of behavioral science, so they can avoid being penalized by the financial regulator.

With regard to more formal regulation, several committees and consumer organizations have recommended that digital platforms be more heavily regulated, particularly around data privacy and “sticky defaults.” “Notice and consent” agreements for data privacy are likely to be replaced with something that better protects the consumer and is less of a hassle.

The Stigler Committee is one group that has outlined strategies to limit online sludge. They spent over a year studying the conduct of digital platforms and noted that:

“Consumers tend to stick to default options. If forced to choose, they opt for the most salient alternative … [Manipulations are] especially harmful when i) the manipulator knows a lot about the potential customers; and ii) there are limited (or no) alternatives, as is the case for most [digital platforms]. Framing, nudges, and default options can direct consumers to choices they regret.”

The Stigler Committee recommends scrapping click-through or simple pop-up boxes and replacing them with much more effective tools—evidence-based, pro-consumer default rules—which acknowledge consumer preferences and facilitate better data privacy and protection.

More broadly, there are numerous calls for a separate independent digital regulator (e.g., an Office for Responsible Technology) or, at the very least, increased powers for existing regulators to better control digital platforms and providers. The United Kingdom recently announced that it will create a Digital Markets Unit—a new tech regulator—to oversee digital platforms. With such broad agreement among governments and regulators that consumers must be better protected online, it’s likely to be only a matter of time before things change in other countries as well.

For the public sector, an independent auditor—the equivalent of the National Audit Office in the United Kingdom—may be required. Policymakers sometimes purposely add sludge and friction in order to prevent fraudulent activity, yet this approach can backfire if friction is so high it deters the very people policymakers are trying to help. As with nudging, in an ideal world, sludge like this needs to be aimed only at the undeserving. A recent trial aiming to reduce fraudulent unemployment benefit claims in New Mexico—a collaboration between the state government, a Harvard University research team, and the consulting firm Deloitte—was able to use machine learning to identify those most likely to attempt fraud with considerable accuracy. They were then able to target these individuals with interventions designed to reduce fraud.
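The New Mexico trial’s actual models, features, and thresholds are not described here, but the general approach can be sketched: train a classifier on past claims, score incoming ones, and direct the extra anti-fraud friction only at the highest-risk cases. Everything below (the synthetic data, the two features, and the 0.8 cutoff) is an invented illustration, not the trial’s implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for historical claims: two made-up numeric features per
# claim and a label for whether fraud was later confirmed.
X_past = rng.normal(size=(1000, 2))
y_past = (X_past[:, 0] + rng.normal(scale=0.5, size=1000) > 1.0).astype(int)

model = LogisticRegression().fit(X_past, y_past)

# Score incoming claims; only the highest-risk ones get added verification
# steps, so the friction is not spread across every claimant.
X_new = rng.normal(size=(5, 2))
risk = model.predict_proba(X_new)[:, 1]
for claim_id, p in enumerate(risk):
    action = "extra verification" if p > 0.8 else "fast-track approval"
    print(f"claim {claim_id}: estimated fraud risk {p:.2f} -> {action}")
```

The design point is that the sludge is targeted: low-risk claimants keep a frictionless path, while added checks fall only on claims the model flags.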

Important caveats

An important aspect of the conversation around intentional and unintentional sludge surrounds what counts as sludge and what doesn’t. In the online retail environment, this means tackling the thorny question of where to draw the line between sludge and acceptable behavior. It’s often difficult to distinguish objectively between what is good marketing and what is deceitful and misleading.

“The important question as a policy matter is what separates a dark pattern from good old-fashioned advertising,” Woodrow Hartzog, a law and computer science professor at Northeastern University, told The New York Times. “It’s a notoriously difficult line to find—what’s permissible persuasion vs wrongful manipulation.”

Arvind Narayanan, one of the authors of the dark patterns research we discussed earlier, has also weighed in on this question. “We are not claiming that everything we categorize in the paper should be of interest to government regulators,” he said. “But there should at least be more transparency about them so that online shoppers can be more aware of how their behavior is being nudged.”

Another important factor is that the same service might be low friction for some, yet high friction/high sludge for others. Dilip Soman has pointed out that one person’s sludge may not seem like sludge to another. Some users may be comfortable with an autorenewal, while others might not be. Some may have internet access or a mobile phone to enable them to digitally apply for a benefit; others may not.

One way to address this, reducing sludge at the right time for the right people, is for the provision of products and services to become much more tailored and sensitive to people’s needs, knowledge levels, language, internet access, and even location. (Though we then may trade one set of ethical concerns for another.)

A final point worth noting about sludge and nudge: what’s in someone’s best interest will often differ from person to person. As policymakers, companies, and behavioral science firms try to “nudge for good,” what “good” means, and for whom, will increasingly come under the microscope. Similarly, as behavioral scientists try to reduce sludge, it will be important to consider where what counts as sludge is black and white, and where it’s gray.

Fortunately, we’ve seen people and organizations who are interested in getting it right for users and consumers. As Richard Thaler has urged: “Let’s continue to encourage everyone to nudge for good, but let’s also urge those in both the public and private sectors to engage in sludge cleanup campaigns. Less sludge will make the world a better place.”


Full disclosure: ideas42, which the authors mention in the article, is a founding partner of the Behavioral Scientist.