Like thousands of families across the country, my girlfriend’s family succumbed to the urge to adopt a pet during the pandemic.
We didn’t have a name picked out when he first arrived, and someone joked that he looked like a Felix, and so the name stuck. We thought he was adorable—he has a round body with a glistening black coat—but he would sometimes wake us up in the middle of the night with his incessant beeping, a sign that Felix, the automated vacuum cleaner, was trapped under the couch and yelping for attention.
He is not, of course, a pet in the traditional sense, but nevertheless Felix quickly became an integral part of the family. Whenever I catch up with my girlfriend’s parents, they always bring up Felix’s latest misdeeds—like the time he became hopelessly ensnared in their holiday string lights—and we always celebrate when Felix completes a full cleaning cycle without a hitch.
Felix, along with other robots like drones, dolls, and companions, is part of what robotics ethicist Kate Darling calls a “new breed,” alongside traditional household pets and other animals, in her book The New Breed: What Our History with Animals Reveals about Our Future with Robots.
I met up virtually with Darling, a research specialist at MIT’s Media Lab, for a conversation about her new book in which she argues that it’s more useful to turn to our relationship with animals—rather than other humans—to better understand robot-human interactions. We discussed the promises and pitfalls of our future relationship with robots and how our AI companions can work in tandem with us—like a farmer-ox pair—to help us accomplish our goals as a species.
Our conversation, edited for length and clarity, is below.
Max Kozlov: I’d love to start by asking what drew you to the idea that our relationships with animals could tell us something about our relationships with robots?
Kate Darling: I’ve always loved talking to people about robots; it’s been my favorite thing to talk about. And over the past decade I’ve noticed that in so many of the conversations I have, whether it’s with roboticists or people at a garden party, we’re constantly, subconsciously comparing robots to humans, and artificial intelligence to human intelligence. That’s never made a lot of sense to me, given that artificial intelligence works so differently from our own. It’s made way more sense to me to look at the animal analogy, because it changes so many conversations; we’ve used animals as partners in what we’re trying to achieve because their skillsets are so different [from ours]. It always bothered me that we are limiting ourselves and falling into this technological determinism that robots can, will, and should replace people. [An animal] is also this autonomous thing that can sense, think, make decisions, and learn, and it’s something we’ve dealt with before. I finally had to just write a book on it.
Why do you think it’s so seductive to immediately go toward comparing robots to humans?
Honestly, it makes a lot of sense to me. We tend to anthropomorphize anything, especially things with traits or behaviors that resemble our own, so of course we project ourselves onto robots. We project ourselves onto animals as well, so I think that’s where it comes from. I think that’s also the source of all the science fiction and pop-culture narratives that further reinforce this idea.
It always bothered me that we are limiting ourselves and falling into this technological determinism that robots can, will, and should replace people.
In your book, you elaborate on a new kind of human relationship—our relationship with robots. What does that relationship look like in our current lives?
Basically, we’re very social creatures, and we develop all sorts of relationships to people, to animals, to objects, and now we have robots, which are these weird things that we subconsciously treat as if they’re alive. We clearly see from the research happening with the very primitive robots we have today that people can develop relationships to robots too, and what that can look like is naming their Roomba vacuum cleaner. They’re not going to treat it like a dog—it’s not that type of relationship—and they’re not going to treat it like a neighbor, but they are going to have some sort of attachment to it. The Roomba is a very simple example, but there’s also a whole category of robots, called social robots, that are specifically designed to develop a socioemotional connection. They’re still at a very primitive stage, but I think we’re going to see a lot more of them in the near future.
Speaking of social robots, you mention in the book that people are somewhat skeptical of them—that people fear they might replace human interaction—but it seems like you have a more optimistic view of the technology.
Again here, we’re falling into this fallacy of comparing robots to humans and automatically leaping to this assumption that they will replace us. So people are worried that their children are going to replace their friends with robots, or that the robots that we use therapeutically in nursing homes are there to replace care workers. But as soon as you say, “What if we look at this as a type of animal,” people are like, “Oh, so you know, if my uncle gets a dog, I’m not worried that [it’s] going to replace his human relationships—it’s a supplement.”
Robots aren’t exactly like animals either—they’re a new type of supplemental relationship that we can make use of.
As you mentioned, we tend to anthropomorphize animals and robots in a similar way, but there’s almost a sense that it’s silly to show empathy toward robots. Why does it feel so weird to be empathetic toward robots?
I think part of it is cultural, because in Western society we have this very strong divide between things that are alive and things that aren’t. Then there are other cultures, like Japanese culture with its long history of Shintoism, that don’t have such a strict divide. In Japan, people are more willing to view a robot as something fluid. One of the things I discovered while researching the book is that we used to do this with animals too. Before pets were really a big deal, we acknowledged that animals were alive, but we thought it was silly to have emotions toward them, to develop attachments, or—God forbid—to give them any rights. I wonder if history is repeating itself in that people are starting to treat robots like living things, but we still view that behavior as silly.
Before pets were really a big deal, we acknowledged that animals were alive, but we thought it was silly to have emotions toward them, to develop attachments, or—God forbid—to give them any rights.
As you mentioned, we’ve made a ton of progress in the past century in animal rights, but there are still plenty of examples of humans not treating animals quite so well, like how we treat animals in factory farming. Are you worried about a similar fate for robots?
I do think that’s most likely what’s going to happen. The same way we’ve treated most animals as tools and products, while some of them have become our companions and get different treatment—that’s most likely what’s going to happen with robots as well.
I’m not saying that’s what should happen, and in fact I think that looking at the history of animal rights in the West, and at some of the research on human-robot interaction, really illustrates the hypocrisy that some of us have. We want to believe that we care about the intrinsic ability of animals to suffer or feel pain, yet our behavior and our laws reflect something completely different: that we might care more about a fluffy baby-seal robot than about a living, breathing, slimy slug.
And why is it important to treat robots with respect?
Well, there’s always been this argument for animal rights that isn’t about what the animals actually feel but is more about human behavior. The Kantian animal-rights philosophy is that if you treat an animal cruelly, you become hardened—you become a cruel human yourself. This is an argument we see repeated throughout history for all sorts of things. It’s also the violence-and-video-games argument.
We don’t have good evidence that playing violent video games makes people violent, and similarly we don’t have this evidence with robots, and we don’t even really have it with animals. We know that less empathic people are more likely to abuse animals, but we don’t know if abusing animals makes you less empathic.
You write in your book, “What keeps me up at night isn’t whether a sex robot will replace your partner, it’s whether the company that makes the sex robot can exploit you.” As our lives become more and more intertwined with our AI companions, you offer some dystopian ways that these relationships could be monetized. Why are you worried about that?
I think the real problems are not with the robots themselves. The dystopia is not robots coming to take over or exploit us, it’s humans exploiting humans. I think that shift of focus is really important.
But we live in this incredibly capitalist society where we have a history of companies and governments using persuasive technologies. We have a whole industry built around trying to influence people’s behavior, not for their own benefit but for the benefit of companies, and that can cause harm.
The dystopia is not robots coming to take over or exploit us, it’s humans exploiting humans.
Social robots are a very emotionally persuasive technology. I think that’s really where we need to watch out, because we might very soon have companies trying to manipulate people through social robots. That could have implications for privacy and data security, and it could open the door to emotionally persuasive marketing. I think there are a lot of issues we’re just not talking enough about, because we’re so busy talking about robots “coming to replace us” that we’re not seeing these other things.
You say that even if we could replace a human workforce with robots in the next few decades, you hope that we will opt to create technology that gives us a partner in what we’re trying to achieve—kind of like a farmer-ox pair. What would that look like?
There are a lot of examples, because it’s clear that robots can’t replace people right now. We have underwater robots that are helping scientists study the effects of climate change, we have robots going into space where humans can’t go, and we have robots doing dangerous jobs that humans shouldn’t be doing. But it’s also about, more generally, thinking of the technology as something that can help people do a better job, rather than trying to automate away “the pesky humans.”
I see a lot of the latter with Amazon warehouses and Uber: companies that are clearly just trying to automate people away rather than help people do a better job. It seems like such a small difference, but it’s actually a huge one, because it requires thinking creatively about how we can combine the skills of robots or AI with those of humans to create something better. It might mean we have to rethink what manufacturing looks like, or rethink what warehouses look like, but that’s what I want people to be thinking about.