Do we love robots more than animals?

by Kate Darling

A robotics expert argues that our relationships with robots cast light on our attitudes to animals.

Thirteen years ago, I bought a small robot dinosaur. Pleo, the latest and greatest in robot pets, was modelled on a baby Camarasaurus, with green, rubbery skin, a large head and big eyes. The animatronic toy moved in a lifelike way, wandering around the room, crouching playfully and purring when patted.

I showed Pleo off to a friend, telling them to hold it up by the tail. My friend complied, gingerly plucking the dinosaur up off the floor. As the robot's tilt sensor kicked in, we listened to Pleo's motors whir and watched it twist and squirm in its rubber skin, eyes bulging. After a second or two, a sad whimper escaped from its open mouth. As the robot began to cry out with more urgency, the feigned distress started to make me uncomfortable. 'Okay, you can put him down now,' I said, my words punctuated with a nervous laugh. Then I petted the robot to make it feel better.

This experience with Pleo sparked my interest in how and why many people - not just me - treat robots like living things, despite knowing perfectly well that they're machines. What does this say about our psychology, our empathy and our relationship with creatures that are alive? For example, a lot of us believe we value the inner states of others. We think that the shared ability to experience things like suffering or joy is the main reason we relate to other beings. But our actual treatment of non-humans reveals a messier truth. In this essay, I explore whether our budding relationship to robots could actually help the animal welfare cause - by making it harder for us to ignore the contradictions between our values and our actions.

Empathy for machines

Historians and sociologists have long used non-human animals to think about what it means to be human. In a similar vein, research in robotics is starting to teach us a lot about ourselves. In the burgeoning field of human-robot interaction, researchers are finding that people will readily and willingly apply social conventions to their interactions with robots, from talking to them to trying to help them. And as more devices that can sense, think, make autonomous decisions and learn enter into shared spaces, we're seeing people develop emotional relationships to them.

When an engineer from robotics company Boston Dynamics kicked a biologically inspired, four-legged robot in a 2015 video, so many people had a visceral negative reaction to the footage that it went viral. But we see emotion even with more primitive robots: over 80 percent of Roomba robot vacuum cleaners have names. Some people will even send them in for repair and turn down the offer of a brand-new replacement, requesting that 'Meryl Sweep' be returned. Soldiers have reportedly risked their lives to save the robots they work with. And Buddhist temples in Japan hold funerals for robot pets when they've broken beyond repair.

Our tendency to project humanlike traits, behaviours and emotions onto animals, known as anthropomorphism, translates remarkably well to robots. In fact, it's incredibly difficult for us not to anthropomorphise robots, in part because we're biologically hardwired to project agency onto autonomous physical motion. Studies show that people will readily treat anything that moves like it's alive, from simple geometric shapes on a screen to a remote-controlled waving stick.

While empathising with Pleo made me feel silly, I was intrigued. I understood that empathy - how we feel and react towards another individual's emotional state - is a key component of our social interactions and psychology. A few years later, my friend Hannes Gassert and I tried to get workshop participants to destroy some of the robot baby dinosaurs, to the participants' great hesitation and dismay. Since then, through experimental work at MIT connecting people's behaviour to their tendencies for empathy, my colleagues and I have contributed to a large and growing body of research showing that people empathise with the cues that lifelike machines give them, even when they know the cues aren't real. In other words, even though robots can't feel, we feel for them.

If people are capable of treating certain robots more like creatures than devices, what does that say about our relationship to non-human animals? While it may seem absurd, or even insulting, to compare lifeless machines to living creatures, I think there's something to be learned from the striking parallels in how western society has handled them both.

Tools, products, companions

That people want the very same robot vacuum cleaner back from the repair shop suggests a future in which we view certain robots as individuals - with their own personalities and quirks. And as we begin to subconsciously sort robots into social and non-social categories, it's worth noting that we have a long history of treating some animals as tools and products and others as companions, putting them in a variety of different, often morally conflicting, roles.

For example, in his book Some we love, some we hate, some we eat, Hal Herzog illustrates our paradoxical relationship to our fellow creatures. He explains how we've separated non-human animals into plough haulers, spicy BBQ chicken wings and pampered-princess pets, with little regard for inherent traits or any consistent animal rights philosophy.

The convoluted history of western animal protection laws likewise reflects an approach to animal suffering that is rife with inconsistencies. From protesting fur coats while eating burgers to saving the whales once we found out they could sing, we have long mobilised to protect other creatures selectively, passing vivisection laws that shelter only specific mammals and banning the eating of some animals but not others.

Based on everything I've learned about the history of animal rights in western society, I can only agree with Herzog's assessment that '[People] don't care whether the correct route to animal liberation runs through Bentham or Kant'. Theories of ethics haven't catalysed our animal welfare movements. Instead, it's social, emotional and cultural relationships that cause some cow-consuming communities to baulk at the mere thought of eating horses, while other societies don't blink an eye at eating either. And it's often our empathy for relatable, humanlike characteristics that makes us want to protect certain animals over others. Sometimes that's an octopus being intelligent enough to escape from its tank to get food; other times it's the size of a puppy's baby-like head. In fact, we've 'designed' some of our pets to look and act in ways that are more appealing to us, following some of the same principles that we will use to design robots.

Our crisis of values

In her essay The robot dog fetches for whom?, Judith Donath describes a boy playing fetch with a robot dog. She writes:

"The entire point of playing fetch with a dog is that it is something you do for the dog's sake: it's the dog that really likes playing fetch; you like it because he likes it. If he doesn't like it because he's a robot and, while he acts as if he's enjoying it, in fact, he does not actually enjoy playing fetch, or anything else: he's not sentient, he's a machine and does not experience emotions  - then why play fetch with a robot dog?"

And yet, people already do play fetch with robot dogs, and happily. Much of the research in human-robot interaction offers an uncomfortable answer to Donath's question: the dog's intrinsic joy is not the only reason we play fetch. It may not even matter very much to us whether the dog can feel. Instead, we may simply be drawn to anthropomorphic social responses, like a wagging tail and 'smiling' face. That's reason enough to keep throwing the ball.

This is pretty inconsistent with what many of us take our values to be. People like me want to believe that we care about others' inner worlds and inherent dignity. We want to believe that we are kind and that our kindness comes from empathy that isn't primarily about ourselves. Surely, we say, we care about other creatures and their experiences, not only about what makes us feel good. But we kill an estimated one trillion fish every year, and animal rights activists have had very little success convincing the general public to care about the inhumane ways in which we slaughter our non-mammalian friends from the sea. Do they feel pain? Do we want to know?

That said, we may be at an important point in time. When people prefer - and they will - an unfeeling mechanical device like a pet robot dinosaur over a living, breathing, slimy slug, the juxtaposition makes it harder for any of us to ignore how we instinctively treat non-humans - whether they are alive or not. Thus far, we have been able to sustain the contradictions between our beliefs about animals and our actions towards them by not thinking too hard about them and justifying them only tenuously. But the research in human-robot interaction shines a harsh, insightful light on our motivations and behaviour. Confronted with this information, I wonder: could we choose to stop treating animals and machines so similarly?

Anthropodenial

When people take to Twitter to mourn decommissioned Mars rovers and the media reports on a global outpouring of sympathy and support for a vandalised hitchhiking robot, it's hard not to feel that our emotions are misplaced. Some argue that the path towards consistency is to simply discourage anthropomorphising these machines and start treating robots like the tools they are. But I don't think that the right way forward is to stop empathising with robots. Our anthropomorphism and emotional bonds are innate to us and they may actually serve a purpose.

In fact, animal research has grappled with a very similar issue. For a long time, the science community vehemently discouraged anthropomorphising animals. Anthropomorphism was so controversial that it was called naive, sloppy, even dangerous, an 'incurable disease'. But the contemporary animal science community has come to a different point of view. For example, Dutch primatologist Frans de Waal has argued that rejecting anthropomorphism actually hinders animal science, coining the term 'anthropodenial' for our wilful blindness to the humanlike characteristics of other animals. Even though anthropomorphism is biased, some scientists are convinced that dismissing it outright is a mistake.

I am also beginning to embrace this position. Today, when people tell me that they feel for their robot vacuum, I don't think it's silly at all. When I see a child hug a robot or a soldier risk life and limb to save a machine on the battlefield, or a group of workshop participants refuse to destroy a baby dinosaur robot, I see people whose first instinct is to be kind. In fact, some of our research in human-robot interaction suggests exactly this: that less empathic people don't care very much about anyone or anything, while empathic people are more likely to be kind to robots and animals and humans alike.

A 2019 study by Yon Soo Park and Benjamin Valentino showed that positive views on animal rights were associated with positive attitudes towards improving welfare for the poor, improving conditions of African Americans, immigration, universal healthcare and LGBTQ+ rights. Americans in favour of government health assistance were over 80 percent more likely to support animal rights than those who opposed it, even after the researchers controlled for political ideology. Our empathy may be complicated, self-serving and sometimes misguided, but I'm not convinced that it's a bad thing.

Contemporary philosopher and animal rights proponent Martha Nussbaum argues in her book Upheavals of thought: The intelligence of emotions that emotions are not actually 'blind forces that have no selectivity or intelligence about them'. They are a valuable part of our thinking, she says, because they are able to teach us and help us evaluate what is important. Perhaps the best approach we can take towards robots is the same approach that some animal researchers have suggested we take with anthropomorphism in the natural world: to accept our instinctive tendency and let it guide us, but with enough awareness and intentionality to apply it appropriately and, most importantly, to ask what we can learn from it.

To me, one of the biggest lessons we can learn from embracing our anthropomorphism towards robots is, counter-intuitively, that we should be treating animals much better.

A different path forward

The robots are coming. The International Federation of Robotics expects companies to sell over 68 million robots for professional, personal and domestic services in 2022. We can already see that we relate to robots as a new breed of thing - somewhere between object and being. And as these autonomously roaming, crawling and flying machines are increasingly woven into the fabric of our societies, they will teach us more about who we are. In this new era of human-robot interaction, our technology will increasingly hold up a mirror, confronting us with our human nature.

My hope is that the political, moral and emotional choices we will face with robots prompt a reckoning with our current (mis)treatment of animals. We know that animals are alive and can feel, while a robot can no more suffer than a kitchen blender can. And even though the western world has been very selective about accepting scientific evidence that animals feel pain and have inner worlds and experiences, it's the main difference most of us would draw between animals and machines when asked. This presents a unique opportunity for awareness.

Robots expose the part of ourselves that has distracted us from recognising the inherent worth of non-human animals. Only by understanding and acknowledging this can we chart a path towards real change. We now have an opportunity to look into the mirror and guide our actions towards greater consistency with what we want to value.

Perhaps this includes being kind to others, whether they are made of flesh or metal. And (not but) most people would agree that robots and animals are not the same, and should not be treated as such - that we should care about the fact that non-human animals live and breathe and feel. If so, then now is the time to insist on treatment of animals that respects the lived experiences so many of us believe should matter.

Kate Darling

Dr Kate Darling is a leading expert in technology policy and ethics at MIT, where she studies human-robot interaction. She is the author of The new breed: What our history with animals reveals about our future with robots.