
Robots are already with us - how we treat them tells us a lot about ourselves. Here's why

Just be kind, guys.

Published: July 15, 2023 at 3:00 pm

Years ago, someone asked me for advice on a workplace situation. His company was using an internal chatbot to help new employees, and he had repeatedly noticed one person being disproportionately verbally abusive to the chatbot.

“What do you think?” he asked me. “Is this an HR issue?” The truth is, we don’t know. But even though machines can’t feel, it’s worth thinking about what kind of human behaviour toward them is OK.

Over the next decade, our relationships with our devices will become a lot more interesting. Advanced chatbots and robot companions are on the rise, and both are extremely well suited to tapping into our social nature, making us behave as though we’re interacting with something… alive.

This raises the question: what does it mean to be verbally or physically violent toward an artificial agent?

People have already started to wonder. For example, during the mass adoption of virtual voice assistants, parents expressed concern that the little speakers in their living rooms were teaching their kids to be rude.

Major companies like Amazon and Google responded by releasing opt-in features that encouraged the use of ‘please’ and ‘thank you’ to prevent children from barking commands at the devices.

Of course, it’s not the machines we’re hurting, so the main concern is that 'mistreating' an artificial agent will lead to bad behaviour in other contexts.

In 2015, my colleagues and I took a small step toward investigating this idea by studying the connection between people’s empathic concern and how they were willing to treat a robot. Plenty of research also shows that people who witness violent behaviour toward a robot feel distress.

But even if there’s a link between people’s tendency toward empathy and how they feel about a robot, that doesn’t answer the question of whether beating up robots makes people more violent.

Society has asked similar questions about porn and video games, with some inconclusive results. In many cases, people seem to do fine at compartmentalising. Just because I play Grand Theft Auto doesn’t mean I try to run people over in the parking lot at work.

Perhaps video games are mostly harmless, but does a robot with a body change the equation? We’re physical creatures and studies show that we behave differently toward embodied robots than characters on a screen, in part because we’re biologically hardwired to react to physical motion.

People will readily treat anything that moves as though it’s alive, even a randomly moving stick in a research study. As robot design improves, the line between alive and lifelike may continue to blur in our subconscious.

If so, maybe it would be great for people to take out their aggression and frustration on human- and animal-like robots that mimic pain, writhing, and screaming. After all, they aren’t harming a living being, so it might be a healthy outlet for violent behaviour.

On the other hand, it could be bad if it desensitises people to violence in other contexts. Would a child who grows up kicking a robot dog find it easier to kick a real dog?

Unfortunately, desensitisation remains a difficult thing to study. It’s hard to connect long-term behaviour changes to an exact cause. Some limited research has tried to get at the question with regard to robots and language-capable agents, but on the whole, we don’t have a very solid answer.

The idea that being cruel to a robot could make us crueler is akin to Kant’s philosophy on animal rights, which held that cruelty to animals is wrong not because it harms the animals themselves, but because it erodes our humanity toward other people. And it’s only a good argument if we have enough evidence to back it up.

After all, if being cruel to robots doesn’t actually turn people into sociopaths, there’s less reason for concern. But maybe Kantian philosophy isn’t the only way to think about it.

Philosopher Shannon Vallor, in her book Technology and the Virtues, offers a slightly different approach: “From the perspective of virtue ethics, people who spend most of their free time […] torturing robots […] are not living well or flourishing, because they are not by this activity cultivating any of the character traits, skills, or motivations that constitute human excellence and flourishing.”

Instead, she argues we should encourage activities that help people live out character traits we see as good and admirable.

For now, it seems pretty reasonable to keep robot abuse away from impressionable children, at least until we have more research on the effects. But even for the rest of us, maybe it’s not cool behaviour to treat an artificial agent poorly.

Yes, it’s much better than mistreating a living, breathing being, but why do it at all? As Vallor argues, it might be worth practising kindness instead.
