
Human drivers should not be responsible for accidents caused by autonomous vehicles

New UK legislation will put the blame where it should be – on the vehicles' manufacturers.

Published: September 8, 2022 at 5:00 am

In August 2022, the UK government announced a £100m plan to speed up the development and deployment of self-driving vehicles. The plan also calls for new safety regulation, including a bold objective to hold car manufacturers accountable. This would mean that when a vehicle is self-driving, the person behind the wheel will not be responsible for any driving errors. This rule stands in contrast to the US, where courts have faulted human ‘backup drivers’ for robot-caused accidents. The UK has the right idea – as long as companies don’t weasel their way out.

Fully self-driving cars have been on the horizon for quite some time, but are taking much longer to hit the roads than promised. Despite pouring massive resources into research and development, car companies have struggled to account for the sheer number of unexpected situations a vehicle can encounter. Freakish weather is one thing to contend with, but in July 2021 there were news stories of a self-driving car mistaking the sunset for a traffic light, and of another driving straight into a parked $2m aircraft. So far, the large-scale rollout of automated vehicles the UK is hoping for has remained elusive.

Cars are being outfitted with increasingly advanced driver assistance features – like automated steering, accelerating, and braking. These assisted driving systems mean that, until we have reliable full automation, we’re going to be dealing with human-robot teams behind the wheel. It also means that when mistakes happen, we need to be particularly careful about who to hold responsible and why.

Robots and humans have different, often complementary, skill sets. When it comes to driving, robots excel at predictable tasks and can react faster and more precisely than a human. People, on the other hand, are great at dealing with unexpected situations, like an erratic traffic cop or, as we saw in August, a horse-drawn carriage on the highway. The ideal – at least in theory – would be to combine the skill sets of humans and robots to create a better, safer driving experience. But in practice, creating an effective human-robot team in the driver’s seat is extremely challenging.

One of the cases I teach in class is a 2018 accident in Arizona, where a self-driving Uber struck a woman who was wheeling a bicycle across the road. The car’s automated system couldn’t decide whether she was a pedestrian, a bicycle, or a vehicle, and failed to correctly predict her path. The backup driver, who didn’t react in time to stop the car, was charged with negligent homicide. An investigation by the National Transportation Safety Board identified a number of reasons the hand-off of control from vehicle to driver didn’t work, but Uber was not held responsible.

A contributing factor may be what anthropologist Dr Madeleine Clare Elish of the Oxford Internet Institute calls the “moral crumple zone”: in a human-robot team, the human tends to absorb the blame, much as a crumple zone absorbs the force of a crash. In class, I present the Uber case as a hypothetical. I include hints about human attention spans, and I don’t reveal what the driver was doing (watching Netflix on her phone). Even with the case skewed in the driver’s favour, about half of the students choose to fault her instead of the car company. According to Elish, this is exactly how people misattribute blame in human-robot teams.

We need to resist this bias, because the research on automation complacency is clear: when a car is doing most of the driving, it’s too much to ask of the person in the driver’s seat to stay vigilant. For this reason, the UK has the right idea. Letting the driver off the hook will also create strong incentives for companies to get safety right in advance, instead of offloading some of the cost onto the public.

The catch is that companies may try to weasel their way out. One route is the disclaimer. Tesla UK, for example, explicitly states that its Autopilot features “do not make the vehicle autonomous” and that “full self-driving capability [is] intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment.”

If a disclaimer doesn’t shield them, another way car companies might skirt responsibility is by offering systems that fall just short of the definition of ‘self-driving’. That would mean going back to more hand-offs between car and driver – and more drivers blamed when something goes wrong.

With the UK investing so much capital in self-driving, we may ultimately see some new and improved technology, and a rollout of robot vehicles on predictable routes. Despite the fairly slow pace of development and deployment, it’s an exciting prospect.

A study carried out at Stanford Law School in 2013 found that, with traditional cars, more than 90 per cent of road accidents are due to human error, so one thing is clear: in the future, streets filled with autonomous vehicles will be much safer. The only question is how we handle the long and winding road to get there.
