Autonomous driving: what’s the role of the human?
2 July 2015

Truly autonomous cars are becoming a more tangible reality. But will humans ever really trust a machine with their safety?

The race is well and truly on between governments to secure a stake in the future of autonomous driving. The UK Government has pledged £100m to fund research in the hope that autonomous cars and the systems that connect them to the internet will deliver a £51bn annual boost to the economy by 2030.

In February, the Department for Transport published ‘The Pathway to Driverless Cars’, a report that lays out the Government’s plans to facilitate the testing and production of driverless vehicles. The report envisions the development of “vehicles in which the driver can choose to use their travel time in ways that have never previously been possible”. However, research to date has shown that making this vision a reality is not just a matter of developing the right technology and putting it on the road. People are reluctant to trust an autonomous system, for fear that it will go wrong and they will be blamed for it. So what is the role and the responsibility of the human in the autonomous driving system?

Sit back and relax?
The technology is currently at a point where a car can drive itself around a track or quiet road with ease, but it is not yet able to cope with a busy and confusing environment such as a city. Integrating all of a car's systems to work in sync, safely, while monitoring an unpredictable and sometimes chaotic outside environment is a huge challenge. Mike Bell of Jaguar Land Rover argues that "a truly autonomous car would require the development of artificial intelligence". Until such artificial intelligence is available, the human will be part of the system. Just what role the human should play, though, is not entirely clear.

Neville Stanton, fellow of the CIEHF and professor of human factors in transport at the University of Southampton, argued recently in The Conversation that "the hands-off vision of the autopilot for cars is marred by concerns about the situational awareness of the driver, how they would take control in case of an emergency and…the extent to which the human driver will be responsible for the vehicle". The DfT report states that a code of practice is to be developed for operators of driverless vehicles, and that failure to follow the code will be "a clear indicator of negligence" and will "carry considerable weight in any issue of liability" – indicating that, for the Government at least, the human is still the one responsible if anything goes wrong.

One goal is for drivers to be able to read a book or check emails while the car drives them wherever they want to go. But if they are also responsible for the vehicle, how much do they need to monitor it? The research evidence, Stanton states, shows that people quickly become distracted from a monotonous task like monitoring an autonomous system. Simulator studies have shown that drivers of automated vehicles do not recover from emergencies and fail to intervene when automatic systems go wrong. Research in aviation shows that automation confusion – a situation where a human operator can't tell when a system has gone wrong, or can't understand the information being relayed by an automatic system – is a significant problem, leading operators to make extremely risky and sometimes fatal choices when faced with conflicting or confusing information. Stanton's work indicates that to keep the human driver engaged, autonomous vehicles need to give continuous feedback, analogous to a chatty co-driver. In that circumstance, however, the driver is still involved in the driving process and can't really engage in other activities.

Do autonomous cars need to be infallible?
Google has been testing driverless cars since 2009 and has concluded that, since people can't monitor the driving situation and do other activities at the same time, it's best to take the human out of the system entirely. Their thinking is that even though driverless cars aren't infallible, they are more reliable than humans. Chris Urmson, director of Google's self-driving car programme, stated in an article in May that their driverless cars had travelled 1.7 million miles, with the cars doing the driving for nearly a million of those miles, and that in that time the cars had been involved in 11 minor accidents. Not once, Urmson claims, was the self-driving car the cause of an accident. In spite of such a strong track record, Urmson's view is that accidents will happen even with super-reliable autonomous cars – and indeed in June, a Google driverless car did have a near miss with a car operated by Delphi Automotive, Google's competitor in driverless research. Urmson believes that because autonomous cars don't get distracted, don't misjudge distance and don't drive while tired, overall they will be safer than human drivers, even if things do go wrong now and again.

He may have a point. On average, 1,888 people a year were killed on UK roads between 2009 and 2013. Once non-fatal accidents are taken into account, it's safe to say that humans aren't great drivers. Even while we are totally responsible, engaged and hands on the wheel, we're not very good at preventing accidents. Logically, then, fully autonomous cars that aren't 100% reliable, but that do dramatically reduce accident rates, would seem worth developing. However, car manufacturers believe that for an autonomous car to be accepted by the public, it doesn't just need to drive like a human, or better than a human – it needs to be infallible. Marcus Rothoff, Volvo's director of autonomous driving, says: "Cars should not crash. That's the only acceptable way of thinking if people are going to get into autonomous cars. It is core: people have to trust the technology and the system."

For the time being, the state of the technology means that humans need to be alert and responsible for an autonomous car at least some of the time. It is the opinion of Google, at least, that the time will come when there will be no human driver at all, simply passengers. However, while the technology may advance to make such a thing possible, whether humans will actually accept it and really trust a car with their safety is a question yet to be answered.

Frances Brown

This article appears in the July 2015 issue of The Ergonomist, the membership magazine of the CIEHF.