In a scene right out of 'The Good Place', researchers have asked millions of people across the world what they think a driverless car should do in the face of an unavoidable accident.
Published today in Nature, the study used data from MIT's Moral Machine, which has gathered over 40 million decisions in ten languages from across 233 countries and territories.
Each scenario required making choices between various combinations of saving passengers or pedestrians, and the researchers identified a number of shared moral preferences. These included sparing the greatest number of lives, prioritising young people and valuing humans over other animals.
The SMC asked New Zealand experts to comment on the study.
Associate Professor Alex Sims, Department of Commercial Law, University of Auckland, comments:
"As the article argues, the question is not if driverless cars will start being driven on our roads, but when. Autonomous cars raise the issue of the trolley problem, which was once just a thought experiment in ethics. You see a runaway trolley (in New Zealand we would describe it as a train carriage) moving towards five people lying on train tracks. Next to you is a lever that controls a switch. If you pull the lever, the trolley will be diverted onto another set of tracks, saving the five people. But there is one person lying on the other set of tracks, and pulling the lever will kill that person. Which one is ethically correct?
"Autonomous cars raise the stakes. If a crash is inevitable - for example, an autonomous car's brakes fail, and the car has to choose between running over and killing three elderly people or swerving into a brick wall and killing the car’s occupants - what should the car do? The authors quite rightly state that we, as a society, cannot leave the ethical principles to either engineers or ethicists.
"We need rules. It would be unconscionable for people to drive cars that were programmed to ensure that the occupant’s safety was put ahead of everyone else’s. For example, a car cannot be programmed to run three people over to avoid the car’s sole occupant crashing into a parked car."
No conflict of interest declared. Dr Sims' full comments are available as a blog post on sciblogs.co.nz.
Professor Hossein Sarrafzadeh, Adjunct Professor, High Tech Research, Unitec, comments:
"While technical aspects of driverless cars have seen great advancement, the social aspects have not been studied well. Social scientists will certainly focus on the ethics of technology, including driverless cars, as we get closer to wider use of this technology in the next few years. Cultural aspects of driverless cars and other artificially intelligent systems, like emotion recognition systems, have not been studied sufficiently either, and there is a great need for research in these areas globally and in New Zealand.
"One aspect of driverless cars that is not taken into account in various studies of the social dimensions of this technology is the fact that future roads may not be the same roads we are using today. Even if we use similar roads, they will be intelligent roads heavily equipped with sensors. They will certainly be much safer, although these ethical dilemmas will remain if the same roads are used. Future roads, I believe, will be different to what we have now. There may be no humans walking across the roads that autonomous vehicles travel on."
No conflict of interest declared.
Associate Professor Colin Gavaghan, New Zealand Law Foundation Chair in Law & Emerging Technologies, Faculty of Law, University of Otago, comments:
"These sorts of ‘trolley problems’ are philosophically fascinating, but until now, they’ve rarely been much of a concern for law. Most drivers will never have to face such a stark dilemma, and those who do will not have time to think through consequentialist and deontological ethics before swerving or braking! The law tends to be pretty forgiving of people who respond instinctively to sudden emergencies. The possibility of programming ethics into a driverless car, though, takes this to another level.
"That being so, which ethics should we programme? And how much should that be dictated by majority views? Some of the preferences expressed in this research would be hard to square with our approaches to discrimination and equality – favouring lives on the basis of sex or income, for instance, really wouldn’t pass muster here.
"Age is also a protected category, but the preference for saving young rather than old lives seems to be both fairly strong and almost universal. So should driverless ethics reflect this?
"Even that preference seems likely to raise some hard questions. At what point does a ‘child’ cross the threshold to having a less ‘valuable’ life? 16? 18? Is an infant’s life more precious than a toddler’s? An 8-year-old's? Expressed like that, the prospect of building a preference for ‘young’ lives looks pretty challenging.
"One preference that might be easier to understand and to accommodate is for the car to save as many lives as possible. Sometimes, that might mean ploughing ahead into the logging truck rather than swerving into the group of cyclists. Most of us might recognise that as the ‘right’ thing to do, but would we buy a car that sacrificed our lives – or the lives of our loved ones – for the good of the many?
"Which brings us to the role of law in all this. Maybe it just shouldn’t be legal to buy a car that would discriminate on protected grounds, or that would sacrifice other people to preserve our own safety. But in that case, how many people would buy a driverless car at all?
"What if we left it up to individual choice? Could driving a ‘selfless’ car come to be seen as an indication of virtue, like driving an electric car now? Would drivers of ‘selfish’ cars be marking themselves out in the opposite direction?
"Maybe the biggest issue is this: over a million people die on the roads every year. Hundreds die in New Zealand alone. Driverless cars have the potential to reduce this dramatically. It’s important to think about these rare ‘dilemma’ cases, but getting too caught up with them might see us lose sight of the real, everyday safety gains that this technology can offer."
No conflict of interest.
ends