Autonomous cars: who should they save in a crash?

The public is being asked to judge how artificial intelligence should make life-or-death decisions in a new study from a US university

The Massachusetts Institute of Technology (MIT) is surveying the public on which decisions autonomous cars should make in unavoidable fatal situations.

MIT’s ‘Moral Machine’ poses numerous scenarios to the public in which an autonomous vehicle would need to decide whom to kill. Respondents are given two choices and, in each, lives must be lost – there is no non-fatal option. To make each scenario and its victims clear, a written explanation is provided alongside a graphic illustration.

Each individual’s answers are then compared with the wider pattern of responses to gauge where they sit on a series of scales, each reflecting a different circumstance within the scenarios.

For example, the results show whether the individual favours young people over the elderly, those upholding the law over those flouting it (for instance, a pedestrian who steps into the road when the crossing light says not to), or passengers in the autonomous vehicle over other road users.
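In practice, each scenario records a binary choice, and those choices can be aggregated into a preference score per dimension. The sketch below is purely illustrative – the dimension names and the simple averaging scheme are assumptions, not MIT’s actual Moral Machine methodology – but it shows how answers could be turned into the kind of scales described above.

```python
# Minimal, hypothetical sketch of turning survey answers into preference scales.
# Dimension names and the averaging scheme are illustrative assumptions,
# not MIT's actual Moral Machine methodology.

from collections import defaultdict

# Each answer records which trade-off the scenario tests and whether the
# respondent chose the first option (e.g. "save the young", "save the passengers").
answers = [
    {"dimension": "young_vs_elderly", "chose_first": True},
    {"dimension": "young_vs_elderly", "chose_first": True},
    {"dimension": "passengers_vs_pedestrians", "chose_first": False},
    {"dimension": "lawful_vs_jaywalking", "chose_first": True},
]

def preference_scores(answers):
    """Return, per dimension, the fraction of scenarios in which the respondent
    picked the first option (0.5 would mean no preference either way)."""
    totals = defaultdict(lambda: [0, 0])  # dimension -> [first_choice_count, total]
    for a in answers:
        totals[a["dimension"]][0] += int(a["chose_first"])
        totals[a["dimension"]][1] += 1
    return {dim: first / count for dim, (first, count) in totals.items()}

print(preference_scores(answers))
# e.g. {'young_vs_elderly': 1.0, 'passengers_vs_pedestrians': 0.0, 'lawful_vs_jaywalking': 1.0}
```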

Patterns have already appeared in users’ answers, including strong preferences for saving the lives of younger people and of people with ‘higher social value’. In the given examples, a doctor represents someone with high social value and a bank robber someone with low social value.

Another strong preference, unsurprisingly, was for saving human lives over the lives of pets. Users were split almost 50/50 between saving passengers’ lives and those of other potential victims, and similarly split on protecting physically fit people over overweight people.

Sahar Danesh, IET principal policy advisor for transport, said: "The machine will always make the decision it has been programmed to make. It won't have started developing ideas without a database to tap into; with this database decisions can then be made by the machine. With so many people's lives at stake, what should the priority be? It's good to test it in this virtual context, then bring in other industries. 

"The technology hasn't got as far as decision-making software yet, and the regulation surrounding it is not yet in place, which is why these virtual platforms are so important. There has to be a platform and a consultation process before the programming is completed; bring in the insurance industry, legal experts, health professionals and ethics professors to clarify the debate. The more people we can bring together to help make these decisions, the better. Then the algorithms can be made.

"Machine errors are always judged more harshly than human errors, so this is a good opportunity to develop the moral criteria that would go into developing autonomous cars. It's good to gather intelligence to teach a machine ethics; human beings make decisions based on instinct, but a machine doesn't know how to do that. We need to gather this data to design programs to help it make the decisions that a human would make - or ideally should make."

The effectiveness of autonomous technology was called into question earlier this year, after a fatal collision occurred while Tesla's Autopilot driver assistance software was engaged. The UK government has also held a public consultation on autonomous cars and their future on Britain's roads.

The UK is to host the first autonomous vehicle track day, as autonomous vehicles become more prevalent on road and track.

Comments
tksh-car 4 October 2016

Question unanswered: who is to blame?

The big question, regardless of the decision made by the autonomous car: who is to blame in the case of a fatality? The owner of the car, the manufacturer, the government or the software designer? Today, the blame is on the owner, but why should I be held responsible for the software of my car, which chooses to kill an innocent person instead of me? On the other hand, why should the developers or anyone else be? I don't see a realistic solution right now; instead, everyone involved is going to try to blame someone else - and that means a significantly bigger risk to the potential victims of the self-deciding car.

rayperea 16 September 2016

Always save the passengers

There's been a lot of controversy over this issue, but there's something nobody has considered yet... (at least not that I've seen anywhere)

If we program self-driving cars to save the pedestrian(s), then what's to stop malicious pedestrians from purposely walking out in front of a self-driving car in order to murder the passengers and get away with it?

All they have to say is.... I didn't see the car coming. And there would be no way to prove otherwise.

I know some may think it's unethical, but the truth is... people are unpredictable and malicious, and unless we come up with a way for the car to read the thoughts of the pedestrian it's about to hit, the car should always choose to protect those who we know are faultless... the passengers.

If we tell self-driving cars to protect pedestrians, we are just giving criminals an easy way to commit murder and get away with it.

lamcote 10 August 2016

Lake District!

Thinking about it, there is no way in the world I would ever go anywhere like the Lake District in a car designed by winniethewoo. The risks would be massive. Many of the roads are next to severe drops, rivers, lakes etc, and there are always lots of people walking along the roads. One slip-up by some pedestrians crossing the road and my car will automatically drive me to my doom, because of their mistake, without any way to stop it! Thanks.