Here’s a scenario. You’re sitting in an autonomous car on the way to work. You’re in the driver’s seat, but the car is taking care of everything – steering, braking, accelerating and changing lanes – leaving you free to relax. Suddenly, a child steps into the street. There’s no time for you to regain control, so the car has to make a decision. Initially, it wants to swerve to one side to avoid the child, but that would mean mounting the pavement where other pedestrians are walking.
So there’s a dilemma. Does the car swerve around the child and risk injuring more people, or does it sacrifice the few to save the many and hit the child? It may sound like an unlikely scenario, but it’s exactly the sort of problem we’ll need to overcome before fully autonomous cars become mainstream. With most experts expecting that to happen around 2025, the clock is well and truly ticking.
Let’s say you’ve managed to solve the problem. You’ve found the solution and essentially programmed a sense of morality into an inanimate object. You now need to make sure that the car will take that action 100% of the time, and that means ensuring your software is in every single autonomous car on the planet. The consequences of not doing this are dire – we could end up with one group of manufacturers whose cars would avoid the child, and another group that wouldn’t, and that's before you consider the laws of different countries and territories. I have no doubt legislation would pass very quickly to ensure everyone uses the same software – netting the company that creates it billions of pounds.
The trouble with this approach is two-fold. Firstly, carmakers are notorious for not collaborating on this scale, and asking every manufacturer to adopt the same software would likely be a troublesome affair. Secondly, we’re about to enter a period in which autonomous cars will share the road with traditional models, making uniform decision-making across every vehicle nearly impossible.
Work is already under way to decide how to approach the issue of autonomous morality. In the US, the Massachusetts Institute of Technology (MIT) has launched what it’s calling the ‘Moral Machine’ to see how members of the public would tackle the type of situations self-driving cars might face. You can find the program here, and your decisions could help to influence the actions of autonomous cars in the future.
So how would you solve the scenario above? Answers on a postcard – or, preferably, below, please.