13. Driverless Cars Will Decide Who Survives in a Crash—Why I Hate The Trolley Problem

For almost 100 years, scholars have been debating the Trolley Problem. The scenario is simple. In its original form, a trolley or train is speeding down a set of railroad tracks, out of control and unable to brake. Ahead of the trolley is a Y-intersection. On one fork, five innocent people are tied to the tracks. On the other fork, a single child is tied to the tracks. At the switch, a man can allow the trolley to continue on its course, undoubtedly killing the five innocent people, or he can divert the trolley and kill only the one child... a child that happens to be his own flesh and blood. Should the man sacrifice his own child to save the lives of five?

This scenario, or others like it, has been used in philosophy and ethics classes for decades. More recently, institutions like MIT have updated the problem with driverless car scenarios.



The MIT Moral Machine lets users make decisions as if they were an autonomous car. Over the course of many binary, one-or-the-other scenarios, users choose whether they would rather kill a grandma or a dog, a man or a group of cats, a bank robber or a doctor, and so on.
I absolutely hate the trolley problem and all its incarnations. 
The trolley problem was originally designed to provoke ethical discussions in human decision making. It should be left in ethics classes and left out of engineering discussions.

In its original form, there is nothing natural or ethical about the scenario of an out-of-control trolley barreling down the track toward five people tied up. It would be impossible to predict how anyone would act in that situation. The real focus should not be on the decision to divert the train; it should be on the sadistic bastard who cut the train's brakes, tied six people to the track, and forced a person to choose between the life and death of his own child. That is a seriously screwed-up criminal. Let's talk about criminal ethics instead of autonomous ethics.

Just as it is impossible for the trolley problem to ever happen without criminal intent, it is also impossible for a car to be traveling down the street and need to decide whether it will kill a group of five old ladies or a group of five bank robbers.

I hate the trolley problem for three main reasons.

#1. Attempting to apply the trolley problem to teach or evaluate autonomous cars demonstrates a fundamental lack of understanding of machine learning and artificial intelligence. These cars will carry many different sensors, all looking for obstacles. They will never be programmed to rank the value of old ladies against children. To a car, both are simply obstacles, sensed by a combination of cameras, radar, lidar, and other sensors. That data feeds into a system that uses probabilistic models to select the best course of action. Such a system will almost never face the simple binary, either-or, life-or-death decision postulated in the trolley problem (see the sketch after this list).

#2. The chances of a car getting into a situation where it needs to decide in a split second which of two obstacles it will hit are nil without criminal intent.

#3. In almost all scenarios, the best, most ethical, simplest, and safest course of action for a driverless car is to brake. It is that simple. If a car cannot navigate around an obstacle, it should brake. These cars will have reaction times and sensors that exceed human capability (I saw this firsthand in Las Vegas). They will sense obstacles long before a human driver could and will almost always be able to brake before hitting one, as the sketch below shows.
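
To make #1 and #3 concrete, here is a minimal Python sketch. Everything in it is hypothetical: the Obstacle record, the choose_action function, and the numeric thresholds are invented for illustration, and no real autonomous-vehicle stack is anywhere near this simple. The structural point stands, though: the planner's input carries no notion of who an obstacle is, and every decision branch bottoms out in braking.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """A fused detection from cameras/radar/lidar (hypothetical schema).
    Note what is NOT here: no 'grandma', no 'dog', no 'bank robber' --
    just geometry and detection confidence."""
    distance_m: float   # range to the obstacle along our path, meters
    lateral_m: float    # lateral offset from our lane center, meters
    confidence: float   # 0..1 confidence from sensor fusion

def choose_action(obstacles: list[Obstacle], speed_mps: float,
                  lane_half_width_m: float = 1.5,
                  brake_decel_mps2: float = 8.0) -> str:
    """Pick a maneuver. The default, and by far the most common outcome,
    is simply to brake (illustrative thresholds, not real tuning)."""
    # Distance needed to stop at maximum braking: v^2 / (2a).
    stopping_dist = speed_mps ** 2 / (2 * brake_decel_mps2)

    # Keep only confident detections that are actually in our path.
    in_path = [o for o in obstacles
               if o.confidence > 0.5 and abs(o.lateral_m) < lane_half_width_m]
    if not in_path:
        return "continue"

    nearest = min(in_path, key=lambda o: o.distance_m)
    if nearest.distance_m > stopping_dist:
        return "brake"  # room to stop well short of the obstacle
    # Even in the worst case the answer is still to brake: scrubbing off
    # speed reduces harm to ANY obstacle, whoever or whatever it is.
    return "emergency_brake"

# At 15 m/s (~34 mph) with an obstacle 30 m ahead, stopping distance is
# about 14 m, so the planner simply brakes:
print(choose_action([Obstacle(30.0, 0.2, 0.9)], speed_mps=15.0))  # brake
```

Notice that nothing in this function could even express "hit the dog instead of the grandma." The only lever it has is how hard to brake, which is exactly the point of #3.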

Driverless Radio, by Team Solano