On Sunday, March 18th, at approximately 10pm, an Uber vehicle operating in self-driving mode struck and killed 49-year-old Elaine Herzberg. Initial reports say that Elaine was walking her bicycle across the street when the Uber struck her. At this time, facts are limited, but at a press conference in Tempe, AZ, officials said that Elaine was crossing the street outside of a crosswalk when the Uber struck her while traveling at approximately 40 mph. I suspect that the car's sensors did not identify Elaine or her bicycle as an obstacle or pedestrian.
In this episode, Robert Solano explores this incident and discusses the potential impacts on the development of self-driving car technology.
Phoenix has been a great test environment for companies like Waymo and Uber. The moderate climate makes it ideal for early-stage development. In addition to the low cost of living, the "business friendly and low regulatory environment" provides an attractive alternative to the very expensive and dense Silicon Valley and San Francisco area where many tech companies are headquartered.
Uber had been testing its self-driving fleet in Arizona in addition to Pittsburgh and other areas. In total, the fleet of Uber self-driving vehicles had accumulated approximately 3 million miles at the time of this accident. In other words, the current fatality rate of the Uber fleet is about 1 fatality per 3 million miles driven.
In contrast, the national fatality rate for human drivers is about 1.18 deaths per one hundred million miles driven. Many advocates of self-driving cars claim that they will have better safety records than humans, but this fatality leaves self-driving car technology facing a steep uphill battle to match the human fatality rate.
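To put those two numbers on the same scale, here is a quick back-of-envelope comparison. This is a rough sketch using only the approximate figures quoted above; one fatality over a few million miles is far too small a sample to be a statistically meaningful rate, so it only illustrates the scale of the gap.

```python
# Back-of-envelope comparison using the approximate figures quoted above.
# These are rough, publicly reported numbers, not precise statistics.

uber_fatalities = 1
uber_fleet_miles = 3_000_000         # approximate Uber self-driving miles at the time

human_rate_per_100m_miles = 1.18     # national rate: deaths per 100 million miles driven

# Scale the Uber figure to deaths per 100 million miles for comparison.
uber_rate_per_100m_miles = uber_fatalities / uber_fleet_miles * 100_000_000

print(f"Uber fleet:    ~{uber_rate_per_100m_miles:.1f} deaths per 100 million miles")
print(f"Human drivers: ~{human_rate_per_100m_miles:.2f} deaths per 100 million miles")
print(f"Gap:           ~{uber_rate_per_100m_miles / human_rate_per_100m_miles:.0f}x the human rate")
```

On these assumptions, the Uber fleet's rate works out to roughly 33 deaths per 100 million miles, about 28 times the human rate.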
This accident illustrates three key elements of self-driving car technology and development.
1) There are weaknesses to single-driver fleets.
Waymo CEO John Krafcik boasted that Waymo was trying to develop the most experienced driver ever. He claimed that all of the vehicles in the Waymo fleet were essentially the same driver and shared their lessons learned and experiences with each other. At first, that idea seems great, but there are serious implications if that shared hardware or software has a flaw.
As a result of this accident, the entire fleet of Uber self-driving vehicles has been suspended from further road testing. Since Uber has a small fleet of these vehicles, suspending them is not a big deal. But in the future, if a company has a much larger fleet, then suspending the service could potentially leave customers without a ride-hailing option.
Today, there are tens, if not hundreds, of thousands of Uber cars in operation around the world. Many people depend on them for transportation. There would be a severe economic impact if all of the Uber (or Lyft, or taxi) vehicles suddenly stopped operating simultaneously.
The one-driver model sounds good for machine learning, but it severely reduces the redundancy that is inherently built into our current system, where each human driver is unique and we collectively operate thousands of different car makes and models.
2) This accident will delay the deployment of self-driving technology by 12 to 18 months.
Opponents of self-driving technology will use this incident as a perfect example to push for stricter regulations, testing, and monitoring of self-driving car development. Additionally, companies will become more cautious. The result is that the deployment of self-driving car fleets will be delayed by 12 to 18 months.
3) Company ethics are crucial to self-driving car technology.
Too many academics are focused on the ethics of machine decision making. They talk about topics like