The car saw her. It decided not to stop. 🚗💥 We investigate the fatal Uber ATG crash in Tempe, Arizona. We break down the NTSB report on the death of Elaine Herzberg, revealing that the system detected her about six seconds before impact yet never braked, because engineers had deliberately disabled emergency braking to prevent "jerky rides."
1. The "Action Suppression": It wasn't blindness; it was a choice. We analyze the logs. The Lidar saw Herzberg, classifying her as an "unknown object," then a "vehicle," then a "bicycle." We explain the "Action Suppression" logic, where Uber tuned the software to ignore "false positives" (like steam or plastic bags) to make the ride smoother, effectively creating a car that refused to brake.
2. The Moral Crumple Zone: Blaming the human for the machine's fault. We expose the scapegoating. We discuss the "Moral Crumple Zone," Madeleine Clare Elish's term for a system designed so that a human is nominally "in charge" just to absorb the legal liability when the AI fails. We analyze why the safety driver, Rafaela Vasquez, became the fall guy despite the car's factory safety features (Volvo's AEB) being deliberately turned off.
3. Safety Theater: Looking good vs. being safe. We explore the culture. We discuss how Uber cut the second safety driver from the car to save money and "move fast," prioritizing the appearance of high-tech progress over the reality of road safety. We ask: are we testing these cars, or are they testing us?
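For listeners who want to see the failure mode in code: below is a minimal Python sketch of the decision logic as the NTSB report describes it. This is not Uber's software; the class names, the two-point history threshold, and the exact structure are illustrative assumptions. Only two facts are taken from the report: each reclassification erased the object's tracking history, and planned emergency braking was suppressed for about one second.

```python
# Toy model of the braking logic described in the NTSB report. A hypothetical
# sketch, not Uber's code: every name, type, and threshold here is an
# illustrative assumption.
from dataclasses import dataclass, field

ACTION_SUPPRESSION_SECONDS = 1.0  # NTSB: planned braking was suppressed for ~1 s


@dataclass
class TrackedObject:
    label: str                                   # "unknown", "vehicle", "bicycle"
    history: list = field(default_factory=list)  # past observed positions

    def reclassify(self, new_label: str) -> None:
        # Per the NTSB report, each change of classification discarded the
        # object's tracking history, so no trajectory could be predicted.
        if new_label != self.label:
            self.label = new_label
            self.history.clear()


def should_emergency_brake(obj: TrackedObject, seconds_since_hazard: float) -> bool:
    # No history means no predicted path, hence no "collision imminent" call.
    if len(obj.history) < 2:
        return False
    # Even with a confirmed hazard, braking waits out the suppression window
    # meant to filter false positives such as steam or plastic bags.
    return seconds_since_hazard >= ACTION_SUPPRESSION_SECONDS


# Each reclassification restarts the prediction from scratch:
pedestrian = TrackedObject("unknown")
pedestrian.history = [(0.0, 0.0), (0.5, 0.2)]
pedestrian.reclassify("vehicle")                # history wiped
print(should_emergency_brake(pedestrian, 0.4))  # False: no history, still suppressed
```

The point of the sketch: neither rule is "blindness." Both are deliberate trade-offs against false alarms, which is exactly the choice this episode examines.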
The full list of sources used to create this episode can be found on our Patreon at https://www.patreon.com/c/Morgrain