
Tesla just deleted 300,000 lines of code. 🧠 We investigate the radical shift in #FullSelfDriving (FSD) from v11 to v14, where Tesla abandoned hand-coded rules for "End-to-End Neural Networks". The car is no longer programmed; it is learning.
1. The Camera vs. LiDAR Gamble: We break down the controversial "Tesla Vision" strategy. While competitors like Waymo rely on expensive LiDAR laser scanners for precision, Tesla bets entirely on cheap cameras and AI. We discuss the risks of this approach, including "phantom braking" and struggles in low-visibility weather, and ask whether cameras alone can ever truly be safer than 99% of human drivers.
2. The "End-to-End" Revolution: We explain what "End-to-End" actually means. Previously, engineers wrote manual C++ rules for every scenario ("if red light, stop"). Now, the AI takes raw video in and outputs steering commands directly, mimicking human behavior learned from millions of hours of fleet video. The car isn't following a checklist; it's developing an "intuition."
3. The Black Box Risk: This shift brings a new danger. Because the behavior is learned, not coded, engineers can't always explain why the car made a specific decision. We analyze the "Black Box" problem of autonomous driving: how do you debug a system that drives based on "vibes" and pattern matching rather than verifiable logic?
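To make the contrast in point 2 concrete, here is a minimal C++ sketch of the two styles. It is purely illustrative: Tesla's actual code is not public, and every name here (plannedSpeed, PolicyNet, Controls) is invented for this example. It only shows the shape of the change, from rules you can read to a learned function you can only query.

```cpp
#include <vector>

enum class LightState { Red, Yellow, Green };

// v11 style: explicit, hand-written planning rules.
double plannedSpeed(LightState light, double currentSpeed) {
    if (light == LightState::Red)    return 0.0;                // "if red light, stop"
    if (light == LightState::Yellow) return currentSpeed * 0.5; // slow down
    return currentSpeed;                                        // green: carry on
}

// v12+ style: one learned function from raw pixels to controls.
struct Controls {
    double steering;  // wheel angle
    double accel;     // throttle / brake
};

struct PolicyNet {
    // Weights are learned from fleet video, not written by engineers;
    // no explicit "traffic light" rule exists anywhere in the code.
    Controls forward(const std::vector<float>& cameraPixels) const {
        (void)cameraPixels;   // placeholder: real inference would run a neural network
        return {0.0, 0.0};
    }
};
```

The debugging problem in point 3 falls out directly: in the first style you can point to the exact line that stopped the car; in the second, the "decision" is distributed across millions of learned weights, with no single rule to inspect.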
By Morgrain