AiShed

S1E2 AI and Safety-Critical Systems: Lessons from Therac-25



In this episode of AiShed, Alessandro and Giovanni explore why robust systems engineering remains essential in the age of machine learning and agentic AI. Through a technical yet conversational exchange, they revisit the Therac-25 accidents as a cautionary tale of how inadequate hazard analysis, missing interlocks, and reused software can lead to catastrophic failures. The discussion highlights that similar risks now arise with modern autonomous and learning-based systems.

They explain that the biggest change introduced by machine learning is not the algorithms themselves, but the data management process, dataset governance, and the impact of stochastic behaviour on certification. Current aerospace and safety-critical standards cannot yet certify learning systems because such systems adapt their behaviour without explicit programming, making traceability and deterministic verification difficult.

The episode connects this to broader system integrity concerns: command-and-control links must be highly reliable, secure, and resistant to jamming or spoofing. Drawing parallels with fly-by-wire systems, Giovanni stresses that communication integrity, safety-risk mitigation, and a solid architectural “required base” are mandatory for high-complexity system development.

Alessandro and Giovanni conclude that agentic AI—systems capable of autonomous reasoning and decision-making—must be engineered within strict constraints: robust data governance, validation loops, safety monitors, and lifecycle discipline. Autonomy does not replace systems engineering; it makes it indispensable. The key message: without structure, autonomy becomes instability, and past failures like Therac-25 must guide the design of tomorrow’s intelligent systems.
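To make the "safety monitor" idea concrete, here is a minimal sketch of a software interlock placed between an agentic planner and an actuator. All names and limits (Command, MAX_DOSE, the mode strings) are illustrative assumptions, not details from the episode; the point is that hard limits are enforced independently of whatever the upstream learned component proposes.

```python
# Hypothetical sketch of a safety monitor (software interlock).
# It validates every command against hard limits before execution,
# echoing the Therac-25 lesson that interlocks must not be omitted
# just because upstream software is trusted.

from dataclasses import dataclass

@dataclass
class Command:
    dose: float  # requested beam dose, arbitrary units (illustrative)
    mode: str    # "electron" or "xray" (illustrative)

# Hard per-mode limits, assumed for this sketch
MAX_DOSE = {"electron": 10.0, "xray": 2.0}

class InterlockError(Exception):
    """Raised when a command violates a hard safety limit."""

def safety_monitor(cmd: Command) -> Command:
    """Reject any command outside hard limits, regardless of what the
    (possibly learned) planner proposed; return it unchanged if safe."""
    if cmd.mode not in MAX_DOSE:
        raise InterlockError(f"unknown mode: {cmd.mode}")
    if cmd.dose <= 0 or cmd.dose > MAX_DOSE[cmd.mode]:
        raise InterlockError(f"dose {cmd.dose} outside limits for {cmd.mode}")
    return cmd  # safe to forward to the actuator
```

The design choice worth noting is that the monitor is deterministic and independently verifiable even when the planner it guards is not, which is exactly the "required base" role the episode assigns to it.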


AiShed, by Alex