AiShed

S1E3 Agentic AI: Beyond Explainability



In this episode, Alessandro and Giovanni explore why explainability is no longer enough for modern AI and agentic systems, and why observability must take center stage. Explainability, they argue, is retrospective (“why the system believes it acted as it did”), whereas observability provides real-time insight into what the system is doing, whether it remains within its safety envelope, and how far it is from violating constraints.

Drawing parallels with fly-by-wire and safety-critical software, they show that autonomy increases—not reduces—the need for instrumentation, logging, monitoring, and traceable reasoning. The conversation emphasizes that trustworthy agentic AI requires continuous telemetry, drift detection, guardrail activation logs, and visibility into planning and sub-goal generation.
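To picture what such instrumentation might look like in practice, here is a minimal Python sketch of per-step agent telemetry with guardrail activation logging. The observe_step function, the SAFETY_ENVELOPE thresholds, and all field names are hypothetical illustrations invented for this summary, not the API of any real framework discussed in the episode.

import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent.telemetry")

# Hypothetical safety envelope: limits the agent must stay within.
SAFETY_ENVELOPE = {"max_tool_calls_per_step": 5, "max_subgoal_depth": 3}

def observe_step(step_id, action, tool_calls, subgoal_depth):
    """Emit one telemetry record per agent step and flag guardrail hits.

    Illustrative sketch only: field names and thresholds are assumptions
    chosen for the example, not part of any production system.
    """
    record = {
        "ts": time.time(),
        "step": step_id,
        "action": action,
        "tool_calls": tool_calls,
        "subgoal_depth": subgoal_depth,
        # Distance to the envelope boundary, so operators can see how
        # close the agent is to a violation before it occurs.
        "headroom": {
            "tool_calls": SAFETY_ENVELOPE["max_tool_calls_per_step"] - tool_calls,
            "subgoal_depth": SAFETY_ENVELOPE["max_subgoal_depth"] - subgoal_depth,
        },
    }
    # A guardrail activation is logged explicitly, never silently swallowed.
    if tool_calls > SAFETY_ENVELOPE["max_tool_calls_per_step"]:
        record["guardrail"] = "tool_call_budget_exceeded"
    if subgoal_depth > SAFETY_ENVELOPE["max_subgoal_depth"]:
        record["guardrail"] = "subgoal_depth_exceeded"
    log.info(json.dumps(record))
    return record

# Example: a step that is still inside the envelope but close to its edge.
observe_step(step_id=7, action="plan_refinement", tool_calls=4, subgoal_depth=2)

The sketch mirrors the hosts' argument: each step emits a structured record showing not only what the agent did, but how much headroom remains before a constraint is violated, so guardrail activations become visible rather than silent.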

To illustrate the risks of this kind of opacity, they recount a striking real-world example: a Boeing 737 automated braking system that behaved “perfectly” according to its internal logic but gave pilots no cues about what it was doing. The opacity led to confusing and unsafe events until engineers added simple cockpit messages. The system didn’t need new logic; it needed to communicate.

The core message of the episode is clear: autonomous systems cannot be trusted unless their behaviour is continuously observable. Explainability is helpful, but observability is essential for safety, certification, and human-machine collaboration.


AiShed, by Alex