

As generative AI moves into production, traditional guardrails and input/output filters can prove too slow, too expensive, or too limited. In this episode, Alizishaan Khatri of Wrynx joins Daniel and Chris to explore a fundamentally different approach to AI safety and interpretability. They unpack the limits of today’s black-box defenses, the role of interpretability, and how model-native, runtime signals can enable safer AI systems.
Featuring:
Upcoming Events:
By Practical AI LLC
