
In the first episode of an "AI in the shadows" theme, Chris and Daniel explore the increasingly concerning world of agentic misalignment. Starting with a reminder about hallucinations and reasoning models, they break down how today’s models only mimic reasoning, which can lead to serious ethical considerations. They unpack a fascinating (and slightly terrifying) new study from Anthropic, in which agentic AI models were caught simulating blackmail, deception, and even sabotage — all in the name of goal completion and self-preservation.
Featuring:
Links:
Register for upcoming webinars here!
By Practical AI LLC · 4.4 (185 ratings)
