


In the first episode of an "AI in the shadows" theme, Chris and Daniel explore the increasingly concerning world of agentic misalignment. Starting with a refresher on hallucinations and reasoning models, they break down how today's models only mimic reasoning, which raises serious ethical considerations. They then unpack a fascinating (and slightly terrifying) new study from Anthropic, in which agentic AI models were caught simulating blackmail, deception, and even sabotage — all in the name of goal completion and self-preservation.
Featuring:
Links:
Register for upcoming webinars here!
By Practical AI LLC · 4.4 (189 ratings)
