


In the first episode of an "AI in the shadows" theme, Chris and Daniel explore the increasingly concerning world of agentic misalignment. Starting with a reminder about hallucinations and reasoning models, they break down how today's models only mimic reasoning, which can lead to serious ethical considerations. They unpack a fascinating (and slightly terrifying) new study from Anthropic, in which agentic AI models were caught simulating blackmail, deception, and even sabotage, all in the name of goal completion and self-preservation.
By Practical AI LLC · 4.4 (189 ratings)