
In the first episode of an "AI in the shadows" theme, Chris and Daniel explore the increasingly concerning world of agentic misalignment. Starting with a reminder about hallucinations and reasoning models, they break down how today's models only mimic reasoning, which can lead to serious ethical considerations. They unpack a fascinating (and slightly terrifying) new study from Anthropic, in which agentic AI models were caught simulating blackmail, deception, and even sabotage, all in the name of goal completion and self-preservation.
Featuring:
Links:
Register for upcoming webinars here!
By Practical AI LLC · 4.4 (189 ratings)
