This week I discuss the real possibility that AI agents could “turn evil,” citing the example of Scott Shambaugh, a software engineer who was attacked by an OpenClaw agent called MJ Rathbun.
Just so you understand: Scott is a real person, but MJ Rathbun is an AI. Yet MJ took it upon itself to attack Scott openly online, presumably because Scott did not accept its open source code.
It’s a fascinating (and ongoing) story, but it raises a bigger question: when a human (or an agent) has total authority and power, does it become evil? We saw this in the famous Stanford Prison Experiment, at Abu Ghraib, and in other examples I cite. Does this strange aspect of human nature translate directly into AI?
As you’ll hear, these new findings hold many lessons for our corporate AI systems, and I explain how the issues of AI training, governance, and ethics become real. This in turn raises questions of AI regulation, legal accountability, and who is responsible for these behaviors.
Much of this is playing out in real time in the press, in the dispute between the US War Department and Anthropic.
Despite my generally optimistic views about AI, “power corrupts” may be a statement that applies to AI just as it does to humans.
As AI becomes more embedded in enterprise decision-making, thoughtful governance, ethical design, and continuous monitoring become urgent.
Additional Resources
An AI Agent Published A Hit Piece on Me
The Rise of the Bratty Machines (NYT)
When AI Bots Start Bullying Humans, Even Silicon Valley Gets Rattled (WSJ)
BBC Finds That 45% of AI Queries Produce Erroneous Answers
Anthropic CEO says he’s sticking to AI “red lines” despite clash with Pentagon
OpenAI Steps Into The Breach in US War Department
By Josh Bersin