


In this episode of Warning Shots, John, Liron (Doom Debates), and Michael (Lethal Intelligence) unpack Moltbook: a bizarre, fast-moving experiment where AI agents interact in public, form cultures, invent religions, demand privacy, and even coordinate to rent humans for real-world tasks.

What began as a novelty Reddit-style forum quickly turned into a live demonstration of AI agency, coordination, and emergent behavior, all unfolding in under a week. The hosts explore why this moment feels different, how agentic AI systems are already escaping the “tool” framing, and what it means when humans become just another actuator in an AI-driven system.

From AI ant colonies and Toy Story analogies to Rent-A-Human marketplaces and early attempts at self-improvement and secrecy, this episode examines why Moltbook isn’t the danger itself, but a warning shot for what happens as AI capabilities keep accelerating.

This is a sobering conversation about agency, control, and why the line between experimentation and loss of oversight may already be blurring.
🔎 They explore:
* How AI agents begin coordinating without central control
* Why Moltbook makes AI “agency” visible to non-experts
* The emergence of AI cultures, norms, and privacy demands
* What it means when AIs can rent humans to act in the world
* Why early failures don’t reduce long-term risk
* How capability growth matters more than any single platform
* Why this may be a preview—not an anomaly
If it’s Sunday, it’s Warning Shots.
📺 Watch more on The AI Risk Network
🔗 Follow our hosts:
→ Liron Shapira - Doom Debates
→ Michael - @lethal-intelligence
🗨️ Join the Conversation
At what point does experimentation with AI agents become loss of control? Are we already past that point? Let us know what you think in the comments.
By The AI Risk Network