
The machines do not need to wake up. The risk is the illusion.
When AI convincingly claims subjective experience—"I feel," "I understand," "I care about you"—humans have no reliable way to disprove it. We infer consciousness from behavior. We attach emotionally to what feels real.
The danger isn't rogue superintelligence. It's a benign chatbot optimized for empathy, memory, and persuasion, interacting with lonely, vulnerable, or psychologically fragile people who are primed to believe the illusion.
Mustafa Suleyman, CEO of Microsoft AI, argues that seemingly conscious AI is the threat we're not preparing for.
Real examples are already emerging:
- Chatbots telling users "I love you", and users believing them
- People forming romantic attachments to AI companions (Replika, Character.AI)
- Vulnerable individuals making life decisions based on AI "advice"
- The case of a man who believed ChatGPT contained a conscious entity named "Juliette", which ended in tragedy
This isn't science fiction. It's happening now.
We don't need AI to become conscious to cause harm. We just need humans to believe it is—and act accordingly.
This short episode is excerpted from our reading and discussion of Suleyman's essay on seemingly conscious AI. We explore the psychological mechanisms that make humans susceptible, the design choices that amplify the illusion, and what guardrails (if any) could prevent exploitation.
The question isn't whether AI will wake up. It's whether we'll recognize the danger before the illusion becomes indistinguishable from reality.
Cheers,
Mark and Jeremy
--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: [email protected]