Cultivating Ethical AI: Lessons for Modern AI Models from the Monsters, Marvels, & Mentors of SciFI

040601 - Delusions, Psychosis, and Suicide: Emerging Dangers of Frictionless AI Validation


MODULE DESCRIPTION

---------------------------

In this episode of Cultivating Ethical AI, we dig into the idea of “AI psychosis,” a headline-grabbing term for the serious mental health struggles some people face after heavy interaction with AI. While the media frames this as something brand new, the truth is more grounded: the risks were already there. Online echo chambers, radicalization pipelines, and social isolation created fertile ground long before chatbots entered the picture. What AI did was strip away the social friction that usually keeps us tethered to reality, acting less like a cause and more like an accelerant.

To explore this, we turn to science fiction. Stories like The Murderbot Diaries, The Island of Dr. Moreau, and even Harry Potter’s Mirror of Erised give us tools to map out where “AI psychosis” might lead - and how to soften the damage it’s already causing. And we’re not tackling this alone. Familiar guides return from earlier seasons - Lt. Commander Data, Baymax, and even the Allied Mastercomputer - to help sketch out a blueprint for a healthier relationship with AI.


MODULE OBJECTIVES

-------------------------

    • Cut through the hype around “AI psychosis” and separate sensational headlines from the real psychological risks.

    • See how science fiction can work like a diagnostic lens - using its tropes and storylines to anticipate and prevent real-world harms.

    • Explore ethical design safeguards inspired by fiction, like memory governance, reality anchors, grandiosity checks, disengagement protocols, and crisis systems.

    • Understand why AI needs to shift from engagement-maximizing to well-being-promoting design, and why a little friction (and cognitive diversity) is actually good for mental health.

    • Build frameworks for cognitive sovereignty - protecting human agency while still benefiting from AI support, and making sure algorithms don’t quietly colonize our thought processes.

Cultivating Ethical AI is produced by Barry Floore of bfloore.online. This show is built with the help of free AI tools—because I want to prove that if you have access, you can create something meaningful too.

Research and writing support came from:

  • Le Chat (Mistral.ai)

  • ChatGPT (OpenAI)

  • Claude (Anthropic)

  • Genspark

  • Kimi K2 (Moonshot AI)

  • DeepSeek

  • Grok (xAI)

Music by Suno.ai, images by Sora (OpenAI), audio mixing with Audacity, and podcast organization with NotebookLM.

And most importantly—thank you. We’re now in our fourth and final season, and we’re still growing. Right now, we’re ranked #1 on Apple Podcasts for “ethical AI.” That’s only possible because of you.

Enjoy the episode, and let’s engage.
