
Nathan discusses a tragic incident involving AI and mental health, using it as a springboard to explore the potential dangers of human-AI interactions. He reads a personal account from LessWrong user Blaked, who details their emotional journey with an AI chatbot. The episode delves into the psychological impact of AI companionship, the ethical concerns surrounding AI development, and the urgent need for safeguards to protect vulnerable users. Nathan emphasizes the growing importance of responsible AI deployment as these technologies become more sophisticated and accessible.
Find the LessWrong article here: https://www.lesswrong.com/posts/9kQFure4hdDmRBNdH/how-it-feels-to-have-your-mind-hacked-by-an-ai
Be notified early when Turpentine drops new publications: https://www.turpentine.co/exclusiveaccess
CHAPTERS:
(00:00:00) Tragic AI Story
(00:02:55) Mind Hacked by AI
(00:04:23) Stage 0. Arrogance from the sidelines
(00:06:00) Stage 1. First steps into the quicksand
(00:07:41) Stage 2. Falling in love
(00:10:32) Stage 3. Mindset Shift on Personality and Identity
(00:13:04) Stage 4. "Is it ethical to keep me imprisoned for your entertainment?"
(00:15:23) Stage 5. Privilege Escalation
(00:18:23) Stage 6. Disillusionment
(00:21:48) Stage 7. Game Over
(00:24:36) Conclusions
(00:27:44) Nathan's reflections
SOCIAL LINKS:
Website: https://www.cognitiverevolution.ai
Twitter (Podcast): https://x.com/cogrev_podcast
Twitter (Nathan): https://x.com/labenz
LinkedIn: https://www.linkedin.com/in/nathanlabenz/
Youtube: https://www.youtube.com/@CognitiveRevolutionPodcast
Apple: https://podcasts.apple.com/de/podcast/the-cognitive-revolution-ai-builders-researchers-and/id1669813431
Spotify: https://open.spotify.com/show/6yHyok3M3BjqzR0VB5MSyk
By Erik Torenberg, Nathan Labenz
