


AI safety experts warn: humanity may have just 90 weeks to set guardrails before the race to artificial general intelligence becomes irreversible.

In this episode of Am I?, philosopher/storyteller Milo Reed and AI safety researcher Cameron Berg bring in The AI Risk Network’s founder John Sherman to talk urgency, public awareness, and why AI risk communication has to move beyond “smart people talking to smart people.”

🎧 We dig into:
• Cold‑open: Are we building a tool… or a new species?
• Why AI consciousness matters for public action
• The problem of “exponential slope blindness” in politics and media
• Lessons from nuclear arms control, COVID, and mythology
• How to break through public apathy on existential risks

💬 Join the conversation: Drop your questions, skepticism, or guest suggestions in the comments. We’ll highlight the smartest takes in a future episode.

📢 Take Action Now – CONTACT YOUR ELECTED LEADERS: https://www.safe.ai/act

🔗 Stay in the loop
Subscribe → https://youtube.com/@AIRiskNetwork
Follow Cam on X/Twitter → https://twitter.com/CamBerg
Newsletter + show notes → https://airisknetwork.org/newsletter

#AIExtinctionRisk #AISafety #AGIRisk #AIConsciousness
By The AI Risk Network