Fast Forward: the Post-Pandemic Innovation Podcast

EP 5: 23 Guidelines for Avoiding an AI Apocalypse

Last month, we held a workshop on the 23 Asilomar AI Principles: a list of guidelines created at the Beneficial AI 2017 conference by some of the most respected leaders and researchers in AI, outlining how AI researchers, scientists, lawmakers, and tech companies should develop AI to ensure its safe, ethical, and beneficial use. In short, guidelines to keep us from creating AI and robots that will destroy humanity.

Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute and our workshop guest speaker, explored the intent, meaning, and implications of these AI guidelines, and whether an existing set of principles can really safeguard humanity from an AI catastrophe.

Ed and Charlie explore this topic further and discuss what happens when ordinary, awesome people (the workshop attendees) create their own guidelines on how NOT to destroy the planet with AI.

RESOURCES MENTIONED:

  • Workshop Event Page
  • 23 Asilomar AI Principles
  • Seth’s Workshop Presentation
  • Workshop Group Exercise
  • Photos from the Workshop
  • Moralities of Everyday Life (by Coursera)
  • Ethical Social Media (by Coursera)
  • Intro to Artificial Intelligence (by Udacity)
  • Become a Robot Programmer in Only 87 Minutes (by Universal Robots)
  • Compelling Science Fiction (scifi website)
  • Imagining Elon Musk's Million-Person Mars Colony

FOLLOW AND REACH OUT TO US AT:

  • Website: tech2025.com/podcast
  • Twitter: @JoinTech2025
  • Email: [email protected]
  • Charlie Oliver: Twitter - @itscomplicated / LinkedIn: linkedin.com/in/charlieoliverny
  • Ed Maguire: Twitter - @eemaguire / LinkedIn: linkedin.com/in/emaguire/
