Last month, we held a workshop on the 23 Asilomar AI Principles -- a list of 23 guidelines created at the Beneficial AI 2017 Conference by some of the most respected leaders and researchers in AI, outlining how AI researchers, scientists, lawmakers, and tech companies should develop AI to ensure its safe, ethical, and beneficial use. In short, guidelines to keep us from creating AI/robots that will destroy humanity.
Our workshop guest speaker, Dr. Seth Baum, Executive Director of the Global Catastrophic Risk Institute, explored the intent, meaning, and implications of these AI guidelines, and whether having a set of existing guidelines can really safeguard humanity from AI catastrophe.
Ed and Charlie explore this topic further and discuss what happens when ordinary, awesome people (the workshop attendees) create their own guidelines on how NOT to destroy the planet with AI.
RESOURCES MENTIONED:
FOLLOW AND REACH OUT TO US AT: