
We explore AI Risk, the notion that computer intelligence will cross the human level, start compounding exponentially, and, if not trammelled, mulch everything we hold dear into paperclips. Why aren't Avi and Bugsby more worried about this?
References
Eliezer Yudkowsky
Public figures who've expressed concern: Elon Musk, Bill Gates, Hillary Clinton
Machine Intelligence Research Institute
OpenAI
Nick Bostrom - Superintelligence
Value alignment problem
Hippocratic Oath in software/civil engineering
Iron Law of Oligarchy
Superintelligence: the idea that eats smart people
Principal-agent problem
SSC AI risk persuasion experiment
EY AI box escape experiment
By Avi and Bugsby