


Today we’re talking about something scarier than finding out your browser history was leaked: the very real, scientific, and not-at-all-exaggerated possibility that superintelligent AI could wipe us off the map faster than a politician deleting tweets.
And before you say “oh, this guy watches too many Terminator movies” … let me tell you that two of the most badass scientists in AI, Eliezer Yudkowsky and Nate Soares, just published a book that basically says: “If anyone builds superintelligent AI with current techniques, game over for everyone.”
Dramatization? Nope. Science fiction? Neither. Should you worry about it while drinking your coffee? Hell yes, and a lot.
We haven't reached the apocalypse... yet. So today we're going to dig into the most uncomfortable and, therefore, tastiest item on the technological menu: the existential risk of artificial intelligence.
By Sergio Sanchez