
Techno-philosopher Eliezer Yudkowsky recently went on Ezra Klein's podcast to argue that if we continue on our path toward superintelligent AI, these machines will destroy humanity. In this episode, Cal responds to Yudkowsky’s argument point by point, concluding with the broader claim that discussions of this style suffer from what he calls “the philosopher’s fallacy” and distract us from the real problems AI is causing right now. He then answers listener questions about AI, responds to listener comments from an earlier AI episode, and ends by discussing Alpha schools, which claim to use AI to double the speed of education.
Below are the questions covered in today's episode (with their timestamps). Get your questions answered by Cal! Here’s the link: bit.ly/3U3sTvo
Video from today’s episode: youtube.com/calnewportmedia
Deep Dive: The Case Against Superintelligence [0:01]
COMMENTS: Cal reads LM comments [1:16:58]
CALL: Clarification on Lincoln Protocol [1:21:36]
CAL REACTS: Are AI-Powered Schools the Future? [1:24:46]
Links:
Buy Cal’s latest book, “Slow Productivity” at calnewport.com/slow
Get a signed copy of Cal’s “Slow Productivity” at peoplesbooktakoma.com/event/cal-newport/
Cal’s monthly book directory: bramses.notion.site/059db2641def4a88988b4d2cee4657ba?
youtube.com/watch?v=2Nn0-kAE5c0
alpha.school/the-program/
astralcodexten.com/i/166959786/part-three-how-alpha-works-part
Thanks to our Sponsors:
byloftie.com (Use code “DEEP20”)
expressvpn.com/deep
shopify.com/deep
vanta.com/deepquestions
Thanks to Jesse Miller for production, Jay Kerstens for the intro music, and Mark Miles for mastering.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
By Cal Newport
4.8 · 1,269 ratings
