This is the extended "director's cut" of a talk delivered at "RatFest 2025" (next year to be "Conjecture Con"). It also serves as a supplement to my "Doom Debates" interview, which can be found here: https://youtu.be/koubXR0YL4A?si=483M6SPOKwbQYmzb
It is simply assumed that some version of "Bayesian reasoning" is how AI will "create" knowledge. This misconception permeates the https://ai-2027.com paper, Bostrom's and Yudkowsky's work on the topic, and that of every other AI "Doomer", as well as, at the other extreme, the so-called "AI-Accelerationists". It reflects a deep misconception about how new explanations are generated, which in turn stems from a misconception about how science works, because almost no one in the field of AI seems to think the *philosophy of* science is even relevant. I explain what has gone wrong:
00:00 Introduction
09:14 The Big Questions and the New Priesthoods
18:40 Nick Bostrom and Superintelligence
25:10 *If Anyone Builds It, Everyone Dies* and Yudkowsky
33:32 Prophecy, Inevitability, Induction and Bayesianism
41:42 Popper, Kuhn, Feyerabend and Lakatos
49:40 AI Researchers Ignore the Philosophy of Science
58:46 A New Test for AGI from Sam Altman and David Deutsch?
1:03:35 Accelerationists, Doomers and “Everyone Dies”
1:10:21 Conclusions
1:15:35 Audience Questions