
This is the extended "director's cut" of a talk delivered at "RatFest 2025" (to be renamed "Conjecture Con" next year). It also serves as a supplement to my "Doom Debates" interview, which can be found here: https://youtu.be/koubXR0YL4A?si=483M6SPOKwbQYmzb
It is simply assumed that some version of "Bayesian reasoning" is how AI will "create" knowledge. This misconception permeates the https://ai-2027.com paper, Bostrom's and Yudkowsky's work on the subject, and that of every other AI "Doomer", as well as, at the other extreme, the so-called "AI-Accelerationists". All of this reflects a deep misconception about how new explanations are generated, which in turn stems from a deep misconception about how science works: almost no one in the field of AI seems to think the *philosophy of* science is even relevant. I explain what has gone wrong:
00:00 Introduction
09:14 The Big Questions and the new Priesthoods
18:40 Nick Bostrom and Superintelligence
25:10 "If Anyone Builds It, Everyone Dies" and Yudkowsky
33:32 Prophecy, Inevitability, Induction and Bayesianism
41:42 Popper, Kuhn, Feyerabend and Lakatos
49:40 AI researchers ignore the philosophy of science
58:46 A new test for AGI from Sam Altman and David Deutsch?
1:03:35 Accelerationists, Doomers and "Everyone dies"
1:10:21 Conclusions
1:15:35 Audience Questions
By Brett Hall
