

This is the extended "director's cut" of a talk delivered at "RatFest 2025" (to be renamed "Conjecture Con" next year). It also serves as a supplement to my "Doom Debates" interview, which can be found here: https://youtu.be/koubXR0YL4A?si=483M6SPOKwbQYmzb
In AI circles it is simply assumed that some version of "Bayesian reasoning" is how AI will "create" knowledge. This misconception permeates the https://ai-2027.com paper, Bostrom's and Yudkowsky's work on the subject, that of every other AI "Doomer", and, at the other extreme, the so-called "AI-Accelerationists" too. All of this reflects a deep misconception about how new explanations are generated, which in turn stems from a misconception about how science works: almost no one in the field of AI seems to think the *philosophy of* science is even relevant. I explain what has gone wrong (a minimal sketch of the Bayesian update rule appears after the chapter list):
00:00 Introduction
09:14 The Big Questions and the New Priesthoods
18:40 Nick Bostrom and Superintelligence
25:10 "If Anyone Builds It, Everyone Dies" and Yudkowsky
33:32 Prophecy, Inevitability, Induction and Bayesianism
41:42 Popper, Kuhn, Feyerabend and Lakatos
49:40 AI Researchers Ignore the Philosophy of Science
58:46 A New Test for AGI from Sam Altman and David Deutsch?
1:03:35 Accelerationists, Doomers and "Everyone Dies"
1:10:21 Conclusions
1:15:35 Audience Questions
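
For context on the term: "Bayesian reasoning" means assigning a prior probability to a hypothesis and mechanically reweighting it as evidence arrives. Here is a minimal, hypothetical Python sketch (not from the talk) of that update rule; note that the hypothesis H must already exist before any updating can begin, which is the point at issue:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return the posterior P(H|E) given the prior P(H) and both likelihoods."""
    p_evidence = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_evidence

credence = 0.5  # prior credence in hypothesis H
for _ in range(3):  # three observations, each twice as likely under H as under ~H
    credence = bayes_update(credence, p_e_given_h=0.8, p_e_given_not_h=0.4)

print(f"posterior after three updates: {credence:.3f}")  # ~0.889
```

The update only redistributes probability over hypotheses already on the table; conjecturing a new explanation in the first place, which the Popperian account centres on, sits outside the formalism.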
By Brett Hall
