
Worldbuilding is critical for understanding the world and how the future could go, but it's also useful for understanding counterfactuals better. When people talk about counterfactuals in AI development, they seem to assume that safety would always have been a focus. That is, there's a thread of thought that blames Yudkowsky and/or Effective Altruists for bootstrapping AI development (1, 2, 3). But I think this misses the actual impact of DeepMind, OpenAI, and the initial safety focus of the key firms: that focus accelerated progress, but it's not all they did.
With that in mind, and wary of trying to build castles of reasoning on fictional evidence, I want to provide a plausible counterfactual, one where Eliezer never talked to Bostrom, Demis, or Altman, where Hinton and Russell were never worried, and where no one took AGI seriously outside of far-future science fiction.
Counterfactual: A [...]
---
Outline:
(01:04) Counterfactual: A Quiet AGI Timeline
(02:04) Pre-2020: APIs Without Press Releases
(03:29) 2021: Language Parroting Systems
(05:15) 2023: The Two Markets
(07:15) 2025: First Bad Fridays
(11:17) 2026: Regulation by Anecdote Meets Scaling
(15:38) 2027: The Plateau That Isn't
(17:20) 2028: The Future
(17:41) Learning from Fictional Evidence?
---
Narrated by TYPE III AUDIO.
By LessWrong