

So here's a post I spent the past two months writing and rewriting. I abandoned this current draft after I found out that my thesis was empirically falsified three years ago by this paper, which provides strong evidence that transformers implement optimization algorithms internally. I'm putting this post up anyway as a cautionary tale about making clever arguments rather than doing empirical research. Oops.
1. Overview
The first time someone hears Eliezer Yudkowsky's argument that AI will probably kill everybody on Earth, it's not uncommon to come away with a certain lingering confusion: what would actually motivate the AI to kill everybody in the first place? It can be quite counterintuitive in light of how friendly modern AIs like ChatGPT appear to be, and Yudkowsky's argument seems to have a bit of trouble changing people's gut feelings on this point.[1] It's possible this confusion is due to the [...]
---
Outline:
(00:33) 1. Overview
(05:28) 2. The details of the evolution analogy
(12:40) 3. Genes are friendly to loops of optimization, but weights are not
The original text contained 10 footnotes which were omitted from this narration.
---
Narrated by TYPE III AUDIO.
By LessWrong
