


So here's a post I spent the past two months writing and rewriting. I abandoned this current draft after I found out that my thesis was empirically falsified three years ago by this paper, which provides strong evidence that transformers implement optimization algorithms internally. I'm putting this post up anyway as a cautionary tale about making clever arguments rather than doing empirical research. Oops.
1. Overview
The first time someone hears Eliezer Yudkowsky's argument that AI will probably kill everybody on Earth, it's not uncommon to come away with a certain lingering confusion: what would actually motivate the AI to kill everybody in the first place? It can be quite counterintuitive in light of how friendly modern AIs like ChatGPT appear to be, and Yudkowsky's argument seems to have a bit of trouble changing people's gut feelings on this point.[1] It's possible this confusion is due to the [...]
---
Outline:
(00:33) 1. Overview
(05:28) 2. The details of the evolution analogy
(12:40) 3. Genes are friendly to loops of optimization, but weights are not
The original text contained 10 footnotes which were omitted from this narration.
---
First published:
Source:
Narrated by TYPE III AUDIO.
By LessWrong
