Crosspost from my blog.
[Epistemic status: !! 🚨 Drama Alert 🚨 !! discoursepoasting, LWslop]
Case 1: You only get six words
In 2024, the MATS team published a post, originally titled "Talent Needs in Technical AI Safety".
I, a hero, made this comment and elaborated in the ensuing comment thread. The content isn't so important here—basically, I was objecting to a certain framing in the post, which tied into a general issue I had with the broader landscape of people nominally working on decreasing AGI X-risk.
Now, I have not actually read this post. (I kinda skimmed it and read parts.) So I don't actually know what's in it. The post's description of itself, from the introduction:
In the winter and spring of 2024, we conducted 31 interviews, ranging in length from 30 to 120 minutes, with key figures in AI safety, including senior researchers, organization leaders, social scientists, strategists, funders, and policy experts. This report synthesizes the key insights from these discussions.
Well, that sounds like a lot of work, and I believe that they definitely did write about several ideas coming from that work. My comment was not about any [...]
---
Outline:
(00:25) Case 1: You only get six words
(02:45) Case 2: Trees may be cool but how should concepts work in general??
(05:38) Case 3: The Bannination
(07:44) The Pattern
(09:12) Conclusion
---
Narrated by TYPE III AUDIO.
---
Images from the article:
Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.