
[Co-written by Mateusz Bagiński and Samuel Buteau (Ishual)]
TL;DR
Many X-risk-concerned people who join AI capabilities labs with the intent to contribute to existential safety think that the labs are currently engaging in a race that is unacceptably likely to lead to human disempowerment and/or extinction, and would prefer an AGI ban[1] over the current path. This post makes the case that such people should speak out publicly[2] against the current AI R&D regime and in favor of an AGI ban[3]. They should explicitly communicate that a saner world would coordinate not to build existentially dangerous intelligences, at least until we know how to do it in a principled, safe way. They could choose to maintain their political capital by not calling the current AI R&D regime insane, or find a way to lean into this valid persona of “we will either cooperate (if enough others cooperate) or win [...]
---
Outline:
(00:16) TL;DR
(02:02) Quotes
(03:22) The default strategy of marginal improvement from within the belly of a beast
(06:59) Noble intention murphyjitsu
(09:35) The need for a better strategy
The original text contained 8 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By LessWrong
