
[Co-written by Mateusz Bagiński and Samuel Buteau (Ishual)]
TL;DR
Many X-risk-concerned people who join AI capabilities labs with the intent to contribute to existential safety think that the labs are currently engaging in a race that is unacceptably likely to lead to human disempowerment and/or extinction, and would prefer an AGI ban[1] over the current path. This post makes the case that such people should speak out publicly[2] against the current AI R&D regime and in favor of an AGI ban[3]. They should explicitly communicate that a saner world would coordinate not to build existentially dangerous intelligences, at least until we know how to do it in a principled, safe way. They could choose to maintain their political capital by not calling the current AI R&D regime insane, or find a way to lean into this valid persona of “we will either cooperate (if enough others cooperate) or win [...]
---
Outline:
(00:16) TL;DR
(02:02) Quotes
(03:22) The default strategy of marginal improvement from within the belly of a beast
(06:59) Noble intention murphyjitsu
(09:35) The need for a better strategy
The original text contained 8 footnotes which were omitted from this narration.
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By LessWrong
