

tldr: I conducted 17 semi-structured interviews with AI safety experts about their big-picture strategic view of the AI safety landscape: how human-level AI will play out, how things might go wrong, and what the AI safety community should be doing. While many respondents held “traditional” views (e.g. that the main threat is misaligned AI takeover), there was more opposition to these standard views than I expected, and the field seems more split on many important questions than someone outside it might infer.
What do AI safety experts believe about the big picture of AI risk? How might things go wrong, what should we do about it, and how have we done so far? Does everybody in AI safety agree on the fundamentals? Which views are consensus, which are contested, and which are fringe? Maybe we could learn this from the literature (as in the MTAIR project), but many [...]
---
Outline:
(02:00) Questions
(03:46) Participants
(06:29) A very brief summary of what people said
(06:33) What will happen?
(07:21) What should we do about it?
(07:52) What mistakes have been made?
(08:34) Limitations
(10:11) Subsequent posts
---
By LessWrong. Narrated by TYPE III AUDIO.
