

Epistemic status: Noticing confusion
There is little discussion on LessWrong regarding AI governance and outreach. Meanwhile, these efforts could buy us time to figure out technical alignment. And even if we figure out technical alignment, we still have to solve crucial governance challenges so that totalitarian lock-in or gradual disempowerment doesn't become the default outcome of deploying aligned AGI.
Here are three reasons why we think we might want to shift far more resources towards governance and outreach:
1. MIRI's shift in strategy
The Machine Intelligence Research Institute (MIRI), traditionally focused on technical alignment research, has pivoted to broader outreach. They write in their 2024 end of year update:
Although we continue to support some AI alignment research efforts, we now believe that absent an international government effort to suspend frontier AI research, an extinction-level catastrophe is extremely likely.
As a consequence, our new focus is [...]
---
Outline:
(00:45) 1. MIRI's shift in strategy
(01:34) 2. Even if we solve technical alignment, Gradual Disempowerment seems to make catastrophe the default outcome
(02:52) 3. We have evidence that the governance naysayers are badly calibrated
(03:33) Conclusion
---
Narrated by TYPE III AUDIO.
