
Epistemic status: Noticing confusion
There is little discussion on LessWrong regarding AI governance and outreach. Meanwhile, these efforts could buy us time to figure out technical alignment. And even if we do figure out technical alignment, we still have to solve crucial governance challenges so that totalitarian lock-in or gradual disempowerment doesn't become the default outcome of deploying aligned AGI.
Here are three reasons why we think we might want to shift far more resources towards governance and outreach:
1. MIRI's shift in strategy
The Machine Intelligence Research Institute (MIRI), traditionally focused on technical alignment research, has pivoted to broader outreach. They write in their 2024 end-of-year update:
Although we continue to support some AI alignment research efforts, we now believe that absent an international government effort to suspend frontier AI research, an extinction-level catastrophe is extremely likely.
As a consequence, our new focus is [...]
---
Outline:
(00:45) 1. MIRI's shift in strategy
(01:34) 2. Even if we solve technical alignment, Gradual Disempowerment seems to make catastrophe the default outcome
(02:52) 3. We have evidence that the governance naysayers are badly calibrated
(03:33) Conclusion
---
First published:
Source:
Narrated by TYPE III AUDIO.