Hi everyone, Zershaaneh here!
Earlier this year, 80,000 Hours published an article explaining the risks of power-seeking AI.
This post includes some context, the summary from the article, and the table of contents with links to each section. (I thought this would be easier to navigate than if we just reposted the full article here!)
Context
This is meant to be a detailed, introductory resource for understanding how advanced AI could disempower humanity and what people can do to stop it.
It replaces our 2022 article on the existential risks from AI, which highlighted misaligned power-seeking as our main concern but didn't focus entirely on it.
The original article made a mostly theoretical case for power-seeking risk, but the new one draws together recent empirical evidence suggesting that AIs might develop goals we wouldn't like, undermine humanity to achieve them, and avoid detection along the way.
What [...]
---
Outline:
(00:35) Context
(01:11) What else is new?
(01:31) Why are we posting this here?
(02:15) Summary (from the article)
(03:09) Our overall view
(03:13) Recommended -- highest priority
---
First published:
Source:
---
Narrated by TYPE III AUDIO.
By EA Forum Team