EA Forum Podcast (All audio)

“New 80,000 Hours problem profile on the risks of power-seeking AI” by Zershaaneh Qureshi, 80000_Hours



Hi everyone, Zershaaneh here!

Earlier this year, 80,000 Hours published an article explaining the risks of power-seeking AI.

This post includes some context, the summary from the article, and the table of contents with links to each section. (I thought this would be easier to navigate than if we just reposted the full article here!)

Context

This is meant to be a detailed, introductory resource for understanding how advanced AI could disempower humanity and what people can do to stop it.

It replaces our 2022 article on the existential risks from AI, which highlighted misaligned power-seeking as our main concern but didn't focus exclusively on it.

The original article made a mostly theoretical case for power-seeking risk. The new one draws together recent empirical evidence suggesting that AIs might develop goals we wouldn't like, undermine humanity to achieve them, and avoid detection along the way.

What [...]

---

Outline:

(00:35) Context

(01:11) What else is new?

(01:31) Why are we posting this here?

(02:15) Summary (from the article)

(03:09) Our overall view

(03:13) Recommended -- highest priority

---

First published: October 28th, 2025

Source: https://forum.effectivealtruism.org/posts/QrP3DAvyS4gTawBHc/new-80-000-hours-problem-profile-on-the-risks-of-power

---

Narrated by TYPE III AUDIO.
