TYPE III AUDIO (All episodes)

"High-level hopes for AI alignment" by Holden Karnofsky


---
client: ea_forum
project_id: curated
feed_id: ai, ai_safety, ai_safety__governance, ai_safety__technical
narrator: not_t3a
qa: not_t3a
---

In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined, and succeeding.

I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.

But while I think misalignment risk is serious and presents major challenges, I don’t agree with sentiments along the lines of “We haven’t figured out how to align an AI, so if transformative AI comes soon, we’re doomed.” Here I’m going to talk about some of my high-level hopes for how we might end up avoiding this risk.

Original article:
https://forum.effectivealtruism.org/posts/rJRw78oihoT5paFGd/high-level-hopes-for-ai-alignment

Narrated by Holden Karnofsky for the Cold Takes blog.
