LessWrong (30+ Karma)

“Nonpartisan AI safety” by Yair Halberstadt



AI alignment is probably the most pressing issue of our time. Unfortunately, it has also become one of the most controversial, with AI accelerationists accusing AI doomers/AI-not-kill-everyoneism-ers of being Luddites who would rather keep humanity shackled to the horse and plow than risk any progress, whilst the doomers in turn accuse the accelerationists of rushing humanity as fast as it can straight off a cliff.

As Robin Hanson likes to point out, trying to change policy on a polarised issue is backbreaking work. But if you can find a way to pull sideways, you can make easy progress with no one pulling the other way.

So can we think of a research program that:

a) will produce critically useful results even if AI isn't dangerous, or the benefits of AI far outweigh the costs.

b) would likely be sufficient to prevent doom if the project is successful and AI does turn out to [...]

---

First published:

February 10th, 2025

Source:

https://www.lesswrong.com/posts/QpaWHYEQomyQTBKw5/nonpartisan-ai-safety

---

Narrated by TYPE III AUDIO.
