Astral Codex Ten Podcast

Why Not Slow AI Progress?



Machine Alignment Monday 8/8/22

https://astralcodexten.substack.com/p/why-not-slow-ai-progress

The Broader Fossil Fuel Community

Imagine if oil companies and environmental activists were both considered part of the broader "fossil fuel community". Exxon and Shell would be "fossil fuel capabilities"; Greenpeace and the Sierra Club would be "fossil fuel safety" - two equally beloved parts of the rich diverse tapestry of fossil fuel-related work. They would all go to the same parties - fossil fuel community parties - and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.

This is how AI safety works now. AI capabilities - the work of researching bigger and better AI - is poorly differentiated from AI safety - the work of preventing AI from becoming dangerous. Two of the biggest AI safety teams are at DeepMind and OpenAI, ie the two biggest AI capabilities companies. Some labs straddle the line between capabilities and safety research.

Probably the people at DeepMind and OpenAI think this makes sense. Building AIs and aligning AIs could be complementary goals, like building airplanes and preventing the airplanes from crashing. It sounds superficially plausible.

But a lot of people in AI safety believe that unaligned AI could end the world, that we don't know how to align AI yet, and that our best chance is to delay superintelligent AI until we do know. Actively working on advancing AI seems like the opposite of that plan.

So maybe (the argument goes) we should take a cue from the environmental activists, and be hostile towards AI companies. Nothing violent or illegal - doing violent illegal things is the best way to lose 100% of your support immediately. But maybe glare a little at your friend who goes into AI capabilities research, instead of getting excited about how cool their new project is. Or agitate for government regulation of AI - either because you trust the government to regulate wisely, or because you at least expect them to come up with burdensome rules that hamstring the industry. While there are salient examples of government regulatory failure, some regulations - like the EU's ban on GMOs or the US restrictions on nuclear power - have effectively stopped their respective industries.


By Jeremiah
