Astral Codex Ten Podcast

Updated Look At Long-Term AI Risks



https://astralcodexten.substack.com/p/updated-look-at-long-term-ai-risks

The last couple of posts here talked about long-term risks from AI, so I thought I'd highlight the results of a new expert survey on exactly what those risks are. There have been a lot of these surveys recently, but this one is a little different.

Starting from the beginning: in 2012-2014, Müller and Bostrom surveyed 550 people with various levels of claim to the title "AI expert" on the future of AI. People in philosophy of AI or other very speculative fields gave numbers around 20% chance of AI causing an "existential catastrophe" (eg human extinction); people in normal technical AI research gave numbers around 7%. In 2016-2017, Grace et al surveyed 1634 experts, who gave a median 5% chance of an extremely catastrophic outcome. Both of these surveys were vulnerable to response bias (eg the least speculative-minded people might think the whole issue was stupid and not even return the survey).

The new paper - Carlier, Clarke, and Schuett (not currently public, sorry, but you can read the summary here) - isn't exactly continuing in this tradition. Instead of surveying all AI experts, it surveys people who work in "AI safety and governance", ie people who are already concerned with AI being potentially dangerous, and who have dedicated their careers to addressing this. As such, they were more concerned on average than the people in previous surveys, and gave a median ~10% chance of AI-related catastrophe (~5% in the next 50 years, rising to ~25% if we don't make a directed effort to prevent it; means were a bit higher than medians). Individual experts' probability estimates ranged from 0.1% to 100% (this is how you know you're doing good futurology).
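One footnote on those numbers: with individual answers spanning 0.1% to 100%, the distribution of estimates has a long right tail, and a long right tail is exactly what pushes a mean above a median. A minimal sketch in Python, using made-up estimates purely for illustration (these are not the actual survey responses):

```python
import statistics

# Hypothetical probability estimates, skewed the way answers to
# questions like this tend to be: most respondents cluster low,
# a few give very high numbers. NOT the real survey data.
estimates = [0.001, 0.02, 0.05, 0.05, 0.10, 0.10, 0.15, 0.30, 0.60, 1.00]

print(f"median: {statistics.median(estimates):.1%}")  # 10.0%
print(f"mean:   {statistics.mean(estimates):.1%}")    # 23.7%
```

A few very high answers drag the mean well above the median even though the typical respondent says something like 10%.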

None of that is really surprising. What's new here is that the survey asked the experts about various ways AI could go wrong, to see which ones concerned them most. Going through each of them in a little more detail:

1. Superintelligence: This is the "classic" scenario that started the field, ably described by people like Nick Bostrom and Eliezer Yudkowsky. AI progress goes from human-level to vastly-above-human-level very quickly, maybe because slightly-above-human-level AIs themselves are speeding it along, or maybe because it turns out that if you can make an IQ 100 AI for $10,000 worth of compute, you can make an IQ 500 AI for $50,000. You end up with one (or a few) completely unexpected superintelligent AIs, which wield far-future technology and use it in unpredictable ways based on untested goal structures.
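To make that second possibility concrete: it amounts to assuming capability scales roughly linearly with compute spending, so going from IQ 100 to IQ 500 costs only 5x as much. A toy sketch, where every number is an illustrative assumption from the scenario rather than a real estimate:

```python
# Toy arithmetic for the "cheap scaling" route to superintelligence:
# assume capability ("IQ") scales linearly with compute cost, per the
# hypothetical above. All numbers are illustrative assumptions.
COST_PER_IQ_POINT = 10_000 / 100  # $10,000 buys IQ 100 => $100/point

def cost_of(iq: int) -> float:
    """Dollar cost of an AI at the given IQ, under linear scaling."""
    return iq * COST_PER_IQ_POINT

print(f"IQ 100 costs ${cost_of(100):,.0f}")  # $10,000
print(f"IQ 500 costs ${cost_of(500):,.0f}")  # $50,000

# The other route: slightly-above-human AIs accelerate AI research
# itself, so progress compounds. Even a modest 10% speedup per
# generation (a made-up figure) reaches a 100x speedup quickly.
speed, generations = 1.0, 0
while speed < 100:
    speed *= 1.1
    generations += 1
print(f"{generations} generations to a 100x research speedup")  # 49
```

Under either assumption, the jump from human-level to vastly-above-human-level takes only a trivial amount of extra money or time, which is the crux of the fast-takeoff story.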

...more


