AXRP - the AI X-risk Research Podcast

10 - AI's Future and Impacts with Katja Grace



In trying to ensure that AI does not cause an existential catastrophe, it's likely important to understand how AI will develop in the future, and why exactly it might or might not cause such a catastrophe. In this episode, I interview Katja Grace, a researcher at AI Impacts, who has surveyed AI researchers about when they expect superhuman AI to be reached, collected data on how rapidly AI tends to progress, and thought about the weak points in arguments that AI could be catastrophic for humanity.

 

Topics we discuss:

 - 00:00:34 - AI Impacts and its research

 - 00:08:59 - How to forecast the future of AI

 - 00:13:33 - Results of surveying AI researchers

 - 00:30:41 - Work related to forecasting AI takeoff speeds

   - 00:31:11 - How long it takes AI to cross the human skill range

   - 00:42:47 - How often technologies have discontinuous progress

   - 00:50:06 - Arguments for and against fast takeoff of AI

 - 01:04:00 - Coherence arguments

 - 01:12:15 - Arguments that AI might cause existential catastrophe, and counter-arguments

   - 01:13:58 - The size of the super-human range of intelligence

   - 01:17:22 - The dangers of agentic AI

   - 01:25:45 - The difficulty of human-compatible goals

   - 01:33:54 - The possibility of AI destroying everything

 - 01:49:42 - The future of AI Impacts

 - 01:52:17 - AI Impacts vs academia

 - 02:00:25 - What AI x-risk researchers do wrong

 - 02:01:43 - How to follow Katja's and AI Impacts' work

 

The transcript: axrp.net/episode/2021/07/23/episode-10-ais-future-and-dangers-katja-grace.html

 

"When Will AI Exceed Human Performance? Evidence from AI Experts": arxiv.org/abs/1705.08807

AI Impacts page of more complete survey results: aiimpacts.org/2016-expert-survey-on-progress-in-ai

Likelihood of discontinuous progress around the development of AGI: aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi

Discontinuous progress investigation: aiimpacts.org/discontinuous-progress-investigation

The range of human intelligence: aiimpacts.org/is-the-range-of-human-intelligence-small


By Daniel Filan

Rating: 4.4 (8 ratings)

