The Nonlinear Library

EA - What do XPT forecasts tell us about AI risk? by Forecasting Research Institute



Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What do XPT forecasts tell us about AI risk?, published by Forecasting Research Institute on July 19, 2023 on The Effective Altruism Forum.
This post was co-authored by the Forecasting Research Institute and Rose Hadshar. Thanks to Josh Rosenberg for managing this work, Zachary Jacobs and Molly Hickman for the underlying data analysis, Coralie Consigny and Bridget Williams for fact-checking and copy-editing, the whole FRI XPT team for all their work on this project, and our external reviewers.
In 2022, the Forecasting Research Institute (FRI) ran the Existential Risk Persuasion Tournament (XPT). From June through October 2022, 169 forecasters, including 80 superforecasters and 89 experts, developed forecasts on various questions related to existential and catastrophic risk. Forecasters moved through a four-stage deliberative process that was designed to incentivize them not only to make accurate predictions but also to provide persuasive rationales that boosted the predictive accuracy of others' forecasts. Forecasters stopped updating their forecasts on 31st October 2022, and are not currently updating on an ongoing basis. FRI plans to run future iterations of the tournament, and open up the questions more broadly for other forecasters.
You can see the overall results of the XPT here.
Some of the questions were related to AI risk. This post:
Sets out the XPT forecasts on AI risk, and puts them in context.
Lays out the arguments given in the XPT for and against these forecasts.
Offers some thoughts on what these forecasts and arguments show us about AI risk.
TL;DR
XPT superforecasters predicted that catastrophic and extinction risk from AI by 2030 is very low (0.01% catastrophic risk and 0.0001% extinction risk).
XPT superforecasters predicted that catastrophic risk from nuclear weapons by 2100 is almost twice as likely as catastrophic risk from AI by 2100 (4% vs 2.13%).
XPT superforecasters predicted that extinction risk from AI by 2050 and 2100 is roughly an order of magnitude larger than extinction risk from nuclear, which in turn is an order of magnitude larger than non-anthropogenic extinction risk (see here for details).
XPT superforecasters more than quadrupled their forecasts for AI extinction risk by 2100 when conditioning on AGI or TAI by 2070 (see here for details).
XPT domain experts predicted far higher AI extinction risk by 2100 than XPT superforecasters did (3% for domain experts vs. 0.38% for superforecasters).
Although XPT superforecasters and experts disagreed substantially about AI risk, both superforecasters and experts still prioritized AI as an area for marginal resource allocation (see here for details).
It's unclear how accurate these forecasts will prove, particularly as superforecasters have not been evaluated on this timeframe before.
The forecasts
In the table below, we present forecasts from the following groups:
Superforecasters: median forecast across superforecasters in the XPT.
Domain experts: median forecasts across all AI experts in the XPT.
(See our discussion of aggregation choices (pp. 20-22) for why we focus on medians.)
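As an illustration, the median aggregation FRI describes can be sketched as follows. The forecast values below are hypothetical placeholders, not actual XPT data; only the method (taking each group's median) comes from the source:

```python
import statistics

# Hypothetical individual forecasts (in percent) from one group
# for a single question; the real XPT responses are not reproduced here.
superforecaster_forecasts = [0.1, 0.3, 0.38, 0.5, 2.0]

# The group's aggregate forecast is the median, which is robust
# to a handful of extreme individual predictions.
aggregate = statistics.median(superforecaster_forecasts)
print(f"{aggregate}%")  # -> 0.38%
```

A median (unlike a mean) is unmoved by the outlying 2.0% forecast here, which is one reason a tournament with a few extreme respondents might prefer it.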
Question | Forecasters | N | 2030 | 2050 | 2100
AI catastrophic risk (>10% of humans die within 5 years) | Superforecasters | 88 | 0.01% | 0.73% | 2.13%
AI catastrophic risk (>10% of humans die within 5 years) | Domain experts | 30 | 0.35% | 5% | 12%
AI extinction risk (human population <5,000) | Superforecasters | 88 | 0.0001% | 0.03% | 0.38%
AI extinction risk (human population <5,000) | Domain experts | 29 | 0.02% | 1.1% | 3%
The forecasts in context
Different methods have been used to estimate AI risk:
Surveying experts of various kinds, e.g. Sandberg and Bostrom, 2008; Grace et al., 2017.
Doing in-depth investigations, e.g. Ord, 2020; Carlsmith, 2021.
The XPT forecasts are distinctive relative to expert surveys in that:
The forecasts were incentivized: for long-run questions, XPT used 'reciprocal scoring' rules to incentivize accurate forecasts (see here for details).
Fore...

The Nonlinear Library, by The Nonlinear Fund