
Authors of linked report: Josh Rosenberg, Ezra Karger, Avital Morris, Molly Hickman, Rose Hadshar, Zachary Jacobs, Philip Tetlock[1]
Today, the Forecasting Research Institute (FRI) released “Roots of Disagreement on AI Risk: Exploring the Potential and Pitfalls of Adversarial Collaboration,” which discusses the results of an adversarial collaboration focused on forecasting risks from AI.
In this post, we provide a brief overview of the methods, findings, and directions for further research. For much more analysis and discussion, see the full report: https://forecastingresearch.org/s/AIcollaboration.pdf
(This report is cross-posted to the EA Forum.)
Abstract. We brought together generalist forecasters and domain experts (n=22) who disagreed about the risk AI poses to humanity in the next century. The “concerned” participants (all of whom were domain experts) predicted a 20% chance of an AI-caused existential catastrophe by 2100, while the “skeptical” group (mainly “superforecasters”) predicted a 0.12% chance. Participants [...]
---
Outline:
(02:16) Extended Executive Summary
(02:47) Methods
(03:56) Results: What drives (and doesn’t drive) disagreement over AI risk
(04:35) Hypothesis #1 - Disagreements about AI risk persist due to lack of engagement among participants, low quality of participants, or because the skeptical and concerned groups did not understand each other's arguments
(05:14) Hypothesis #2 - Disagreements about AI risk are explained by different short-term expectations (e.g. about AI capabilities, AI policy, or other factors that could be observed by 2030)
(07:56) Hypothesis #3 - Disagreements about AI risk are explained by different long-term expectations
(10:38) Hypothesis #4 - These groups have fundamental worldview disagreements that go beyond the discussion about AI
(11:34) Results: Forecasting methodology
(12:18) Broader scientific implications
(13:12) Directions for further research
The original text contained 10 footnotes, which were omitted from this narration.
---
Linkpost URL:
https://forecastingresearch.org/s/AIcollaboration.pdf
Narrated by TYPE III AUDIO.