AI Safety Fundamentals

Overview of How AI Might Exacerbate Long-Running Catastrophic Risks



Developments in AI could exacerbate long-running catastrophic risks, including bioterrorism, disinformation and resulting institutional dysfunction, misuse of concentrated power, nuclear and conventional war, other coordination failures, and unknown risks. This document compiles research on how AI might raise these risks. (Other material in this course discusses more novel risks from AI.) We draw heavily from previous overviews by academics, particularly Dafoe (2020) and Hendrycks et al. (2023).


Source:

https://aisafetyfundamentals.com/governance-blog/overview-of-ai-risk-exacerbation


Narrated for AI Safety Fundamentals by Perrin Walker of TYPE III AUDIO.

---

A podcast by BlueDot Impact.

Learn more on the AI Safety Fundamentals website.

